There has been an option to keep all targets (Packages, Sources, …) compressed
for a while now, but the all-or-nothing approach is a bit limited for
our purposes with additional targets: some of them are very big
(Contents) and rarely used in comparison, so keeping them compressed by
default can make sense, while others remain unpacked.
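For illustration, the old switch is the global Acquire::GzipIndexes option;
a per-target form could then look roughly like the following apt.conf sketch
(the per-target option name is an assumption here, not necessarily what this
change implements – the target documentation is authoritative):

  // old all-or-nothing switch: keep every index compressed in lists/
  Acquire::GzipIndexes "true";
  // assumed per-target form: keep only the rarely used Contents target compressed
  Acquire::IndexTargets::deb::Contents-deb::KeepCompressed "true";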
The most interesting part is perhaps the change to copy: copy is used by the
acquire system as an uncompressor and is hence expected to return the
hashes for the "output", not the input. Now, in the case of keeping a
file compressed, the output is never written to disk but generated in
memory, and we should still validate it, so for compressed files copy is
expected to return the hashes of the uncompressed file. We used to use
the config option to enable on-the-fly decompression in the method, but in
reality copy is never used in a way where it shouldn't decompress a
compressed file to get its hashes, so we can save ourselves the trouble of
sending this information to the method and just do it always.
remove the longtime deprecated vendor{,list} stuff
History suggests that this comes from an earlier apt-secure
implementation, but it never really became a thing: totally unused and
marked as deprecated for "ages" now. Especially as it did nothing even
if it had been used (libapt itself didn't use it at all).
Limits which key(s) can be used to sign a repository. Not immensely useful
from a security perspective all by itself, but if the user has
additional measures in place to confine a repository (like pinning) an
attacker who gets the key for such a repository is limited to its
potential and can't use the key to sign its attacks for another (maybe
less limited) repository… (yes, this is as weak as it sounds, but having
the capability might come in handy for implementing other stuff later).
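A sketch of how such a restriction could look in a sources.list entry,
assuming the option ends up spelled like the signed-by option known from
sources.list(5); keyring path and URI are placeholders:

  deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://deb.example.org/debian stable main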
add sources.list Check-Valid-Until and Valid-Until-{Max,Min} options
These options could be set via configuration before, but the connection
to the actual sources is so strong that they should really be set in the
sources.list instead – especially as this can be done a lot more
specifically there, rather than e.g. disabling Valid-Until for all sources at once.
The Valid-Until-* names are chosen instead of Min/Max-ValidTime as they
seem like better names, and their use in the wild is probably low
enough that this isn't going to confuse anyone if we have two names for
the same thing in different areas.
In the long run, the config options should be removed, but for now
documentation hinting at the new options is good enough as these are the
kind of options you set once across many systems with different apt
versions, so the new way should work everywhere first before we
deprecate the old way.
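An illustrative use of the new entry options (URIs and values are
placeholders): disabling the check entirely for a snapshot-style source whose
Release file is long expired, and capping the validity of another source at
one week (604800 seconds) after its Date field:

  deb [check-valid-until=no] https://snapshot.example.org/debian/20150101T000000Z/ jessie main
  deb [valid-until-max=604800] https://deb.example.org/debian stable main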
indexRecords was used to parse the Release file – mostly the hashes –
while metaIndex deals with downloading the Release file, storing all
indexes coming from this release and … parsing the Release file, but
this time mostly for the other fields.
That wasn't a problem in metaIndex as this was done in the type-specific
subclass, but indexRecords, while allowing the parsing method to be
overridden, expected a specific format by default.
APT doesn't really support different types at the moment, but this is
a violation of the abstraction we have everywhere else and – which is the
actual reason for this merge – options e.g. coming from the sources.list
arrive at metaIndex naturally, which then needs to wrap them up and push them
into indexRecords so the acquire system is told about them (as it doesn't
get to see the metaIndex), but they don't really belong in indexRecords
as that is just for storing data loaded from the Release file… the
result is a complete mess.
I am not saying it is a lot prettier after the merge, but at least
adding new options is now slightly easier and there is just one place
responsible for parsing the Release file. That can't hurt.
detect and error out on conflicting Trusted settings
A specific trust state can be enforced via a sources.list option, but it
affects all entries handled by the same Release file, not just the entry
it was given on, so we enforce acknowledgement of this by requiring the
same value to be (not) set on all such entries.
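As an example, the following two entries are handled by the same Release file
but disagree about the trusted option, which apt now detects and errors out
on; both entries have to carry the same value (or none). The URI is a
placeholder:

  deb [trusted=yes] https://deb.example.org/debian stable main
  deb https://deb.example.org/debian stable contrib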
bring back deb822 sources.list entries as .sources
Having two different formats in the same file is very dirty and causes
external tools to fail hard trying to parse them. It is probably not a
good idea for them to parse them in the first place, but they do and we
shouldn't break them if there is a better way.
So we solve this issue for now by giving our deb822 format a new
filename extension ".sources", which applications without support for it are
likely to ignore, and we can begin gradually moving forward rather than waiting
for the unknown applications to catch up.
Currently, and for the foreseeable future, apt is going to support both
with the same feature set as documented in the manpage, with the
long-term plan of adopting the 'new' format as default, but that is a
long way to go and adoption might be driven more by having an easier time
setting options than by us pushing it explicitly.
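A minimal example of such a file, e.g. /etc/apt/sources.list.d/example.sources,
using the deb822 fields documented in the manpage (the URI is a placeholder):

  Types: deb deb-src
  URIs: https://deb.example.org/debian
  Suites: stable
  Components: main contrib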
We have supported arch= for a while; now we finally add lang= as well and, as a
first simple way of controlling which targets to acquire, also target=.
This asked for a redesign of the internal API for parsing and storing
information about 'deb' and 'deb-src' lines. As this API isn't visible
to the outside, no damage done though.
Besides being a nice cleanup (= it actually does more in fewer lines) it
also provides us with a predictable order of architectures as provided
in the configuration rather than based on string sorting order, so that
now the native architecture is parsed/displayed first. Observable e.g.
in apt-get output.
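Putting the options together, a single entry using all three could look like
this (values are illustrative; which target names are available depends on the
configured targets):

  deb [arch=amd64,i386 lang=en,de target=Packages] https://deb.example.org/debian stable main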
apt manpage is built from xml nowadays like the rest
It used to be a handwritten manpage, but that is gone and this artifact is
the cause for the message:
../../buildlib/manpage.mak:23: target '../../build/docs/apt.de.8' given
more than once in the same rule
[ … repeated for all translations … ]
So let's get rid of it.
The old check is overly complicated nowadays as we have a pretty
defining difference between packages from a Packages file coming
with a Release file (even if the file itself doesn't exist) and
packages coming from the dpkg status file or directly out of *.deb's,
as the latter have no associated Release file.
cleanup Container.erase API to look more like std::containers
C++11 slightly changes the API again to const_iterator, but we are fine
with iterators in the C++03 style for now as long as they look and
behave like the methods of the standard containers.
Doing this disables the implicit copy assignment operator (among others)
which would cause havoc if used on the classes as it would just copy the
pointer, not the data the d-pointer points to. For most of the classes
we don't need a copy assignment operator anyway and in many classes it
was broken before as many contain a pointer of some sort.
Only for our Cacheset Container interfaces we define an explicit copy
assignment operator which could later be implemented to copy the data
from one d-pointer to the other if we need it.
Determine the candidate based on per-version pins, instead of old code
The new implementation assigns each version a pin, instead of assigning
the pin to a package. This enables us to give each version of a package
a different priority.
add d-pointer, virtual destructors and de-inline de/constructors
To have a chance to keep the ABI for a while we need all three to team
up. If one of them is missing we might lose, so ensuring that they are
available is a very tedious but needed task once in a while.
allow ratelimiting progress reporting for testcases
Progress is reported once in a while, which is a bit too unpredictable for
testcases, so we enforce steady progress reporting for them in the hope that
this makes the tests (mostly test-apt-progress-fd) a bit more stable.
condense parallel requests with the same hashes to one
It shouldn't be too common, but sometimes people have multiple mirrors
in the sources or otherwise repositories with the same content. Now that
we can gracefully handle multiple requests to the same URI, we can also
fold multiple requests with the same expected hashes into one. Note that
this isn't actively trying to find opportunities for merging, it just merges
if it happens to encounter the opportunity for it.
This is most obvious in the new testcase actually as it needs to delay
the action to give the acquire system enough time to figure out that
they can be merged.
Again, consistency is the main selling point here, but this way it is now
also easier to explain that some files move through different stages and
lines are hence printed for them multiple times: that is a bit hard to
believe if the number is changing all the time, but easier now that it
stays consistent.
All other methods call it, so they should follow along even if the work
they do afterwards is hardly breathtaking and usually results in a
URIDone pretty soon, but the acquire system tells the individual item
about this via a virtual method call, so even though none of our
existing items contains any critical code in these, maybe one day they
might. Consistency at least once…
Which is also why this has a good side effect: file: and cdrom: requests
now appear in the 'apt-get update' output. Finally – hiding them never made
sense to me. Okay, I guess it made sense before the new hit behavior,
but now that you can actually see the difference in an update it makes
sense to see whether a file: repository changed or not as well.
deal better with acquiring the same URI multiple times
This is an unlikely event for indexes and co, but it can happen quite
easily e.g. for changelogs where you want to get the changelogs for
multiple binary package(version)s which happen to all be built from a
single source.
The interesting part is that the Acquire system actually detected this
already and set the item requesting the URI again to StatDone – except
that this is hardly sufficient: an Item must be Complete=true as well
to be considered truly done and that is only the tip of the ::Done
handling iceberg. So instead of this StatDone hack we allow QItems to be
owned by multiple items and notify all owners about everything now,
so that from the point of view of each item it got the download just for itself.
ensure valid or remove destination file in file method
'file' isn't using the destination file per se, but returns another name
via the "Filename" header. It should still deal with destination files as
they could exist (pkgAcqFile e.g. creates links in that location) and
are potentially bogus.
provide a public interface for acquiring changelogs
Provided is a specialized acquire item which, given a version, can figure
out the correct URI to try by itself and, if it can't, provides an error
message, alongside static methods to get just the URI it would try
to download in case it should only be displayed or similar.
The URI is constructed as follows:
Release files can provide a URI template in the "Changelogs" field;
otherwise we look up a configuration item based on the "Label" or
"Origin" of the Release file to get a (hopefully known) default value
for now. This template should contain the string CHANGEPATH, which is
replaced with the information about the version we want the changelog
for (e.g. main/a/apt/apt_1.1). This middle way was chosen as this path
part is consistent over the three known implementations (+1 defunct),
while the rest of the URI varies widely between them.
The benefit of this construct is that it is now easy to get changelogs
for Debian packages on Ubuntu and vice versa – even now, while the
Changelogs field is present nowhere yet. Strictly better than what
apt-get had before, as it would even fail to get changelogs from
security… Now it will notice that security identifies as Origin: Debian
and pick this setting (assuming again that no Changelogs field exists).
If, on the other hand, security were to ship its changelogs in a different
location, we could set it via the Label option, overruling Origin.
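A worked example of the substitution: with a template of
https://changelogs.example.org/CHANGEPATH/changelog (made-up host) and the
apt 1.1 example from above, CHANGEPATH expands to main/a/apt/apt_1.1 and the
item would try

  https://changelogs.example.org/main/a/apt/apt_1.1/changelog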
Translation-* files are internally handled as PackageFiles which isn't
super nice, but giving them their own struct is a bit overkill so let it
be for the moment. Because of this they always appeared in the policy
output, though, and now that they are properly linked to a ReleaseFile they
even display all the pinning information on them, but they don't contain
any packages which could be pinned… No problem, but useless and
potentially confusing output.
Adding a 'NoPackages' flag which can be set on those files and be used
in applications seems like a simple way to fix this display issue.
This is mainly visible in the policy, so that you can now pin by b= and
let it only affect Packages files of this architecture and hence the
packages coming from them (which do not need to be of this architecture,
but very likely are in a normal repository setup).
Whether you should pin by architecture in this way is a different question…
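As an illustration (the priority value is arbitrary), a preferences stanza
using the new field could look like:

  Package: *
  Pin: release b=i386
  Pin-Priority: 100

which would lower the priority of everything coming from i386 Packages files,
regardless of the architecture of the individual packages listed in them.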
Selecting targets based on the Release they belong to isn't too
unrealistic. In fact, it is assumed to be the most used case, so it is
made the default, especially as this allows us to bundle another thing we
have to be careful with: filenames and only showing targets we have
acquired.
We used to read the Release file for each Packages file and store the
data in the PackageFile struct, even though potentially many Packages
(and Translation-*) files could use the same data. The point of the
exercise isn't the duplicated data, though: having the Release files as
first-class citizens in the Cache allows us to properly track their
state as well as to use the information for files which
aren't in the cache, but where we know to which Release file they
belong (Sources are an example of this).
This modifies the pkgCache structs, especially the PackagesFile struct
which depending on how libapt users access the data in these structs can
mean huge breakage or no visible change. As a single data point:
aptitude seems to be fine with this. Even if there is breakage it is
trivial to fix in a backportable way while avoiding breakage for
everyone would be a huge pain for us.
Note that not all PackageFile structs have a corresponding ReleaseFile.
In particular the dpkg/status file as well as *.deb files have none. As
these only need an Archive property, the Component property takes
over this duty and the ReleaseFile remains zero. This is also the reason
why it isn't needed nor particularly recommended to switch from
PackagesFile to ReleaseFile blindly. Sticking with the former is
usually the better option.
Downloading additional files is only half the job. We still need a way
for external tools to find out where the files they requested for
download ended up, given that we don't want them to choose their own location.
'apt-get files' is our answer to this, showing by default information about
each IndexTarget in a deb822 format, with the ability to filter
the records based on their lines and an option to change the output format.
The command also serves as an example of how to get to this information
via libapt.
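A sketch of the intended usage; the output field names shown here are
assumptions for illustration, the command itself prints one deb822 stanza per
IndexTarget and accepts the filter/format options mentioned above:

  $ apt-get files
  MetaKey: main/binary-amd64/Packages
  URI: https://deb.example.org/debian/dists/stable/main/binary-amd64/Packages
  Filename: /var/lib/apt/lists/deb.example.org_debian_dists_stable_main_binary-amd64_Packages
  …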
Removes a bunch of duplicated code in the deb-specific parts. Especially
the Description part is now handled centrally by IndexTarget instead of
being duplicated to the derivations of IndexFile.
It is a rather strange sight that index items use SiteOnly which strips
the Path, while e.g. deb files are downloaded with NoUserPassword which
does not. Important to note here is that for the file transport Path is
pretty important as there is no Host which would be displayed by Site,
which always resulted in "interesting" unspecific errors for "file:".
Adding a 'middle' ground between the two which does show the Path but
potentially modifies it (it strips a trailing / if present)
solves this "file:" issue, syncs the output and in the end helps to
identify which file is meant exactly in progress output and co, as a
single site can have multiple repositories in different paths.
rename Calculate- to GetIndexTargets and use it as official API
We need a general way to get from a sources.list entry to IndexTargets
and with this change we can move from pkgSourceList over the list of
metaIndexes it includes to the IndexTargets each metaIndex can have.
stop using IndexTarget pointers which are never freed
Creating and passing around a bunch of pointers to IndexTargets (and a
vector of pointers to IndexTargets) is probably done to avoid the
'costly' copy of the container, but we are really not in a time-critical
operation here and move semantics will help us even further in the
future. On the other hand we never do a proper cleanup of these
pointers, which is very dirty, even if the structures aren't that big…
The changes, while touching many items, only affect our own hidden class,
so we can do that without fearing breaking interfaces or anything.
abstract the code to iterate over all targets a bit
We have two places in the code which need to iterate over targets and do
certain things with them. The first one is actually creating these targets
for download and the second instance prepares certain targets for
reading.
configurable acquire targets to download additional files
First pass at making the acquire system capable of downloading files
based on configuration rather than hardcoded entries. It is now possible
to instruct 'deb' and 'deb-src' sources.list lines to download more than
just Packages/Translation-* and Sources files. Details on how to do that
can be found in the included documentation file.
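To sketch the idea, an additional target for Contents files might be declared
roughly like this (the option names and substitution variables are assumptions
modelled on how later apt versions document it; the documentation file shipped
with this change is authoritative):

  Acquire::IndexTargets::deb::Contents {
    MetaKey "$(COMPONENT)/Contents-$(ARCHITECTURE)";
    ShortDescription "Contents-$(ARCHITECTURE)";
    Description "$(RELEASE)/$(COMPONENT) $(ARCHITECTURE) Contents";
  };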
The code requires every index file we download to have a Package field,
but that doesn't hold true for all indexes we might want to download in
the future. Some might not even be deb822 formatted files…
The check was needed as apt used to accept unverifiable files like
Translation-*, but nowadays it requires hashes for these as well. Even
for unsigned repositories we interpret the Release file as binding now,
which means this check isn't triggerable except for repositories which
do not have a Release file at all – something which is highly discouraged!
If we have a file on disk and the hashes are the same in the new Release
file and the old one we have on disk we know that if we ask the server
for the file, we will at best get an IMS hit – at worst the server
doesn't support this and sends us the (unchanged) file and we have to
run all our checks on it again for nothing. So, we can save ourselves
(and the servers) some unneeded requests if we figure this out on our
own.
It's a bit unclean to create an item just to let the item decide that it
can't do anything and let it fail, so instead we let the item creator
decide in all cases if patching should be attempted.
Also pulls a small trick to get the hashes for the current file without
calculating them by looking at the 'old' Release file if we have it.
At the moment we only have hashes for the uncompressed pdiff files, but
via the new '$HASH-Download' field in the .diff/Index hashes can be
provided for the .gz compressed pdiff file, which apt will pick up now
and use to verify the download. Now, we "just" need buy-in from the
creators of repositories…
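For illustration, such a section in a Packages.diff/Index could look roughly
like the existing history/patches sections, just covering the compressed
patches (hashes, sizes and names are placeholders, the exact layout is an
assumption):

  SHA256-Download:
   1fa4a01a… 2318 2015-08-01-1153.32.gz
   9b25fc71… 2001 2015-08-02-1153.47.gz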
The rred parser is very accepting regarding 'invalid' files. Given that
we can't trust the input it might be a bit too relaxed. In any case,
checking for more errors can't hurt given that we support only a very
specific subset of ed commands.
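For reference, the accepted input is plain ed-script patches as produced by
diff --ed, i.e. only line-addressed append/change/delete commands, with the
text for a/c terminated by a line containing a single dot; something along
these lines (content made up):

  31,32c
  Package: foo
  Version: 2.0-1
  .
  17d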
check patch hashes in rred worker instead of in the handler
rred is responsible for unpacking and reading the patch files in one go,
but we currently only have hashes for the uncompressed patch files, so
the handler used to read the entire patch file before dispatching it to the
worker, which would read it again – both with an implicit uncompress.
Worse, while the workers operate in parallel the handler is the central
orchestration unit, so having it busy with work means the workers do
(potentially) nothing.
This means rred is working with 'untrusted' data, which is bad. Yet,
having the unpack in the handler meant that the untrusted uncompress was
done as root which isn't better either. Now, we have it at least
contained in a binary which we can harden a bit better. In the long run,
we want hashes for the compressed patch files, though, to be safe.
Having every item carry its own code to verify the file(s) it handles
is an error-prone process and easy to break, especially if items move
through various stages (download, uncompress, patching, …). With a giant
rework we centralize (most of) the verification to have a better
enforcement rate and (hopefully) less chance for bugs, but it breaks the
ABI big time in exchange – and as we break it anyway, it is broken even
harder.
It shouldn't affect most frontends as they don't deal with the acquire
system at all or implement their own items, but some do and will need to
be patched (might be an opportunity to use apt on-board material).
The theory is simple: Items implement methods to decide if hashes need to
be checked (in this stage) and to return the expected hashes for this
item (in this stage). The verification itself is done in worker message
passing which has the benefit that a hashsum error is now a proper error
for the acquire system rather than a Done() which is later revised to a
Failed().
If we e.g. fail hash verification for Packages.xz it's highly unlikely
that it will be any better with Packages.gz, so we just waste download
bandwidth and time. It also causes us always to fallback to the
uncompressed Packages file for which the error will finally be reported,
which in turn confuses users as the file usually doesn't exist on the
mirrors, so a bug in apt is suspected for even trying it…
Helmut Grohne [Mon, 9 Mar 2015 17:11:10 +0000 (18:11 +0100)]
parse arch-qualified Provides correctly
The underlying problem is that libapt-pkg does not correctly parse these
provides. Internally, it creates a version named "baz:i386" with
architecture amd64. Of course, such a package name is invalid and thus
this version is completely inaccessible. Thus, this bug should not cause
apt to accept a broken situation as valid. Nevertheless, it prevents
using architecture qualified depends.
Michael Vogt [Fri, 22 May 2015 13:28:53 +0000 (15:28 +0200)]
Fix endless loop in apt-get update that can cause disk fillup
The apt http code parses Content-Length and Content-Range. For
both requests the variable "Size" is used and the semantic for
this Size is the total file size. However Content-Length is not
the entire file size for partial file requests. For servers that
send the Content-Range header first and then the Content-Length
header this can lead to clobbering of Size so that it's less than
the real file size. This may lead to a subsequent passing of a
negative number into the CircleBuf which leads to an endless
loop that writes data.
Thanks to Anton Blanchard for the analysis and initial patch.
treat older Release files than we already have as an IMSHit
Valid-Until protects us from long-living downgrade attacks, but not all
repositories have it and an attacker could still use older but still
valid files to downgrade us. While this makes it sound like a security
improvement now, it's a bit theoretical at best as an attacker with the
capabilities to pull this off could just as well always keep us days
(but within the valid period) behind and always knows which state we have,
as we tell him with the If-Modified-Since header. This is also why this
is 'silently' ignored and treated as an IMSHit rather than screamed at
the user, as it can at best be an annoyance for attackers.
An error here, on the other hand, would 'regularly' be encountered by users
through out-of-sync mirrors serving a single run (e.g. behind a load balancer)
or in two consecutive runs, so it would just teach people to ignore it.
That said, most of the code churn is caused by enforcing this additional
requirement. Crisscross from InRelease to Release.gpg is e.g. very
unlikely in practice, but if we ignored it an attacker could
sidestep it this way.
detect Releasefile IMS hits even if the server doesn't
Not all servers we are talking to support If-Modified-Since and some are
not even sending Last-Modified for us, so in an effort to detect such
hits we run a hashsum check on the 'old' compared to the 'new' file; we
get the hashes for the 'new' one for "free" from the methods anyway
and hence just need to calculate the old ones.
This allows us to detect hits even with unsupported servers, which in
turn means we benefit from all the new hit behavior also here.
It isn't used much compared to what the method name suggests, but in the
remaining uses it can't hurt to check more than strictly necessary by
calculating and verifying with all hashes we can compare with rather
than "just" the best known hash.
detect 416 complete file in partial by expected hash
If we have the expected hashes we can use them to check whether the file
we have in partial/, for which we got a 416, is the expected file. We
detected this with same-size before, but not every server sends a good
Content-Range header with a 416 response.
rewrite all TFRewrite instances to use the new pkgTagSection::Write
While it is mostly busywork to rewrite all instances, it actually fixes
bugs as the data storage used by the new method is std::string rather
than a char*, the latter mostly created by c_str() from a std::string
which the caller has to ensure stays in scope – something apt-ftparchive
actually didn't ensure and instead relied on copy-on-write behavior,
which C++11 forbids and which the new default gcc ABI hence doesn't use.
TFRewrite is okay, but it has obscure limitations (256 Tags), even more
obscure bugs (order for renames is defined by the old name) and the
interface is very c-style encouraging bad usage like we do it in
apt-ftparchive passing massive amounts of c_str() from std::string in.
The old-style is marked as deprecated accordingly. The next commit will
fix all places in the apt code to not use the old-style anymore.
In 66c3875df391b1120b43831efcbe88a78569fbfe we worked around/fixed a
problem where the code makes the assumption that the compiler uses a
copy-on-write implementation for std::string. It turns out that for C++11
compatibility gcc >= 5 will stop doing this by default.
dpkg and dak know various field names and order them in their output,
while we have yet another order and have to play catch-up with them as
we are sitting between chairs here and neither order is ideal for us
either.
A little testcase is from now on supposed to help ensure that we do
not deviate too far from the fields dpkg knows and how it orders them.
fix 'Source' to 'Package' rename in apt-ftparchive
This rename with value is ordered by the 'old' name 'Source', but should
be ordered by the new name… by splitting the operation into a delete and a
new field we can easily fix this problem locally for now.
drop incorrect parameter implicitly converted to bool
The helper expects to be told if it should generate messages, not where
these messages should be printed – as it isn't printing such messages,
but puts them in _error. In other methods apt-get uses a helper
specialisation which does also print stuff to a stream, though, so this
is likely a copy&paste error.
remove available file to have same dpkg -l behavior
dpkg -l < 1.16.2 loads the available file and hence sees a package which
later versions do not see, leading to failures on travis-ci.
The different versions also have slightly different messages.
If the priority of a generic pin is 0, it gets a value by strange-looking
rules; if the pin is specific the rules are at least not strange, but
the value 989 is a magic number without any direct meaning… but both
never happen in practice as the parsing skips such entries with a
warning, so there is always a priority != 0 and that code is therefore never
used.
The documentation says this, but the code only agreed while evaluating
pins for specific packages, not generic ones. These needed a pin above
1000 to have the same effect.
The code causing this makes references to a 'second pseudo status file',
but nowhere is it explained what this might stand for and/or what it was,
so we do the only reasonable thing: remove all references and do as
documented.
improve partial/ cleanup in abort and failure cases
Especially pdiff-enhanced downloads have the tendency to fail for
various reasons from which we can recover and even a successful download
used to leave the old unpatched index in partial/.
By adding a new method responsible for making the transaction of an
individual file happen we can add specialisations, especially for abort
cases, to deal with the cleanup.
This also helps in keeping the compressed indexes around if another
index failed instead of keeping the decompressed files, which we
wouldn't pick up in the next call.
remove "first package seen is native package" assumption
The fix for #777760 causes packages of foreign (and the native)
architectures to be created correctly, but invalidates (like the
previously existing, but policy-forbidden, architecture-less packages
we had to support for some upgrade scenarios) the assumption that the
first (and only) package in the cache for a single-architecture system
must be the package for the native architecture (as, where should the
other architectures come from, right? Wrong.).
Depending on the order of parsing sources, more or fewer packages can be
affected by this. The effects are strange (for apt it mostly affects
simulation/debug output, but also apt-mark on these specific packages),
which complicates debugging, but they are relatively harmless if understood,
as most actions do not need direct named access to packages.
The problem is fixed by removing the single-arch special casing in the
paths which had it (Cache.FindPkg), so they use the same code as
multi-arch systems, which use it as a wrapper for Grp.FindPkg.
Note that single-arch system code was using Grp.FindPkg before as well
if a Grp structure was handily available, so we don't introduce new
untested code here: We remove more brittle special cases which are less
tested instead (this was planned to be done for Stretch anyhow).
Note further that the method with the assumption itself isn't fixed. As
it is a private method I opted for declaring it deprecated instead and
removing all its call positions. As it is private no-one can call this
method legally (thanks to how C++ works by default it's still an exported
symbol though) and fixing it basically means reimplementing code we
already have in Grp.FindPkg.
Removing rather than fixing seems hence like a good solution.
a hit on Release files means the indexes will be hits too
If we get an IMSHit for the Transaction-Manager (= the InRelease file, or
its still-supported fallback, the Release + Release.gpg combo) we can
assume that every file we would queue based on this manager but already
have locally is current and hence would get an IMSHit, too. We therefore
save ourselves and the server the trouble and skip the queuing in this case.
Beside speeding up repetitive executions of 'apt-get update' this way, we
also avoid hitting hashsum errors if the indexes are in fact already
updated but the Release file isn't yet, as is the case on well-behaved
mirrors where the Release file is updated last.
The implementation is a bit harder than the theory makes it sound as we
still have to keep reverifying the Release files (e.g. to detect now-expired
ones to avoid an attacker being able to silently keep us stale) and have to
handle cases in which the Release file hits, but some indexes aren't
present (e.g. user added a new foreign architecture).
refactor calculation of final lists/ name from URI
Calculating the final name an item will have after
everything is done and verified successfully is surprisingly complicated:
while they all follow a simple pattern, the URI and where it is
stored vary between the items.
With some (ABI-breaking) redesign we can abstract this similarly to how it
is already done for the partial file location.
Checking Valid-Until on an unsigned Release file doesn't give us any
security brownie points, as an attacker could just change the date, and in
practice repositories with unsigned Release files will very likely not
have a Valid-Until date anyway – but there is symmetry to consider, and the
fact that being unsigned is currently just a warning, while being expired
is a fatal error.
ensure lists/ files have correct permissions after apt-cdrom add
It's a bit unpredictable which permissions and owners we will encounter on
a CD-ROM (or a USB stick, as apt-cdrom is responsible for those too),
so we have to ensure in this codepath as well that everything is nicely
set up without waiting for an 'apt-get update' to fix up the (potential)
mess.
We do this in HTTP already to give the CPU some exercise while the disk
is heavily spinning (or flashing?) to store the data, avoiding the need
to reread the entire file again later on to calculate the hashes – which
happens outside of the eyes of progress reporting, so you might end up
with a bunch of https workers 'stuck' at 100% while they were busy
calculating hashes.
This is a bummer for everyone using apt as a connection speed test, as the
https method works slower now (not really, it just isn't reporting done
too early anymore).
Methods get told which hashes are expected by the acquire system, which
means we can use this list to restrict what we calculate in the methods
as any extra we are calculating is wasted effort as we can't compare it
with anything anyway.
Adding support for a new hash algorithm is therefore 'free' now, and if an
algorithm is no longer provided in a repository for a file, we
automatically stop calculating it.
In practice this results in a speed-up in Debian as we don't have SHA512
here (so far), so we practically stop calculating it.
Servers which advertise that they close the connection get the 'Closes'
encoding flag, but this conflicts with servers which respond with a
transfer-encoding (e.g. chunked encoding) as it is saved in the same flag.
We have a better flag for the keep-alive (or not) of the connection
anyway, so we check that instead of the encoding.
In practice this is not much of a problem as the real servers we talk to are
HTTP/1.1 servers (with keep-alive) and there isn't much point in doing
chunked encoding if you are going to close anyway, but our simple
test server stumbles over this if pressed and it's a bit cleaner, too.
send Alt-* info for uncompressed based on any compressions
The file method sends information about the uncompressed file, if it can
find it, as well as for the compressed file. This was done only for gzip so far, but
we support more compression types. That this information isn't used a
lot is a different story.