eliminate dead file-provides code in cache generation
The code was never active in production; it just sits there collecting
dust and, given that it is never tested, probably doesn't even work
anymore the way it was supposed to (whatever that was exactly in the
first place). So just remove it before I have to "fix" it again next
time.
eliminate duplicated code in pkgIndexFile subclasses
Trade duplicated code for a bunch of new virtuals, so it is actually
visible how the different indexes behave, cleaning up the interface at
large in the process.
Sources are usually defined in sources.list (and co) and are pretty
stable, but once in a while a frontend might want to add an additional
"source", like a local .deb file to install this package (no support for
'real' sources being added this way, as that is a multi-step process).
We had a hack in place to allow apt-get and apt to pull this off for a
short while now, but other frontends are left in the cold by this, and
the code for it looks dirty with FIXMEs plastering it and on top of that
has some problems (like including these 'volatile' sources in the
srcpkgcache.bin file).
So the biggest part of this commit is actually the rewrite of the cache
generation, as it is now potentially a three-step process. The biggest
problem with adding support now, though, is that this makes a bunch of
classes public which were previously mostly unusable by externals and
therefore hidden, so a bit of further tuning on this now-public API is
in order…
just-in-time creation for (explicit) negative deps
Now that we deal with provides in a more dynamic fashion, the last
remaining problem is explicit dependencies like 'Conflicts: foo' which
have to apply to all architectures, but creating them all at the same
time requires us to know all architectures ending up in the cache,
which need not be the same set as all foreign architectures.
The effect is visible already now, though, as this prevents the creation
of a bunch of virtual packages for arch:all packages and as such also
many dependencies; it is just not very visible if you don't look at the
stats…
Expecting the worst is easy to code, but has its disadvantages, e.g.
by creating package structures which would otherwise never have
existed. By creating the provides instead at the time a package
structure is added, we are well prepared for the introduction of partial
architectures, massive amounts of M-A:foreign (and :allowed) and co, as
far as provides are concerned at least. We have something relatively
similar for dependencies already.
Many tests are added for both M-A states, and the code is cleaned up to
properly support implicit provides for foreign architectures and
architectures we 'just' happen to parse.
Before MultiArch, implicits weren't a thing, so they were hidden by
default by definition. Adding them for MultiArch solved many problems,
but having no reliable way of detecting whether a dependency (or
provides) is implicit or not causes problems every time we want to
output dependencies without confusing our observers with unneeded
implementation details.
The really noteworthy point here is actually that we now keep a better
record of how a dependency came to be, so that we can later reason about
it more easily, but that is hidden so deep down in the library internals
that this change is more about the problems it solves than the change
itself.
We store very few flags in the cache, so reserving storage space for 8
is enough for all of them and still leaves a few unused bits remaining
for future extensions, without wasting bytes for nothing.
We aren't, and will not be, really compatible again with the previous
stable ABI, so let's drop these markers (which never made it into a
released version) for good, as they have outlived their intent already.
Cache generation needs a way of quickly iterating over the unique
portion of the dependencies to be able to share them. By linking them
together we can reduce the speed penalty (~ 80%) with only a small
reduction in saved size (~ 20%).
Having the dependency data separated from the link between
version/package and the dependency allows us to work on sharing the
dependency data a bit, as it turns out that many dependencies are in
fact duplicates. How many are duplicates varies heavily with the sources
configured, but for a single Debian release the ballpark is 2 duplicates
for each dependency already (e.g. libc6 counts 18410 dependencies, but
only 45 unique). Add more releases and the duplicate count only rises,
reaching ~6 for 3 releases, and that for each architecture a user has
configured, which given the sheer number of dependencies amounts to MBs
of duplication.
We can cut down on this number, but pay a heavy price for it: in my
many releases (3) + architectures (3) test we have a 10% (~ 0.5 sec)
increase in cache creation time, but also 10% less cache size (~ 10 MB).
Further work is needed to reap the full benefits from this though, so
this is just the start.
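The underlying idea is plain interning: before storing new dependency
data, look for an identical record and point to it instead. A minimal
sketch of the technique (names and types are illustrative, not apt's
cache structures):

    #include <string>
    #include <unordered_map>
    #include <vector>

    // One shared record of dependency data (target name, version, op);
    // the per-version links only store an index into this pool instead
    // of carrying their own copy of the data.
    struct DepData { std::string Target, Version; unsigned char Op; };

    struct DepPool {
       std::vector<DepData> Data;                        // unique records
       std::unordered_map<std::string, unsigned> Lookup; // key -> index

       unsigned Intern(DepData const &d) {
          std::string const key = d.Target + '\0' + d.Version + char(d.Op);
          auto const ins = Lookup.insert({key, (unsigned)Data.size()});
          if (ins.second)               // first time we see this data
             Data.push_back(d);
          return ins.first->second;     // index shared by all duplicates
       }
    };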
DepCache functions are called a lot, so if we can squeeze some drops out
of them for free we should do so. This also takes the opportunity to
remove some whitespace errors from these functions.
With a bit of trickery and the curiously recurring template pattern we
can free ourselves from our use of virtual in the iterators, where it is
unneeded bloat as we never deal with pointers to iterators and similar
such.
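For illustration, a minimal sketch of the pattern (names are made up,
not apt's iterator API): the base template casts itself to the derived
type at compile time, so no vtable is needed:

    // Base template: resolves the "hook" statically via the derived type.
    template <typename Str, typename Itr>
    class Iterator {
    public:
       Str const *OwnerPointer() const {
          return static_cast<Itr const *>(this)->OwnerPointer();
       }
       bool end() const { return OwnerPointer() == nullptr; }
    };

    struct Package { int id; };

    // Derived iterator: supplies the implementation without 'virtual'.
    class PkgIterator : public Iterator<Package, PkgIterator> {
       Package const *Pkg;
    public:
       explicit PkgIterator(Package const *P = nullptr) : Pkg(P) {}
       Package const *OwnerPointer() const { return Pkg; }
    };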
show or-groups in not-installed recommends and suggests lists
Further abstracting our new ShowList allows us to use it for containers
of strings as well, giving us the option to implement an or-group
display for the recommends and suggests lists, which is a nice trick
given that it also helps with migrating the last remaining other cases
of the old ShowList.
Housekeeping. This used to be embedded in apt-get directly, then moved
into our (then new) private lib, and now header and code get a proper
separation.
apt-get displays various lists of package names, which until now it
built as a string before passing it to ShowList, which inserted
linebreaks at fitting points and showed a title if needed, but never
really understood what it was working with. With the help of C++11 the
new generic version not only knows what it works with, but generates the
list on the fly rather than asking for it and potentially discarding
parts of the input (= the non-default verbose display). It also doubles
as a test for how usable the CacheSets are with C++11.
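A rough sketch of what such a generic, on-the-fly ShowList can look like
(the signature and the 78-column width are assumptions, not apt's actual
code):

    #include <functional>
    #include <iostream>
    #include <string>

    // Generic list display: elements are filtered and named via
    // callbacks, so the list is generated on the fly instead of being
    // pre-built as one big string.
    template <typename Container>
    bool ShowList(std::ostream &out, std::string const &Title,
                  Container const &C,
                  std::function<bool(typename Container::const_reference)> Show,
                  std::function<std::string(typename Container::const_reference)> Name)
    {
       bool shown = false;
       size_t lineLength = 0;
       for (auto const &elem : C)
       {
          if (Show(elem) == false)
             continue;                  // skipped, never stringified
          if (shown == false)
          {
             out << Title << '\n';
             shown = true;
          }
          std::string const name = Name(elem);
          if (lineLength == 0 || lineLength + name.length() + 1 > 78)
          {
             out << (lineLength == 0 ? "" : "\n") << "  ";
             lineLength = 2;
          }
          out << name << ' ';
          lineLength += name.length() + 1;
       }
       if (shown)
          out << '\n';
       return shown;
    }

A caller would then pass e.g. a predicate selecting the not-installed
packages and a lambda producing the (possibly verbose) display name, so
discarded entries are never even built.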
The library(s) make an API break anyhow, so let's ensure we use gcc5 for
this break and enable c++11 as the standard, as gcc6 will use it as
default and should provide some API parts for c++11 – beside that, it
can't hurt to use c++11 itself. We just have to keep our headers c++03
compatible to not enforce a standard bump in our reverse dependencies.
'files' is a bit too generic as a name for a command usually only used
programmatically (if at all) by developers, so instead of "wasting" this
generic name we use "indextargets", which is actually the name of the
data structure the displayed data is stored in.
Along with this rename the config options are renamed accordingly.
Helmut Grohne rightly suggests on IRC that there isn't much point in
getting the locks for root either, as the output isn't in any way more
authoritative than without locking, given that after this call the lock
is freed and any action can sneak in before we make the next call. So we
exchange no benefit for the disadvantage of blocking real calls. This
can be especially confusing with the aliases --no-act and --just-print.
We do not print for root the notice we print for non-root users though,
as non-root users can be confronted with a lot more difference via
unreadable files.
Redirectors like httpredir.debian.org orchestrate the download from
multiple (hopefully close) mirrors while having only a single central
sources.list entry, by using redirects. This has the effect that the
progress report always shows the source it started with, not the mirror
it ends up fetching from, which is especially problematic for error
reporting: a report of a "Hashsum mismatch" for the redirector URI is
next to useless, as nobody knows which URI it was really fetched from
(regardless of it coming from a user or via the report script) based on
this output alone. You would need to enable debug output and hope for
the same situation to arise again…
We hence reuse the UsedMirror field of the mirror:// method and detect
redirects which change the site, declaring this new site as the
UsedMirror (and adapting the description).
The disadvantage is that there is no obvious mapping anymore (it is
relatively easy to guess with some experience though) from progress
lines to sources.list lines, so error messages need to take care to use
the Target description (rather than the current Item description) if
they want to refer to the sources.list entry.
skip .diff/Index acquire if Release file was a hit
QueueURI already skips the acquire of the real file in such a case, but
it can't detect pdiffs this way. Those already have a handling for the
case that the file wasn't changed between two Release files, so we just
add another check for a Release file hit here, too.
C++11 adds the 'override' specifier to mark that a method is overriding
a base class method and to error out if not. We hide it in the
APT_OVERRIDE macro to ensure that we keep compiling in pre-c++11
standards.
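The macro boils down to a guard on the language standard, along these
lines (a sketch of the idea; the actual definition lives in apt's macro
header):

    // expand to 'override' only if the compiler speaks C++11
    #if __cplusplus >= 201103L
        #define APT_OVERRIDE override
    #else
        #define APT_OVERRIDE /* nothing for pre-c++11 compilers */
    #endif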
By further abstracting the iterator templates we can wrap the reverse
iterators of the wrapped containers and share code, in a way that
iterator creation is now more template intensive, but shorter in code.
The "problem" is mostly in the erase() definitions, as they slightly
conflict and in pre-c++11 are not uniform across the different
containers. By differentiating based on the standard we can provide
erase() methods for both standards – and as the method is in a template
and inline, we don't need to worry about symbols here.
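A small sketch of that standard-based differentiation (the wrapper is
illustrative, not apt's actual code):

    #include <vector>

    // Inline, templated wrapper: no symbols are emitted, so providing a
    // different erase() signature per standard doesn't affect the ABI.
    template <typename T>
    class Wrapper {
       std::vector<T> _cont;
    public:
       typedef typename std::vector<T>::iterator iterator;
    #if __cplusplus >= 201103L
       typedef typename std::vector<T>::const_iterator const_iterator;
       iterator erase(const_iterator pos) { return _cont.erase(pos); }
    #else
       iterator erase(iterator pos) { return _cont.erase(pos); }
    #endif
    };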
The rest is adding wrappings for the new forward_list and unordered_set
containers and correcting our iterators to use the same trait as the
iterator they are wrapping, instead of having all of them be simple
forward iterators. This allows the use of specialized algorithms which
are picked based on iterator_traits, and implementing them all is simple
to do, as we can declare all methods easily and only if they are called
will they generate errors (if the underlying iterator doesn't support
them).
implement Signed-By without using gpg for verification
The previous commit returns to the possibility of using just gpgv for
verification purposes. There is one problem though: we can't enforce a
specific keyid without using gpg, but our acquire method can, as it
parses gpgv output anyway, so it can detect good signatures from
unexpected keys and treat them as signatures from unknown keys instead.
If all keyrings are simple keyrings we can merge the keyrings with cat
rather than doing a detour over gpg --export | --import (see #790665),
which means 'apt-key verify' can do without gpg and just use gpgv as
before the merging change.
We declare this gpgv usage explicitly in the dependencies now. This
isn't a new dependency, as gnupg as well as debian-archive-keyring
depend on it and we used it before unconditionally, just that we didn't
declare it.
The handling of the merged keyring needs to be slightly different, as
our merged keyring can end up containing the same key multiple times,
but at least currently gpg removes only the first occurrence with
--delete-keys, so we move the handling to "if one is gone, all are
gone" rather than an (implicit) quid pro quo or even no effect.
The output of gpg changes slightly in 2.1, which breaks the testcase,
but the real problem is that this branch introduces a new default
keyring format (called keybox), and mixing it with simple keyrings (the
previous default format) has various problems, like failing in the
keybox to keyring import (#790665) or [older] gpgv versions not being
able to deal with keyboxes (and newer versions as well currently:
https://bugs.gnupg.org/gnupg/issue2025).
We fix this by being a bit more careful about who creates keyrings
(aka: we do it, or we take a simple keyring as base) to ensure we always
have a keyring instead of a keybox. This way any version combination of
gpgv/gpgv2 and gnupg/gnupg2 works without explicit version checks, and
we can use the same code for all of them.
It is sometimes handy to know how exactly apt-key called gpg, so a pair
of options to be able to see this if wanted is added. Two are needed, as
some commands' output is redirected to /dev/null, while for others
stdout is piped into another gpg call, so in both cases you wouldn't see
everything, and hence you can choose.
There has been an option to keep all targets (Packages, Sources, …)
compressed for a while now, but the all-or-nothing approach is a bit
limited for our purposes with additional targets, as some of them are
very big (Contents) and rarely used in comparison, so keeping them
compressed by default can make sense, while others are still unpacked.
Most interesting is maybe the copy change: copy is used by the acquire
system as an uncompressor, and it is hence expected that it returns the
hashes for the "output", not the input. Now, in the case of keeping a
file compressed, the output is never written to disk, but generated in
memory, and we should still validate it, so for compressed files copy is
expected to return the hashes of the uncompressed file. We used to use
the config option to enable on-the-fly decompression in the method, but
in reality copy is never used in a way where it shouldn't decompress a
compressed file to get its hashes, so we can save ourselves the trouble
of sending this information to the method and just do it always.
remove the longtime deprecated vendor{,list} stuff
History suggests that this comes from an earlier apt-secure
implementation, but it never really became a thing: totally unused and
marked as deprecated for "ages" now. Especially as it did nothing even
if it had been used (libapt itself didn't use it at all).
Limits which key(s) can be used to sign a repository. Not immensely
useful from a security perspective all by itself, but if the user has
additional measures in place to confine a repository (like pinning), an
attacker who gets the key for such a repository is limited to its
potential and can't use the key to sign attacks for another (maybe less
limited) repository… (yes, this is as weak as it sounds, but having
the capability might come in handy for implementing other stuff later).
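In a one-line sources.list entry this looks like the following (path
and URI invented):

    deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://deb.example.org/debian stable main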
add sources.list Check-Valid-Until and Valid-Until-{Max,Min} options
These options could be set via configuration before, but the connection
to the actual sources is so strong that they should really be set in the
sources.list instead – especially as this can be done a lot more
specifically than e.g. disabling Valid-Until for all sources at once.
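For example, relaxing the check for a snapshot-style source directly at
the entry (URI illustrative):

    deb [check-valid-until=no] http://snapshot.debian.org/archive/debian/20150601T000000Z/ unstable main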
The Valid-Until-* names are chosen instead of Min/Max-ValidTime as this
seems like a better name, and their use in the wild is probably low
enough that it isn't going to confuse anyone that we have two names for
the same thing in different areas.
In the long run the config options should be removed, but for now
documentation hinting at the new options is good enough, as these are
the kind of options you set once across many systems with different apt
versions, so the new way should work everywhere first before we
deprecate the old way.
indexRecords was used to parse the Release file – mostly the hashes –
while metaIndex deals with downloading the Release file, storing all
indexes coming from this release and … parsing the Release file, but
this time mostly for the other fields.
That wasn't a problem in metaIndex, as this was done in the
type-specific subclass, but indexRecords, while allowing the parsing
method to be overridden, did expect a specific format by default.
APT isn't really supporting different types at the moment, but this is a
violation of the abstraction we have everywhere else, and, which is the
actual reason for this merge: options e.g. coming from the sources.list
arrive at metaIndex naturally, which needs to wrap them up and bring
them into indexRecords so the acquire system is told about them (as it
doesn't get to see the metaIndex), but they don't really belong in
indexRecords either, as this is just for storing data loaded from the
Release file… the result is a complete mess.
I am not saying it is a lot prettier after the merge, but at least
adding new options is now slightly easier and there is just one place
responsible for parsing the Release file. That can't hurt.
detect and error out on conflicting Trusted settings
A specific trust state can be enforced via a sources.list option, but it
affects all entries handled by the same Release file, not just the entry
it was given on, so we enforce acknowledgement of this by requiring the
same value to be (not) set on all such entries.
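Illustrated with invented one-line entries: both are served by the same
Release file, so this combination is now rejected:

    deb [trusted=yes] http://deb.example.org/debian stable main
    deb http://deb.example.org/debian stable contrib
    # error: 'trusted' is set on one entry but not the other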
bring back deb822 sources.list entries as .sources
Having two different formats in the same file is very dirty and causes
external tools to fail hard trying to parse them. It is probably not a
good idea for them to parse the file in the first place, but they do,
and we shouldn't break them if there is a better way.
So we solve this issue for now by giving our deb822 format a new
filename extension ".sources", which unsupporting applications are
likely to ignore, and can begin gradually moving forward rather than
waiting for the unknown applications to catch up.
Currently, and for the foreseeable future, apt is going to support both
with the same feature set as documented in the manpage, with the
long-term plan of adopting the 'new' format as default, but that is a
long way to go and might get going more from having an easier time
setting options than from us pushing it explicitly.
We have supported arch= for a while; now we finally add lang= as well,
and as a first simple way of controlling which targets to acquire also
target=. This asked for a redesign of the internal API for parsing and
storing information about 'deb' and 'deb-src' lines, but as this API
isn't visible to the outside, no damage done.
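A one-line entry using all three options could look like this (values
invented):

    deb [arch=amd64,i386 lang=en,de target=Packages] http://deb.example.org/debian stable main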
Beside being a nice cleanup (= it actually does more in fewer lines) it
also provides us with a predictable order of architectures, as provided
in the configuration rather than based on string sorting order, so that
now the native architecture is parsed/displayed first. Observable e.g.
in apt-get output.
apt manpage is built from xml nowadays like the rest
It used to be a handwritten manpage, but that is gone and this artifact
is the cause for the message:
../../buildlib/manpage.mak:23: target '../../build/docs/apt.de.8' given
more than once in the same rule
[ … repeated for all translations … ]
So let's get rid of it.
The old check is overly complicated nowadays, as we have a pretty
defining difference between packages from a Packages file coming with a
Release file (even if the Release file itself doesn't exist) and
packages coming from the dpkg status file or directly out of *.deb
files, as these have no associated Release file.
cleanup Container.erase API to look more like std::containers
C++11 slightly changes the API again to const_iterator, but we are fine
with iterators in the C++03 style for now, as long as they look and
behave equally to the methods of the standard containers.
Doing this disables the implicit copy assignment operator (among
others), which would cause havoc if used on these classes, as it would
just copy the pointer, not the data the d-pointer points to. For most of
the classes we don't need a copy assignment operator anyway, and in many
classes it was broken before, as many contain a pointer of some sort.
Only for our CacheSet container interfaces do we define an explicit copy
assignment operator, which could later be implemented to copy the data
from one d-pointer to the other if we need it.
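The resulting header pattern, sketched (illustrative, not apt's actual
classes; the definitions live in the .cc file):

    // With a d-pointer, the compiler-generated copy would only copy the
    // pointer; both objects would then share (and double-free) the data.
    class Example {
       void *d;                              // opaque extension point
    public:
       Example();
       virtual ~Example();                   // de-inlined, defined in the .cc
    private:
       Example &operator=(Example const &);  // disabled: shallow copy is wrong
    };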
Determine the candidate based on per-version pins, instead of old code
The new implementation assigns each version a pin, instead of assigning
the pin to a package. This enables us to give each version of a package
a different priority.
add d-pointer, virtual destructors and de-inline de/constructors
To have a chance to keep the ABI for a while we need all three to team
up. With one of them missing we might lose, so ensuring that they are
available is a very tedious but needed task once in a while.
allow ratelimiting progress reporting for testcases
Progress is reported once in a while, which is a bit too unpredictable
for testcases, so we enforce a steady progress for them in the hope that
this makes the tests (mostly test-apt-progress-fd) a bit more stable.
condense parallel requests with the same hashes to one
It shouldn't be too common, but sometimes people have multiple mirrors
in the sources or otherwise repositories with the same content. Now that
we can gracefully handle multiple requests to the same URI, we can also
fold multiple requests with the same expected hashes into one. Note that
this isn't trying to find opportunities for merging, but just merges if
it happens to encounter the opportunity for it.
This is most obvious in the new testcase actually as it needs to delay
the action to give the acquire system enough time to figure out that
they can be merged.
Again, consistency is the main selling point here, but this way it is
now also easier to explain that some files move through different stages
and lines are hence printed for them multiple times: that is a bit hard
to believe if the number is changing all the time, but now it stays
consistent.
All other methods call it, so they should follow along, even if the work
they do afterwards is hardly breathtaking and usually results in a
URIDone pretty soon. The acquire system tells the individual item about
this via a virtual method call, so even though none of our existing
items contains any critical code in these, maybe one day they might.
Consistency at least once…
This also has a good side effect: file: and cdrom: requests now appear
in the 'apt-get update' output. Finally – it never made sense to me to
hide them. Okay, I guess it did before the new hit behavior, but now
that you can actually see the difference in an update, it makes sense to
see if a file: repository changed or not as well.
deal better with acquiring the same URI multiple times
This is an unlikely event for indexes and co, but it can happen quite
easily e.g. for changelogs, where you want to get the changelogs for
multiple binary package(version)s which happen to all be built from a
single source.
The interesting part is that the Acquire system actually detected this
already and set the item requesting the URI again to StatDone – except
that this is hardly sufficient: an Item must be Complete=true as well to
be considered truly done, and that is only the tip of the ::Done
handling iceberg. So instead of this StatDone hack we allow QItems to be
owned by multiple items and notify all owners about everything now, so
that from the point of view of each item, it got the download just for
itself.
ensure valid or remove destination file in file method
'file' isn't using the destination file per se, but returns another name
via the "Filename" header. It still should deal with destination files,
as they could exist (pkgAcqFile e.g. creates links in that location) and
are potentially bogus.
provide a public interface for acquiring changelogs
Provided is a specialized acquire item which, given a version, can
figure out the correct URI to try by itself (and if not, provides an
error message), alongside static methods to get just the URI it would
try to download, if it should just be displayed or similar such.
The URI is constructed as follows:
Release files can provide a URI template in the "Changelogs" field;
otherwise we look up a configuration item based on the "Label" or
"Origin" of the Release file to get a (hopefully known) default value
for now. This template should contain the string CHANGEPATH, which is
replaced with the information about the version we want the changelog
for (e.g. main/a/apt/apt_1.1). This middle way was chosen as this path
part was consistent over the three known implementations (+1 defunct),
while the rest of the URI varies widely between them.
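A fallback configuration entry following this scheme could look like
the following (key name and URI are illustrative of the description,
not guaranteed to match the shipped defaults):

    Acquire::Changelogs::URI::Origin::Debian "http://metadata.ftp-master.debian.org/changelogs/CHANGEPATH_changelog";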
The benefit of this construct is that it is now easy to get changelogs
for Debian packages on Ubuntu and vice versa – even at the moment where
the Changelogs field is present nowhere. Strictly better than what
apt-get had before, as it would even fail to get changelogs from
security… Now it will notice that security identifies as Origin: Debian
and pick this setting (assuming again that no Changelogs field exists).
If on the other hand security were to ship its changelogs in a different
location, we could set it via the Label option, overruling Origin.
Translation-* files are internally handled as PackageFiles, which isn't
super nice, but giving them their own struct is a bit overkill, so let
it be for the moment. Because of this they have always appeared in the
policy output, and now that they are properly linked to a ReleaseFile
they even display all the pinning information on them, but they don't
contain any packages which could be pinned… No problem, but useless and
potentially confusing output.
Adding a 'NoPackages' flag which can be set on such files and used in
applications seems like a simple way to fix this display issue.
This is mainly visible in the policy, so that you can now pin by b= and
let it affect only the Packages files of this architecture, and hence
the packages coming from it (which do not need to be from this
architecture, but very likely are in a normal repository setup).
Whether you should pin by architecture in this way is a different
question…
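An apt_preferences stanza using this could look like the following
(values invented, assuming the b= key works as described above):

    Package: *
    Pin: release b=armel
    Pin-Priority: 700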
Selecting targets based on the Release they belong to isn't too
unrealistic. In fact, it is assumed to be the most common case, so it is
made the default, especially as this allows us to bundle another thing
we have to be careful with: filenames, and only showing targets we have
acquired.
We used to read the Release file for each Packages file and store the
data in the PackageFile struct, even though potentially many Packages
(and Translation-*) files could use the same data. The point of the
exercise isn't the duplicated data though: having the Release files as
first-class citizens in the cache allows us to properly track their
state, as well as to use the information also for files which aren't in
the cache, but where we know to which Release file they belong (Sources
are an example of this).
This modifies the pkgCache structs, especially the PackageFile struct,
which, depending on how libapt users access the data in these structs,
can mean huge breakage or no visible change. As a single data point:
aptitude seems to be fine with this. Even if there is breakage, it is
trivial to fix in a backportable way, while avoiding breakage for
everyone would be a huge pain for us.
Note that not all PackageFile structs have a corresponding ReleaseFile.
In particular the dpkg/status file as well as *.deb files have none. As
these only need an Archive property, the Component property takes over
this duty and the ReleaseFile remains zero. This is also the reason why
it isn't needed, nor particularly recommended, to change from
PackageFile to ReleaseFile blindly. Sticking with the former is usually
the better option.
Downloading additional files is only half the job. We still need a way
to allow external tools to know where the files they requested for
download are, given that we don't want them to choose their own
location. 'apt-get files' is our answer to this, showing by default
information about each IndexTarget in a deb822 format, with the
potential to filter the records based on lines and an option to change
the output format. The command also serves as an example of how to get
to this information via libapt.
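A hypothetical invocation with a filter and a (shortened, made-up)
output record, to illustrate the idea:

    $ apt-get files "Created-By: Packages"
    MetaKey: main/binary-amd64/Packages
    ShortDesc: Packages
    Description: http://deb.example.org/debian stable/main amd64 Packages
    Filename: /var/lib/apt/lists/deb.example.org_debian_dists_stable_main_binary-amd64_Packages
    Created-By: Packages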
Removes a bunch of duplicated code in the deb-specific parts. Especially
the Description part is now handled centrally by IndexTarget instead of
being duplicated to the derivations of IndexFile.
It is a rather strange sight that index items use SiteOnly, which strips
the Path, while e.g. deb files are downloaded with NoUserPassword, which
does not. Important to note here is that for the file transport the Path
is pretty important, as there is no Host which would be displayed by
Site, which always resulted in "interesting" unspecific errors for
"file:".
Adding a 'middle ground' between the two which does show the Path, but
potentially modifies it (it strips a trailing / if existing), solves
this "file:" issue, syncs the output, and in the end helps to identify
which file is meant exactly in progress output and co, as a single site
can have multiple repositories in different paths.
rename Calculate- to GetIndexTargets and use it as official API
We need a general way to get from a sources.list entry to IndexTargets
and with this change we can move from pkgSourceList over the list of
metaIndexes it includes to the IndexTargets each metaIndex can have.
stop using IndexTarget pointers which are never freed
Creating and passing around a bunch of pointers to IndexTargets (and a
vector of pointers to IndexTargets) was probably done to avoid the
'costly' copy of the container, but we are really not in a time-critical
operation here, and move semantics will help us even further in the
future. On the other hand we never do a proper cleanup of these
pointers, which is very dirty, even if the structures aren't that big…
The changes, while touching many items, only affect our own hidden
class, so we can do that without fearing breaking interfaces or
anything.
abstract the code to iterate over all targets a bit
We have two places in the code which need to iterate over targets and do
certain things with them. The first one actually creates these targets
for download, and the second instance prepares certain targets for
reading.
configurable acquire targets to download additional files
First pass at making the acquire system capable of downloading files
based on configuration rather than hardcoded entries. It is now possible
to instruct 'deb' and 'deb-src' sources.list lines to download more than
just Packages/Translation-* and Sources files. Details on how to do that
can be found in the included documentation file.
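The shape of such a target definition is roughly the following (the key
and substitution names here are a sketch based on that description, not
a verbatim copy of it):

    Acquire::IndexTargets::deb::Contents {
        MetaKey "$(COMPONENT)/Contents-$(ARCHITECTURE)";
        ShortDescription "Contents-$(ARCHITECTURE)";
        Description "$(RELEASE)/$(COMPONENT) $(ARCHITECTURE) Contents";
    };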
The code requires every index file we download to have a Package field,
but that doesn't hold true for all indexes we might want to download in
the future. Some might not even be deb822-formatted files…
The check was needed as apt used to accept unverifiable files like
Translation-*, but nowadays it requires hashes for these as well. Even
for unsigned repositories we interpret the Release file as binding now,
which means this check isn't triggerable except for repositories which
do not have a Release file at all – something which is highly
discouraged!
If we have a file on disk and the hashes are the same in the new Release
file and the old one we have on disk, we know that if we ask the server
for the file we will at best get an IMS hit – at worst the server
doesn't support this and sends us the (unchanged) file, and we have to
run all our checks on it again for nothing. So, we can save ourselves
(and the servers) some unneeded requests if we figure this out on our
own.
It's a bit unclean to create an item just to let the item decide that it
can't do anything and let it fail, so instead we let the item creator
decide in all cases whether patching should be attempted.
This also pulls a small trick to get the hashes for the current file
without calculating them, by looking at the 'old' Release file if we
have it.
At the moment we only have hashes for the uncompressed pdiff files, but
via the new '$HASH-Download' field in the .diff/Index file hashes can be
provided for the .gz compressed pdiff files, which apt will now pick up
and use to verify the download. Now, we "just" need buy-in from the
creators of repositories…
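In the .diff/Index file this could look like the following (hashes,
sizes and names invented):

    SHA256-Download:
     01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b 812 2015-06-01-1410.05.gz
     9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08 1434 2015-06-02-1410.12.gz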
The rred parser is very accepting regarding 'invalid' files. Given that
we can't trust the input, it might be a bit too relaxed. In any case,
checking for more errors can't hurt, given that we support only a very
specific subset of ed commands.
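For reference, a pdiff is an ed-style script, and the accepted subset
boils down to line-addressed a/c/d commands, emitted bottom-up as
diff --ed does (example invented):

    17a
    a line appended after line seventeen
    .
    12d
    5,8c
    replacement for lines five to eight
    .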
check patch hashes in rred worker instead of in the handler
rred is responsible for unpacking and reading the patch files in one go,
but we currently only have hashes for the uncompressed patch files, so
the handler read the entire patch file before dispatching it to the
worker, which would read it again – both with an implicit uncompress.
Worse, while the workers operate in parallel, the handler is the central
orchestration unit, so having it busy with work means the workers do
(potentially) nothing.
This means rred is working with 'untrusted' data, which is bad. Yet,
having the unpack in the handler meant that the untrusted uncompress was
done as root, which isn't better either. Now we have it at least
contained in a binary which we can harden a bit better. In the long run
we want hashes for the compressed patch files though, to be safe.
Having every item carry its own code to verify the file(s) it handles is
an error-prone process and easy to break, especially if items move
through various stages (download, uncompress, patching, …). With a giant
rework we centralize (most of) the verification to have a better
enforcement rate and (hopefully) less chance for bugs, but it breaks the
ABI big time in exchange – and as we break it anyway, it is broken even
harder.
It shouldn't affect most frontends, as they don't deal with the acquire
system at all or implement their own items, but some do and will need to
be patched (might be an opportunity to use apt on-board material).
The theory is simple: items implement methods to decide if hashes need
to be checked (in this stage) and to return the expected hashes for this
item (in this stage). The verification itself is done in worker message
passing, which has the benefit that a hashsum error is now a proper
error for the acquire system, rather than a Done() which is later
revised to a Failed().
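In interface terms the theory reads roughly like this (a sketch
following the description; names are approximations, not apt's exact
declarations):

    // stand-in for apt's list of hashes of different types
    class HashStringList {};

    class Item
    {
    public:
       // must the computed hashes match in the current stage?
       virtual bool HashesRequired() const { return true; }
       // the hashes this item expects for the current stage
       virtual HashStringList GetExpectedHashes() const = 0;
       virtual ~Item() {}
    };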
If we e.g. fail on hash verification for Packages.xz, it's highly
unlikely that it will be any better with Packages.gz, so we just waste
download bandwidth and time. It also causes us to always fall back to
the uncompressed Packages file, for which the error will finally be
reported, which in turn confuses users, as the file usually doesn't
exist on the mirrors, so a bug in apt is suspected for even trying it…
Helmut Grohne [Mon, 9 Mar 2015 17:11:10 +0000 (18:11 +0100)]
parse arch-qualified Provides correctly
The underlying problem is that libapt-pkg does not correctly parse these
provides. Internally, it creates a version named "baz:i386" with
architecture amd64. Of course, such a package name is invalid and thus
this version is completely inaccessible. Thus, this bug should not cause
apt to accept a broken situation as valid. Nevertheless, it prevents
using architecture-qualified depends.
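As an illustration, a (made-up) stanza like this used to produce that
broken internal version:

    Package: foo
    Version: 1.0
    Architecture: amd64
    Provides: baz:i386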
Michael Vogt [Fri, 22 May 2015 13:28:53 +0000 (15:28 +0200)]
Fix endless loop in apt-get update that can cause disk fillup
The apt http code parses Content-Length and Content-Range. For both
requests the variable "Size" is used, and the semantic for this Size is
the total file size. However, Content-Length is not the entire file size
for partial file requests. For servers that send the Content-Range
header first and then the Content-Length header, this can lead to
clobbering of Size so that it is less than the real file size. This may
lead to a subsequent passing of a negative number into the CircleBuf,
which leads to an endless loop that writes data.
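For example, in a range response like the following (numbers invented)
the total size appears only in Content-Range, while Content-Length
covers just the transmitted part, so Size has to end up as 4096 here
regardless of the header order:

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 2048-4095/4096
    Content-Length: 2048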
Thanks to Anton Blanchard for the analysis and initial patch.
treat older Release files than we already have as an IMSHit
Valid-Until protects us from long-lived downgrade attacks, but not all
repositories have it, and an attacker could still use older but still
valid files to downgrade us. While this makes it sound like a security
improvement now, it's a bit theoretical at best, as an attacker with the
capabilities to pull this off could just as well always keep us days
behind (but within the valid period) and always knows which state we
have, as we tell him with the If-Modified-Since header. This is also why
this is 'silently' ignored and treated as an IMSHit, rather than
screamed at the user, as this can at best be an annoyance for attackers.
An error here would 'regularly' be encountered by users via out-of-sync
mirrors within a single run (e.g. behind a load balancer) or in two
consecutive runs on the other hand, so it would just end up teaching
people to ignore it.
That said, most of the code churn is caused by enforcing this additional
requirement. Crisscross from InRelease to Release.gpg is e.g. very
unlikely in practice, but if we were to ignore it, an attacker could
sidestep the check this way.
detect Releasefile IMS hits even if the server doesn't
Not all servers we are talking to support If-Modified-Since, and some
are not even sending Last-Modified for us, so in an effort to detect
such hits we run a hashsum check on the 'old' compared to the 'new'
file. We got the hashes for the 'new' file for "free" from the methods
anyway, and hence just need to calculate the old ones.
This allows us to detect hits even with unsupported servers, which in
turn means we benefit from all the new hit behavior here as well.
It isn't used much, compared to what the method name suggests, but in
the remaining uses it can't hurt to check more than strictly necessary,
by calculating and verifying with all hashes we can compare with, rather
than "just" the best known hash.
detect 416 complete file in partial by expected hash
If we have the expected hashes, we can check with them whether the file
we have in partial, and got a 416 for, is the expected file. We detected
this with same-size before, but not every server sends a good
Content-Range header with a 416 response.
rewrite all TFRewrite instances to use the new pkgTagSection::Write
While it is mostly busywork to rewrite all instances, it actually fixes
bugs, as the data storage used by the new method is std::string rather
than a char*, the latter mostly created by c_str() from a std::string,
which the caller has to ensure stays in scope – something apt-ftparchive
actually didn't ensure, relying instead on copy-on-write behavior, which
c++11 forbids and hence the new default gcc ABI doesn't use.
TFRewrite is okay, but it has obscure limitations (256 tags), even more
obscure bugs (the order for renames is defined by the old name), and the
interface is very C-style, encouraging bad usage like we do in
apt-ftparchive, passing massive amounts of c_str() from std::string in.
The old style is marked as deprecated accordingly. The next commit will
fix all places in the apt code to not use the old style anymore.
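The new-style call then looks roughly like this (a sketch: signatures
and names are approximated from the description above, not copied from
apt's headers):

    #include <apt-pkg/tagfile.h>   // pkgTagSection and friends (assumed)
    #include <apt-pkg/fileutl.h>   // FileFd (assumed)
    #include <vector>

    // Rewrite a parsed section with std::string-based instructions
    // instead of char* pairs; Tags is a section parsed earlier and
    // Output an already-open FileFd.
    void RewriteSection(pkgTagSection const &Tags, FileFd &Output)
    {
       std::vector<pkgTagSection::Tag> Changes;
       Changes.push_back(pkgTagSection::Tag::Rewrite("Maintainer",
             "APT Team <deity@lists.debian.org>"));
       Changes.push_back(pkgTagSection::Tag::Remove("Obsolete-Field"));
       // write the section, keeping the usual Packages field order
       Tags.Write(Output, TFRewritePackageOrder, Changes);
    }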