test: use a multiarch capable dpkg rather than workaround
The tests nowadays require a (somewhat) multiarch-capable dpkg, so
replace the workaround marked in the FIXME with a proper install, as
the workaround doesn't always work correctly and lets the test fail.
do not ++ on erased package pointers in autoremove
Symptom: In an Ubuntu precise chroot (like on travis-ci)
test-bug-613420-new-garbage-dependency segfaults in a std::set
operator++ on an iterator we previously erased
(but not if run under gdb, of course).
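A minimal sketch of the safe idiom (illustrative, not the actual
autoremover code): erasing the element an iterator points to
invalidates that iterator, so it must be advanced before the erase
takes effect.

    #include <set>

    // Post-increment hands erase() the old position while the loop
    // iterator has already moved on; ++it on an erased iterator is
    // undefined behaviour and only sometimes segfaults.
    void prune(std::set<int> &s) {
       for (std::set<int>::iterator it = s.begin(); it != s.end();) {
          if (*it % 2 == 0)
             s.erase(it++);  // erase old position, iterator already advanced
          else
             ++it;           // only advance iterators we did not erase
       }
    }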
Clear() only clears a config option rather than removing it, so an
empty setting still exists. Hence we set the option to the xz path
instead, so that the later existence check can find a binary for the
test.
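A minimal sketch of the difference, assuming apt's Configuration API
(the option name "Dir::Bin::xz" is an assumption for illustration):

    #include <apt-pkg/configuration.h>

    void setup() {
       // Clear() leaves an empty value behind which still passes
       // Exists()-style checks, so the test sets a real path instead.
       _config->Clear("Dir::Bin::xz");               // option still exists, value ""
       _config->Set("Dir::Bin::xz", "/usr/bin/xz");  // existence check finds a binary
    }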
With a bit of trickery we can reuse the usual infrastructure we have in
place to acquire deb files for the 'download' operation as well, which
gains us authentication check & display, error messages, correct
filenames and "downloads" from the root-owned archives.
refactor onError relabeling of DestFile as '.FAILED'
This helps ensure three things:
- each error is reported via ReportMirrorFailure
- if DestFile doesn't exist, do not attempt rename
- renames happen for every error
The last one wasn't the case for Size mismatches, which isn't nice, but
not an exploitable problem per se as the file isn't picked up and
remains in partial/ where the following download attempt will at most
take it for a partial request, which fails the hashsum verification
later on.
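A hedged sketch of the intended shape (names and signatures are
illustrative, not the actual acquire-method code):

    #include <cstdio>
    #include <string>
    #include <sys/stat.h>

    void ReportMirrorFailure(std::string const &Msg); // assumed signature

    static void OnError(std::string const &DestFile) {
       ReportMirrorFailure(DestFile);  // every error is reported
       struct stat Buf;
       if (stat(DestFile.c_str(), &Buf) != 0)
          return;                      // nothing on disk, nothing to rename
       // every error with a file on disk relabels it as failed
       std::rename(DestFile.c_str(), (DestFile + ".FAILED").c_str());
    }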
We can't remove packages which are held back by the user with a hold,
so marking them (or their dependencies) as garbage will lead our
autoremover into madness – and given that the package is important
enough that the user has held it back, it can't be garbage (at least at
the moment). So even if a front-end wants to use the info just for
information display, it's a good idea to not consider it garbage for
them.
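A minimal sketch of the guard, assuming apt's cache API (the helper
name is hypothetical):

    #include <apt-pkg/cacheiterators.h>
    #include <apt-pkg/pkgcache.h>

    // A package the user has put on hold is important by definition,
    // so never consider it (or, transitively, its dependencies) garbage.
    static bool CanBeGarbage(pkgCache::PkgIterator const &Pkg) {
       if (Pkg->SelectedState == pkgCache::State::Hold)
          return false;
       return true;
    }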
Servers might respond with a complete file either because they don't
support Ranges at all or because the If-Range condition isn't satisfied,
so we have to parse the headers curl gets ourselves to seek or truncate
the file we have so far.
This also finally adds the testcase covering a bunch of partial
situations for both http and https – which is now all green.
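A sketch of the decision described above (plain POSIX calls, not the
actual https method code):

    #include <sys/types.h>
    #include <unistd.h>

    // 206: the server honoured our Range, append after the data we have.
    // 200: the server sent the complete file, our partial data is void.
    static bool PrepareFile(int fd, int httpcode, off_t partialsize) {
       if (httpcode == 206)
          return lseek(fd, partialsize, SEEK_SET) == partialsize;
       if (httpcode == 200)
          return ftruncate(fd, 0) == 0 && lseek(fd, 0, SEEK_SET) == 0;
       return false; // other codes are handled elsewhere
    }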
As discussed at length in lp:1157943, partial https support was utterly
broken as a 206 response was handled as an (unhandled) error. This is
the first part of fixing it by supporting a 206 response and starting to
deal with 416.
No effective behavior change, just shuffling big chunks of code between
methods and classes to split them into those strongly related to our
client implementation and those implementing HTTP.
The idea is to get HTTPS to a point at which most of the implementation
can be shared, even though the client implementations themselves are
completely different. This isn't anywhere near done yet, but it should
be enough to reuse at least a few lines from http in https now.
replace "filesize - 1" trick in http with proper 416 handling
Our http client requests "filesize - 1" for the small edge case of
handling a file which was completely downloaded but not yet moved to
the correct place, as we would get 416 errors in that case. But as we
can handle 416 returns now, we just special-case the situation of
requesting the exact filesize and handle it as a 200 without content
instead.
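A minimal sketch of the special case (hypothetical helper, not the
client's real control flow):

    // If our partial file already has the full size, requesting a Range
    // starting at the filesize would only provoke a 416, so treat the
    // download as a 200 with no content instead.
    static bool AlreadyComplete(unsigned long long partialsize,
                                unsigned long long totalsize) {
       return partialsize != 0 && partialsize == totalsize;
    }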
If we get a 416 from the server it means the Range we asked for is
above the real filesize of the file on the server. Mostly this happens
if the server doesn't support If-Range, but regardless of how we ended
up with the partial data, the data is invalid, so we discard it and
retry with a clean slate and hope for the best.
The old behavior was to consider 416 an error and retry with a
different compression until we ran out of compressions and requested
the uncompressed file (which doesn't exist on most mirrors) with an
Accept line which the server answered with "406 Not Acceptable".
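A sketch of the new behavior (illustrative helper, not the method's
real control flow):

    #include <string>
    #include <unistd.h>

    // A 416 proves our partial data doesn't match the server's file:
    // drop it and signal the caller to retry without a Range header.
    static bool On416(std::string const &partialfile) {
       unlink(partialfile.c_str());
       return true; // true = retry from scratch
    }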
Okay, maybe it does have a "few", but the DDTP issues mentioned in this
file are long since gone, so let's just drop the file and look at the
PTS instead: http://bugs.debian.org/cgi-bin/pkgreport.cgi?src=apt
While the InstallPackages code was moved from apt-get into the private
library, the output was moved from (std::)cout to c1out, which isn't
shown in quiet level 2 (and above). So we flip back to std::cout to
ensure that it is always printed, as you are not going to use
--print-uris if you don't want to see the URIs…
--allow-unauthenticated switches the download to a pre-0.6 system in
which a package can come from any source, instead of trusted packages
having to come from trusted sources only.
To allow this, the flag used to mark all packages as untrusted, which
is a bit much, so we now check whether the package can be acquired from
an untrusted source and only mark it as untrusted if that is the case.
As APT nowadays supports setting sources as trusted via a flag in
sources.list, this mode shouldn't be used that much anymore though.
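A hedged sketch of the new check, modelled on apt's cache and
source-list API (details may differ from the real implementation):

    #include <apt-pkg/cacheiterators.h>
    #include <apt-pkg/indexfile.h>
    #include <apt-pkg/pkgcache.h>
    #include <apt-pkg/sourcelist.h>

    // Only treat a version as untrusted if at least one index it can
    // actually be acquired from is itself untrusted.
    static bool CanBeAcquiredUntrusted(pkgCache::VerIterator const &Ver,
                                       pkgSourceList const &Sources) {
       for (pkgCache::VerFileIterator VF = Ver.FileList(); VF.end() == false; ++VF) {
          pkgIndexFile *Index;
          if (Sources.FindIndex(VF.File(), Index) == false)
             continue;
          if (Index->IsTrusted() == false)
             return true;
       }
       return false;
    }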
The parser goes a bit too far by stripping :any from dependencies in a
single-architecture environment. The "Multi-Arch: allowed" flag makes
the architecture restriction moot in that case (as in single-arch
everything is native), but :any still limits the possible versions
satisfying the dependency, so stripping it over-simplifies upgrade
situations from "Multi-Arch: none" to "Multi-Arch: allowed".
The idea was not that bad, but it doesn't make that much sense either,
as this bit is set by the FileFd based on Actual as well, so this is
basically doing the same check again – with the difference that the
HitEof bit can still linger from a previous Read we did at the end of
the file, even though we have since seek'd away from it.
Combined with the length of entries, entry order and other not easily
controllable conditions, you can be 'lucky' enough to hit this problem
in a way which is even visible (truncation of other fields might not be
easily visible, like 'Tags' and others).
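A generic sketch of the pitfall (not FileFd's actual members): a cached
eof flag must not survive a Seek.

    #include <unistd.h>

    struct Reader {
       int fd;
       bool HitEof; // set by Read() when it reaches the end of the file
       bool Seek(long long to);
    };

    bool Reader::Seek(long long to) {
       // The old flag describes a position we are about to leave; if it
       // lingers, later reads look like EOF and callers truncate data.
       HitEof = false;
       return lseek(fd, to, SEEK_SET) == to;
    }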
don't truncate 100 char long paths in tar extraction
When a data.tar.{gz,xz} contains a path name that is exactly
100 characters long, it will get truncated to 99 chars upon
extraction in ExtractTar::Go().
Using all of the 100 available characters for the filename
seems to be new behaviour in GNU tar.
Closes: #689582
Thanks: Mika Eloranta for the testcase!
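A minimal sketch of the off-by-one: the tar header's name field is 100
bytes and is not NUL-terminated when the name uses all of them, so the
copy must take the full field and terminate it itself.

    #include <cstring>

    struct TarHeader { char Name[100]; /* ... */ };

    // Copy all 100 bytes instead of relying on a terminator which may
    // not be there, then terminate one byte past the field ourselves.
    static void ExtractName(TarHeader const &Tar, char (&Out)[101]) {
       memcpy(Out, Tar.Name, sizeof(Tar.Name));
       Out[sizeof(Tar.Name)] = '\0';
    }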
use FileFd in HashSum test to unbreak non-linux ports
The testcode happily mixes FILE* operations and direct access to fds,
so it is even a bit surprising that it works on linux and worked so
long for non-linux ports. We switch to using FileFd instead, which
provides us with simple fd-only operations. It's overkill for this test
as it's a bare file and we ask for the descriptor all the time, but it
shouldn't hurt to implicitly test it a bit this way.
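A sketch of the fd-only style, assuming apt's FileFd API (path and data
are illustrative):

    #include <apt-pkg/fileutl.h>

    static bool WriteSample(char const * const path) {
       FileFd fd;
       if (fd.Open(path, FileFd::WriteOnly | FileFd::Create | FileFd::Empty) == false)
          return false;
       char const data[] = "hello\n";
       if (fd.Write(data, sizeof(data) - 1) == false)
          return false;
       int const rawfd = fd.Fd(); // hashsum code can use the raw descriptor
       return rawfd >= 0 && fd.Close();
    }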
Compressing files in 4 different styles eats test-time for no practical
gain if we don't test them explicitly, so default to just building 'gz'
compressed files, as it is the simplest compression algorithm supported.
old-style dpkg foreign architecture adding for tests
Looks like the travis service runs on an Ubuntu version whose dpkg has
an earlier interface implementation, so let's try to make the framework
work with this dpkg version as well.
FileFd currently supports no fileflags which would make sense to
provide via mkostemp, so we can just use mkstemp here, which is a
standard function compared to the glibc extension mkostemp.
O_CREAT (Create) and O_TRUNC (Empty) are implied by O_EXCL, which is
the mode mkstemp uses by default. The file descriptor is opened
ReadWrite, but that used to be the default for FileFd in the old times
and isn't a problem, as the difference is only needed by FileFd to
decide in which way the compressor pipeline needs to be created (if
any).
replace usage of potential dangerous mktemp with mkstemp
Avoid the warning "the use of `mktemp' is dangerous, better use
`mkstemp' or `mkdtemp'". It is not strictly necessary to change the
usage from a security point of view here, but mktemp was also removed
from the standard in POSIX.1-2008.
The mkostemp call returns a file descriptor, so the logic for
TemporaryFileName has been changed accordingly to get the same results.
The file permissions are corrected by using fchmod(), as the default
for FileFd is 666 while mkstemp creates files with 600 by default.
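A minimal sketch of the pattern (the template path is illustrative):

    #include <stdlib.h>
    #include <string>
    #include <sys/stat.h>

    static int OpenTemporary(std::string &TemporaryFileName) {
       char tmpl[] = "/tmp/apt-XXXXXX";
       int const fd = mkstemp(tmpl);  // creates with 600 by default
       if (fd < 0)
          return -1;
       fchmod(fd, 0666);              // widen to FileFd's usual 666
       TemporaryFileName = tmpl;      // mkstemp filled in the XXXXXX
       return fd;
    }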
apt-pkg:contrib Avoid compiler warning about sign-compare
This fix avoids the warning "comparison between signed and unsigned
integer expressions [-Wsign-compare]". The index for the loop needs to
be unsigned to compare with the globbuf.gl_pathc structure member.
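A minimal sketch of the warning-free loop:

    #include <glob.h>
    #include <stddef.h>

    static void VisitMatches(glob_t const &globbuf) {
       // gl_pathc is a size_t, so the index must be unsigned too.
       for (size_t i = 0; i < globbuf.gl_pathc; ++i) {
          char const * const path = globbuf.gl_pathv[i];
          (void)path; // process the match
       }
    }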
add a breaks libapt-inst for FileFd changes in 0.9.9
Partial upgrades…
The fix for 704608 assumes that bf35c19b817cc1474b3deabce0b0953c248bad42
was applied to libapt-inst, which isn't the case for partial upgrades
of course, so declare a Breaks to ensure that it is.
allow Pre-Install-Pkgs hooks to get info over an FD != stdin
This adds an ::InfoFD option alongside the ::Version one to request
sending the information to the specified FD; by default it is STDIN, as
was the case before.
The environment variable APT_HOOK_INFO_FD contains the FD the data is
on, as confirmation that the APT version used understood the request.
Allowing the hook to choose the FD is needed/helpful e.g. for shell
scripts, which have a hard time accessing FDs above 9 (as >= 10 are
usually used internally by them).
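A sketch of the hook side, written in C++ for illustration (real hooks
are often shell scripts): read APT_HOOK_INFO_FD and fall back to stdin
if it is absent, i.e. the running apt did not understand the request.

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>

    int main() {
       int infofd = STDIN_FILENO;
       if (char const * const env = getenv("APT_HOOK_INFO_FD"))
          infofd = atoi(env); // apt confirmed the ::InfoFD request
       char buf[4096];
       ssize_t len;
       while ((len = read(infofd, buf, sizeof(buf))) > 0)
          fwrite(buf, 1, len, stdout); // just echo what apt sent
       return 0;
    }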
We don't need initialized memory for pkgTagFile, but more to the point,
this way we can use realloc, which hides the bloody details of
increasing the size of the buffer used.
ensure that pkgTagFile isn't writing past Buffer length
In 91c4cc14d3654636edf997d23852f05ad3de4853 I removed the +256 from the
pkgTagFile call parsing Release files as I couldn't find any mention of
a reason for it, and it was marked as XXX, which suggested that at
least someone else was suspicious.
It turns out that it is indeed "documented", I just didn't find it at
first: the changelog of apt 0.6.6 (29. Dec 2003) mentions:
* Restore the ugly hack I removed from indexRecords::Load which set the
pkgTagFile buffer size to (file size)+256. This is concealing a bug,
but I can't fix it right now. This should fix the segfaults that
folks are seeing with 0.6.[45].
The bug it is "hiding" is that if pkgTagFile works with a file which
doesn't end in a double newline, it will add one without checking if
the Buffer is big enough to store it. It's also not a good idea to let
the End pointer point past the end of our space, even if we don't
access the data.
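A generic sketch of the invariant the fix restores (not pkgTagFile's
real code):

    #include <cstdlib>
    #include <cstring>

    // Before appending the terminating "\n\n" to a file which lacks it,
    // make sure the buffer really has two spare bytes.
    static char *EnsureFinalNewlines(char *Buffer, size_t &Size, size_t &End) {
       if (End + 2 > Size) {
          char *const NewBuffer = static_cast<char *>(realloc(Buffer, End + 2));
          if (NewBuffer == NULL)
             return Buffer; // caller must check: buffer unchanged
          Buffer = NewBuffer;
          Size = End + 2;
       }
       memcpy(Buffer + End, "\n\n", 2); // now guaranteed to fit
       End += 2;
       return Buffer;
    }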
do not call make from libapt/run-tests if its called by make
If we are called by make, everything is built already, so this is just
a noisy no-op we can simply skip.
(Noisy as it complains about being unable to communicate with the other
makes to coordinate with the jobserver.)
The debian-archive-keyring package has shipped trusted.gpg.d fragment
files for a while now and dropped its call to 'apt-key update', so
there is no need for us to call it, as the keys will always be
available.
This also finally allows a user to remove key(ring)s without APT
overriding this decision by re-adding them in this step.
The functionality is kept around for the odd case that an old
debian-archive-keyring package is used which still calls 'apt-key
update' and depends on the import (hence, we also do not enforce a
newer version of debian-archive-keyring via our dependencies).
let apt-key del work better with softlink and single key keyrings
Having fragment files means there is a good chance that there is one
key per keyring, so deal with that as well as with setups in which
keyrings are linked into trusted.gpg.d, as we can't just modify those
files (they might be in /usr, for example).