don't truncate 100 char long paths in tar extraction
When a data.tar.{gz,xz} contains a path name that is exactly
100 characters long, it will get truncated to 99 chars upon
extraction in ExtractTar::Go().
Using all of the 100 available characters for the filename
seems to be new behaviour in GNU tar.
Closes: #689582
Thanks: Mika Eloranta for the testcase!
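
A minimal sketch of the pitfall (not apt's actual ExtractTar code): the tar
header's name field is 100 bytes and is only NUL-terminated when the name is
shorter than that, so the copy has to be bounded by the field size instead of
relying on a terminator.

    #include <cstring>
    #include <iostream>
    #include <string>

    // Illustrative slice of a tar header: the Name field is exactly 100
    // bytes and carries no trailing NUL if all 100 bytes are used.
    struct TarHeaderName { char Name[100]; };

    static std::string ExtractName(TarHeaderName const &H)
    {
       // strnlen stops at the field boundary, so a 100-character name
       // keeps its last character instead of being cut off at 99.
       return std::string(H.Name, strnlen(H.Name, sizeof(H.Name)));
    }

    int main()
    {
       TarHeaderName H;
       memset(H.Name, 'a', sizeof(H.Name));                 // 100 chars, no NUL
       std::cout << ExtractName(H).length() << std::endl;   // prints 100
       return 0;
    }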
use FileFd in HashSum test to unbreak non-linux ports
The test code happily mixes FILE* operations and direct access to fds,
so it is even a bit surprising that it works on Linux and worked so
long for non-Linux ports. We therefore switch to usage of FileFd instead,
which provides us with simple fd-only operations. It's overkill for this
test as it's a bare file and we ask for the descriptor all the time, but
it shouldn't hurt to implicitly test it a bit this way.
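
A minimal sketch of the fd-only style the test switches to, assuming apt's
FileFd interface (ReadWrite, Create and Empty are the flags referenced
elsewhere in this log); the point is that no FILE* stream is ever mixed in,
and the raw descriptor is still available via Fd() when a test needs it.

    #include <apt-pkg/fileutl.h>
    #include <iostream>

    int main()
    {
       FileFd fd;
       if (fd.Open("testfile", FileFd::ReadWrite | FileFd::Create | FileFd::Empty) == false)
          return 1;
       fd.Write("hello\n", 6);   // wrapper-level write on the descriptor
       fd.Seek(0);               // and seek, still without any FILE*
       char buf[7] = {0};
       fd.Read(buf, 6);
       std::cout << buf << "raw fd: " << fd.Fd() << std::endl;
       return fd.Close() ? 0 : 1;
    }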
Compressing files in 4 different styles eats test time for no practical
gain if we don't test them explicitly, so default to just building 'gz'
compressed files, as it is the simplest compression algorithm supported.
old-style dpkg foreign architecture adding for tests
Looks like the Travis service runs on Ubuntu in a version which has dpkg
with an earlier interface implementation, so let's try to make
the framework work with this dpkg version as well.
FileFd currently supports no file flags which would make sense to provide
via mkostemp, so we can just use mkstemp here, which is a standard
function compared to the glibc extension mkostemp.
O_CREAT (Create) and O_TRUNC (Empty) are implied by O_EXCL, which is the
mode mkstemp uses by default. The file descriptor is opened ReadWrite,
but that used to be the default for FileFd in the old times and is not a
problem, as the difference is needed by FileFd to decide in which way the
compressor pipeline needs to be created (if any).
replace usage of potentially dangerous mktemp with mkstemp
Avoid the warning "the use of `mktemp' is dangerous,
better use `mkstemp' or `mkdtemp'". It is not strictly necessary to
change the usage from a security point of view here, but mktemp has
also been removed from the standard as of POSIX.1-2008.
The mkstemp call returns a file descriptor, so the logic for
TemporaryFileName has been changed accordingly to get the same results.
The file permissions are corrected by using fchmod(), as the default for
FileFd is 666 while mkstemp creates files with 600 by default.
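
A minimal sketch of the replacement pattern with plain POSIX calls (the
template path and the exact permission handling are illustrative, not apt's
code): mkstemp() hands back a file descriptor and fills in the name, and
fchmod() widens the permissions from the 0600 mkstemp uses to the 0666
default FileFd would have applied.

    #include <cstdio>
    #include <cstdlib>
    #include <sys/stat.h>
    #include <unistd.h>

    int main()
    {
       char name[] = "/tmp/apt-example.XXXXXX";   // illustrative template
       int const fd = mkstemp(name);              // creates the file with mode 0600
       if (fd == -1)
          return 1;
       fchmod(fd, 0666);                          // widen to the 0666 default
       write(fd, "temporary data\n", 15);
       printf("TemporaryFileName: %s\n", name);   // name now holds the real path
       close(fd);
       unlink(name);
       return 0;
    }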
apt-pkg:contrib Avoid compiler warning about sign-compare
This fix avoids the warning "comparison between signed and unsigned
integer expressions [-Wsign-compare]". The index for the loop
needs to be unsigned to compare with the globbuf.gl_pathc structure
member.
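
A minimal sketch of the warning and the fix (the glob pattern is
illustrative): glob() reports its match count in the size_t member
globbuf.gl_pathc, so an int loop index triggers -Wsign-compare, while a
size_t index compiles cleanly.

    #include <cstdio>
    #include <glob.h>

    int main()
    {
       glob_t globbuf;
       if (glob("/etc/apt/apt.conf.d/*", 0, nullptr, &globbuf) != 0)
          return 1;
       // size_t index: comparing against gl_pathc no longer mixes signedness
       for (size_t i = 0; i != globbuf.gl_pathc; ++i)
          printf("%s\n", globbuf.gl_pathv[i]);
       globfree(&globbuf);
       return 0;
    }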
add a breaks libapt-inst for FileFd changes in 0.9.9
Partial upgrades…
The fix for 704608 assumes that bf35c19b817cc1474b3deabce0b0953c248bad42
was applied to libapt-inst, which of course isn't the case for partial
upgrades, so add a Breaks to ensure that it is.
allow Pre-Install-Pkgs hooks to get info over an FD != stdin
This adds an ::InfoFD option alongside the ::Version one to request sending
the information to the specified FD; by default it is STDIN, as was
the case before.
The environment variable APT_HOOK_INFO_FD contains the FD the data is on, as
confirmation that the APT version used understood the request.
Allowing the hook to choose the FD is needed/helpful e.g. for shell scripts
which have a hard time accessing FDs above 9 (as >= 10 are usually used
internally by them).
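
A minimal sketch of the receiving side of such a hook (written in C++ here
for illustration, even though many hooks are shell scripts): it reads
APT_HOOK_INFO_FD to learn which descriptor carries the information and falls
back to stdin if the variable is absent.

    #include <cstdio>
    #include <cstdlib>
    #include <unistd.h>

    int main()
    {
       char const * const env = getenv("APT_HOOK_INFO_FD");
       int const infofd = (env != nullptr) ? atoi(env) : 0;   // 0 == stdin
       char buffer[4096];
       ssize_t got;
       while ((got = read(infofd, buffer, sizeof(buffer))) > 0)
          fwrite(buffer, 1, static_cast<size_t>(got), stdout);
       return 0;
    }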
We don't need initialized memory for pkgTagFile, but more to the point
we can use realloc this way which hides the bloody details of increasing
the size of the buffer used.
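
A minimal sketch of the pattern (GrowBuffer is a made-up name, not apt's):
realloc() takes care of allocating, copying and freeing in one call, and
because the caller doesn't rely on the new bytes being initialised, no
memset of the grown tail is needed.

    #include <cstdlib>

    static bool GrowBuffer(char *&Buffer, unsigned long long &Size)
    {
       unsigned long long const NewSize = (Size == 0) ? 32 * 1024 : Size * 2;
       char * const NewBuffer = static_cast<char*>(realloc(Buffer, NewSize));
       if (NewBuffer == nullptr)
          return false;            // the old buffer stays valid and owned
       Buffer = NewBuffer;         // realloc copied the old content for us
       Size = NewSize;
       return true;
    }

    int main()
    {
       char *Buffer = nullptr;
       unsigned long long Size = 0;
       bool const ok = GrowBuffer(Buffer, Size) && GrowBuffer(Buffer, Size);
       free(Buffer);
       return ok ? 0 : 1;
    }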
ensure that pkgTagFile isn't writing past Buffer length
In 91c4cc14d3654636edf997d23852f05ad3de4853 I removed the +256 from
the pkgTagFile call parsing Release files as I couldn't find a
mentioning of a reason for why and it was marked as XXX which suggested
that at least someone else was suspicious.
It turns out that it is indeed "documented", I just didn't find it at
first, but the changelog of apt 0.6.6 (29. Dec 2003) mentions:
* Restore the ugly hack I removed from indexRecords::Load which set the
pkgTagFile buffer size to (file size)+256. This is concealing a bug,
but I can't fix it right now. This should fix the segfaults that
folks are seeing with 0.6.[45].
The bug it is "hiding" is that if pkgTagFile works with a file which doesn't
end in a double newline it will be adding it without checking if the Buffer
is big enough to store it. It's also not a good idea to let the End
pointer be past the end of our space, even if we don't access the data.
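
A minimal sketch of the missing check (member names are illustrative, not
pkgTagFile's): before synthesising the final double newline for a file that
doesn't end in one, verify the two bytes still fit into the allocation, and
never let End point past it.

    #include <cstddef>

    struct Parser
    {
       char *Buffer;       // start of the allocation
       size_t Capacity;    // bytes allocated
       char *End;          // one past the last byte filled so far
    };

    // Returns false if the buffer has to be grown first.
    static bool AppendFinalNewlines(Parser &P)
    {
       if (P.End + 2 > P.Buffer + P.Capacity)
          return false;
       P.End[0] = '\n';
       P.End[1] = '\n';
       P.End += 2;
       return true;
    }

    int main()
    {
       char buf[8] = "a: b\n";
       Parser P{buf, sizeof(buf), buf + 5};
       return AppendFinalNewlines(P) ? 0 : 1;
    }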
do not call make from libapt/run-tests if it's called by make
If we are called by make, everything is built already, and
so this is just a noisy no-op we can just skip.
(Noisy as it complains about being unable to communicate with
the other makes to coordinate with the jobserver.)
The debian-archive-keyring package ships trusted.gpg.d fragment files
for a while now and dropped their call to 'apt-key update', so there is
no need for us to call it as the keys will always be available.
This also finally allows a user to remove key(ring)s without APT
overriding this decision by re-adding them with this step.
The functionality is kept around in the odd case that an old
debian-archive-keyring package is used which still calls 'apt-key
update' and depends on the import (hence, we also do not enforce a newer
version of the debian-archive-keyring via our dependencies)
let apt-key del work better with softlink and single key keyrings
Having fragment files means there is a good chance that there is one
key per keyring, so deal with that as well as with setups in which
keyrings are linked into trusted.gpg.d, as we can't just modify those
files (they might be in /usr for example).
For some "interesting" reason gpg decides that it needs to update its
trustdb.gpg file in a --list-keys command, even if gpg was asked to
--check-trustdb right before. That wouldn't be as bad if it didn't also
modify the keyring being listed at that moment, which generates not
only warnings (which are not a problem for us), but, as the modified
keyring can be in /usr, it modifies files which aren't allowed to be
modified.
The suggested solution in the bugreport is running --check-trustdb
unconditionally in an 'apt-key update' call, but this command will not
be used in the future and this could still potentially bite us in
net-update or adv calls. All of this just to keep a file around, which
we do not need…
The commit therefore switches to the use of a temporarily created
trusted.gpg file for everyone and asks gpg to not try to update the
trustdb after its initial creation, which seems to avoid the problem
altogether.
It is using our also-faked secring btw, as calling check-trustdb
without a keyring is a lot slower …
Closes: #687611
Thanks: Andreas Beckmann for the initial patch!
APT doesn't care for the trustdb.gpg, but gnupg requires one even for
the simplest commands, so we either use the one root has available in
/etc or if we don't have access to it (as only root can read that file)
we create a temporary directory to store a trustdb.gpg in it.
We can't create just a temporary file as gpg requires the given
trustdb.gpg file to be valid (if it exists), so we would have to remove
the file before calling gnupg which would allow mktemp (and co) to hand
exactly this filename out to another program (unlikely, but still).
Usually, most apt-key commands require root, so the script is checking
for being run as root, but in our tests we use a non-root location, so
we don't need to be root and therefore need an option to skip the check.
While we don't want these error messages on our usual stack, we can use
our usual infrastructure to generate an error message with all the usual
bells like errno and strerror attached.
If this code is run as non-root we are in a special situation (e.g. in
our testcases) where it is obvious that we can't enforce user/group on
any file, so skip this code altogether instead of bugging users with
an error message – which we also switch to a warning as a failure to
open the file is "just" a warning, so the 'wrong' owner shouldn't be
that much of an issue.
The file is still handled with chmod, so all the security we can enforce
is still enforced of course, which also gets a warning if it fails.
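
A minimal sketch of the behaviour described above, using plain POSIX calls
and a hypothetical log path rather than apt's error infrastructure:
ownership is only touched when running as root, failures become warnings
that carry the errno text, and chmod is attempted either way.

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <sys/stat.h>
    #include <unistd.h>

    static void SetupLogFile(char const * const path)
    {
       if (geteuid() == 0)      // non-root can't enforce ownership, so skip it
       {
          if (chown(path, 0, 0) != 0)    // owner 0:0 is purely illustrative
             fprintf(stderr, "W: chown of %s failed: %s\n", path, strerror(errno));
       }
       if (chmod(path, 0640) != 0)       // permissions are still enforced
          fprintf(stderr, "W: chmod of %s failed: %s\n", path, strerror(errno));
    }

    int main()
    {
       SetupLogFile("/tmp/apt-example.log");   // hypothetical path
       return 0;
    }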
The constructors of our (clear)sign-acquire-items move a pre-existent
file for error-recovery away, which gets restored or discarded later as
the acquire progresses, but --print-uris never really starts the
acquire process, so the files aren't restored (as they should be).
To fix this both get a destructor which checks for signs of acquire
doing anything and if it hasn't the file is restored.
Note that these virtual destructors theoretically break the API, but
only with classes extending the sign-acquire-items and nobody does this,
as it would be insane for library users to fiddle with Acquire
internals – and these classes are internals.
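
A minimal sketch of the idea (the class and its members are illustrative,
not the real acquire items): the constructor moves an existing file aside
for error recovery; if the acquire run never progressed, as with
--print-uris, the destructor moves it back.

    #include <cstdio>
    #include <string>

    struct SigFetchItem
    {
       std::string Final;     // e.g. the pre-existing Release.gpg location
       std::string Backup;    // where the constructor moved it to
       bool Started;          // set once the acquire actually does any work

       SigFetchItem(std::string final, std::string backup)
          : Final(final), Backup(backup), Started(false)
       {
          std::rename(Final.c_str(), Backup.c_str());     // move aside
       }

       ~SigFetchItem()
       {
          // nothing happened (e.g. --print-uris): restore the moved file
          if (Started == false)
             std::rename(Backup.c_str(), Final.c_str());
       }
    };

    int main()
    {
       SigFetchItem item("Release.gpg", "Release.gpg.reverify");   // illustrative names
       return 0;
    }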
For many commands the output isn't stable (like when dpkg is called) but
the exit code is, so this helper enhances the common && msgpass ||
msgfail by automatically generating a msgtest and showing the output of
the command in case of failure instead of discarding it unconditionally,
the latter being chronic-like behaviour.
Signing files with expired keys is not as easy as it sounds, so the
framework jumps through a few hoops to do it, but it might come in handy to
have an expired key around for later tests, even if it is not that different
from having no key in regard to APT behaviour.
The handwritten parsing here was mostly done as we couldn't trust the
Release file we got, but nowadays we are sure that the Release file is
valid and contains just a single section we want it to include.
Besides reducing code it also fixes a bug: field names in deb822-formatted
files are case-insensitive and pkgTagFile handles this correctly, but this
self-built stuff here didn't.
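
A minimal sketch of the replacement approach, assuming apt's
pkgTagFile/pkgTagSection interfaces and an illustrative file path: the
Release file is handed to the tag parser and fields are looked up through
it, which already treats field names case-insensitively.

    #include <apt-pkg/fileutl.h>
    #include <apt-pkg/tagfile.h>
    #include <iostream>

    int main()
    {
       FileFd Rel("Release", FileFd::ReadOnly);   // illustrative path
       if (Rel.IsOpen() == false)
          return 1;
       pkgTagFile TagFile(&Rel);
       pkgTagSection Section;
       if (TagFile.Step(Section) == true)
          // "Suite", "suite" and "SUITE" all find the same field
          std::cout << Section.FindS("Suite") << std::endl;
       return 0;
    }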
The file we read will always be a Release file as the clearsign is
stripped earlier in this method, so this check is just wasting CPU.
It also removes the risk that this could ever be part of a valid
section, even if I can't imagine how that should be valid.
We start our quest by using the version of a package applying to a
specific pin, but that version could very well be below the current
version, which causes APT to suggest a downgrade even if it is
advertised that it never does this below 1000.
It's of course questionable what use a specific pin has on a package
which already has a newer version installed, but reacting with the
suggestion of a downgrade is really not appropriate (even if it's kinda
likely that this is actually the intent the user has – it could just as
well be an outdated pin), and as pinning is complicated enough we should
at least do what is described in the manpage.
So we look out for the specific pin and if we haven't seen it at the
moment we see the installed version, we ignore the specific pin.
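
A minimal sketch of the rule described above, with made-up types rather than
apt's pkgPolicy internals: versions are walked from newest to oldest, and
the package-specific pin is only honoured if its version shows up before (or
at) the installed one; otherwise it would point below the installed version
and imply a downgrade.

    #include <vector>

    struct Ver
    {
       bool Installed;          // is this the currently installed version?
       bool MatchesSpecificPin; // does the specific pin select this version?
    };

    // Versions ordered newest first; true if the specific pin may be used.
    static bool SpecificPinApplies(std::vector<Ver> const &Versions)
    {
       for (Ver const &V : Versions)
       {
          if (V.MatchesSpecificPin)
             return true;    // pinned version is the installed one or newer
          if (V.Installed)
             return false;   // installed seen first: the pin would downgrade
       }
       return false;
    }

    int main()
    {
       // installed version is newer than the pinned one -> pin is ignored
       std::vector<Ver> versions = { {true, false}, {false, true} };
       return SpecificPinApplies(versions) ? 1 : 0;
    }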