test: Always install dpkg into our tests, regardless of MA
Even if we only configure a single architecture, install dpkg, so
dpkg can assert multi-arch support correctly. This also has the nice
side effect of making single-architecture and multi-architecture
test cases more uniform.
edsp: try 2 to read responses even if writing failed
Commit b60c8a89c281f2bb945d426d2215cbf8f5760738 improved the situation,
but due to an inconsistency it did so mostly for planners, not for
solvers. As the idea of hiding errors because we already show another
error is a bit scary (the external error might be a follow-up to our
internal error, rather than the reason for it as it is at the moment),
we don't discard the errors: if we got an external error we show the
internal ones directly, removing them from the error list shown at the
end of the run – that list will then contain the external error, which
hopefully gives us the best of both worlds.
The problem itself is the same as before: the external solvers/planners
exiting before apt is done talking to them.
Commit 3af3ac2f5ec007badeded46a94be2bd06b9917a2 (released in 1.3~pre1)
implements proper fallback for SRV, but that actually works too well:
the RFC defines that such an SRV record should indicate that the
server doesn't provide this service, and apt should respect that.
The solution is hence to fail again as requested, even if that isn't
what the user (and perhaps even the server admins) wanted. At least we
now print a message explicitly mentioning SRV to point people in the
right direction.
acquire: Use priority queues and a 3 stage pipeline design
Employ a priority queue instead of a normal queue to hold
the items, and only add items to the running pipeline if
their priority is the same as or higher than the priority
of the items in the queue.
The priorities are designed for a 3 stage pipeline system:
In stage 1, all Release files and .diff/Index files are fetched. This
allows us to determine what files remain to be fetched, and thus
ensures usable progress reporting.
In stage 2, all Pdiff patches are fetched, so we can apply them
in parallel with fetching other files in stage 3.
In stage 3, all other files are fetched (complete index files
such as Contents, Packages).
Performance improvements, mainly from fetching the pdiff patches
before complete files, so they can be applied in parallel:
For the 01 Sep 2016 03:35:23 UTC -> 02 Sep 2016 09:25:37 update
of Debian unstable and testing with Contents and appstream for
amd64 and i386, update time reduced from 37 seconds to 24-28
seconds.
Previously, apt would first download new DEP11 icon tarballs
and metadata files, leaving the CPU idle. By fetching the diffs
in stage 2, we can now patch our Contents and Packages files
while we are downloading the DEP11 stuff.
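To illustrate the dispatch order, here is a rough sketch only – the Item
struct and the priority values below are made up and are not apt's actual
acquire classes – showing how a max-heap priority queue pops the stage 1
metadata first, then the pdiff patches, then the complete indexes:

#include <cstdio>
#include <queue>
#include <string>
#include <vector>

// Hypothetical stand-in for an acquire item; higher Priority = earlier stage.
struct Item {
   std::string Name;
   int Priority;
};

struct LowerPriority {
   bool operator()(Item const &a, Item const &b) const {
      return a.Priority < b.Priority; // std::priority_queue pops the largest first
   }
};

int main() {
   std::priority_queue<Item, std::vector<Item>, LowerPriority> pending;
   // Assumed priority values; only their relative order matters here.
   pending.push({"Contents-amd64.gz", 100});     // stage 3: complete index
   pending.push({"Packages.xz", 100});           // stage 3: complete index
   pending.push({"Packages.diff/patch-1", 500}); // stage 2: pdiff patch
   pending.push({"InRelease", 1000});            // stage 1: Release file
   pending.push({"Packages.diff/Index", 900});   // stage 1: .diff/Index

   // Items leave the queue in stage order, so patches arrive (and can be
   // applied) while the big stage 3 downloads are still running.
   while (!pending.empty()) {
      std::printf("fetch %s (priority %d)\n",
                  pending.top().Name.c_str(), pending.top().Priority);
      pending.pop();
   }
}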
CMake: test/libapt: Use a prebuilt GTest library if available
If a non-existent source directory is specified, try finding
the system gtest library. Debian-derived distributions are
a bit strange here, because they only ship the source code
and not the library...
support long keyid and fingerprint in gpgv's GOODSIG
In gpgv1, GOODSIG (and the other status-fd messages) is documented as
sending the long keyid. In gpgv2 it is documented to be either the long
keyid or the fingerprint. At the moment it is still the long keyid, but
the documentation hints at the possibility of changing this.
We care about this for Signed-By support, as this is how we detect
whether the right fingerprint has signed this file (or not). The check
itself is done via VALIDSIG, which is always a fingerprint, but a
GOODSIG must also be found for the signature to be accepted (as expired
sigs are valid, too) – and that GOODSIG wouldn't be found if it carried
the fingerprint, so the signature would be refused.
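For illustration only – the function name and the fingerprint value below
are made up, and for v4 OpenPGP keys the long keyid is the last 16 hex
digits of the fingerprint – accepting both forms boils down to a suffix
comparison against the expected fingerprint:

#include <iostream>
#include <string>

// Accept a GOODSIG that reports either the long keyid (a suffix of the
// fingerprint for v4 keys) or the full fingerprint itself.
static bool IsSameKey(std::string const &reported, std::string const &fingerprint) {
   if (reported.empty() || reported.size() > fingerprint.size())
      return false;
   return fingerprint.compare(fingerprint.size() - reported.size(),
                              reported.size(), reported) == 0;
}

int main() {
   std::string const fpr = "0123456789ABCDEF0123456789ABCDEF01234567"; // made-up value
   std::cout << IsSameKey("89ABCDEF01234567", fpr) << '\n'; // long keyid -> 1
   std::cout << IsSameKey(fpr, fpr) << '\n';                // full fingerprint -> 1
   std::cout << IsSameKey("0000000000000000", fpr) << '\n'; // different key -> 0
}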
try not to call memcpy with length 0 in hash calculations
memcpy is marked as nonnull for its input, but ignores the input anyhow
if the declared length is zero. Our SHA2 implementations already handle
this; it was "just" MD5 and SHA1 missing the check, so we add the length
check there as well as along the call stack, as it is really pointless
to do all these method calls for "nothing".
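A minimal sketch of the added guard (the hashing context below is
illustrative, not apt's actual MD5/SHA1 code):

#include <cstddef>
#include <cstring>

// Illustrative update routine: calling memcpy with a length of 0 is pointless
// and trips the nonnull attribute when the data pointer is null, so bail out
// early instead of descending further down the call stack.
struct HashContext {
   unsigned char Buffer[64];
   size_t Fill = 0;

   bool Add(unsigned char const *Data, size_t Size) {
      if (Data == nullptr || Size == 0) // nothing to hash, avoid memcpy(buf, nullptr, 0)
         return true;
      size_t const Space = sizeof(Buffer) - Fill;
      size_t const ToCopy = Size < Space ? Size : Space;
      std::memcpy(Buffer + Fill, Data, ToCopy);
      Fill += ToCopy;
      return true;
   }
};

int main() {
   HashContext ctx;
   ctx.Add(nullptr, 0);                                        // no-op, no UB
   ctx.Add(reinterpret_cast<unsigned char const *>("abc"), 3); // normal update
}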
gpg annoyingly changed its output and broke our test suite
again by adding two extra lines about key type and issuer.
Really annoying.
Those lines also have more than one space after the colon,
so let's use \s* there - and also change the other lines to
support variable-length whitespace in case gpg decides to
break things there too.
If the inner Base256ToNum() returned false, it did not set Num to a
new value, leaving it uninitialized, and thus might have caused the
function to exit despite a good result.
Also document why the Res = Num, if (Res != Num) magic is done.
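In outline the pattern looks like this (a sketch with made-up helper
names, not the actual apt code): parse into a wider unsigned long long
first, and use the assignment plus comparison to detect whether the value
still fits into the caller's narrower type:

#include <cstdio>

// Stand-in for the inner parser so the example is self-contained; the real
// Base256ToNum decodes a base-256 encoded tar header field.
static bool ParseWide(unsigned long long &Num) {
   Num = 0x1FFFFFFFFULL; // too large for a 32-bit unsigned long (fits a 64-bit one)
   return true;
}

static bool ParseNarrow(unsigned long &Res) {
   unsigned long long Num = 0; // initialize so a failed parse can't leak garbage
   bool const rc = ParseWide(Num);
   // Res = Num truncates to the narrower type; comparing back against Num
   // tells us whether the value survived the conversion unchanged.
   Res = Num;
   if (Res != Num)
      return false;
   return rc;
}

int main() {
   unsigned long Res = 0;
   std::printf("fits: %d (Res=%lu)\n", ParseNarrow(Res), Res);
}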
TagFile: Fix off-by-one errors in comment stripping
Adding 1 to the value of d->End - current makes restLength one byte
too long: the memchr(current, ..., restLength) call we pass it to thus
has undefined behavior.
Also, reading the value at current has undefined behavior if
current >= d->End, not only for current > d->End:
Consider a string of length 1, that is, d->End = d->Current + 1.
We may only read at d->Current + 0; d->Current + 1 is already beyond
the end of the string.
This probably caused several inexplicable build failures on hurd-i386
in the past, and just now caused a build failure on Ubuntu's amd64
builder.
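A sketch of the corrected bounds arithmetic (illustrative names only;
d->Current and d->End are abbreviated to plain pointers here):

#include <cstdio>
#include <cstring>

// `current` points into the buffer, `end` one past its last byte. The number
// of readable bytes is end - current, not end - current + 1, and *current may
// only be read while current < end.
static const char *FindNewline(const char *current, const char *end) {
   if (current >= end) // nothing left to look at
      return nullptr;
   size_t const restLength = static_cast<size_t>(end - current); // no +1
   return static_cast<const char *>(std::memchr(current, '\n', restLength));
}

int main() {
   char const buffer[] = {'#', 'x', '\n'};
   char const *nl = FindNewline(buffer, buffer + sizeof(buffer));
   std::printf("newline at offset %td\n", nl - buffer);
}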
Fix segfault and out-of-bounds read in Binary fields
If a Binary field contained one or more spaces before a comma, the
code produced a segmentation fault, as it accidentally set the pointer
itself to 0 instead of the value it points to.
If the comma is at the beginning of the field, the code would
create a binStartNext that points one element before the start
of the string, which is undefined behavior.
We also need to check that we do not leave the string during the
replacement of spaces before commas: a string of the form " ,"
would otherwise take us outside the boundary of the Buffer.
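A sketch of the safe variant (this copies into a new string rather than
patching the buffer in place like the real code, but it shows the bounds
that have to hold):

#include <cctype>
#include <cstdio>
#include <string>

// Drop the whitespace in front of each comma while making sure the backward
// scan never leaves the string, so inputs like " ," or a leading comma are safe.
static std::string StripSpacesBeforeCommas(std::string const &in) {
   std::string out;
   for (char const c : in) {
      if (c == ',') {
         // remove spaces already copied, but never step before the start
         while (!out.empty() && std::isspace(static_cast<unsigned char>(out.back())))
            out.pop_back();
      }
      out.push_back(c);
   }
   return out;
}

int main() {
   std::printf("[%s]\n", StripSpacesBeforeCommas(" ,foo , bar").c_str());
   // prints [,foo, bar] without ever reading before the start of the input
}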
Introduce a new -qq mode for our integration test framework,
and make travis use it.
The new -qq mode sets MSGLEVEL to 1. With MSGLEVEL=1, no messages
are generated for passed tests, and all testcase filenames are
printed on the same line.
Also install first in travis, do not ls the installed output
and run the install with chronic, so we only get output if it
failed.
don't loop on pinning pkgs from absolute debs by regex
An absolute filename for a *.deb file starts with a /. A package with
the name of the file is inserted into the cache and is provided by the
"real" package for internal reasons. The pinning code detects a regex-
based wildcard by the regex starting with /. That is normally no
problem, as a / can not be included in a package name… except that our
virtual filename packages can and do include one.
We fix this in two ways, actually: First, a pattern is only considered
a regex if it also ends with / (we don't support flags). That already
stops our problem with the virtual filename packages, but to be sure we
also do not enter the loop if matcher and package name are equal.
It has to be noted that creating pins for virtual packages like the
filename packages affected here is pointless, as only versions can be
pinned, but checking that a package is really purely virtual is too
costly compared to just creating an unused pin.
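A sketch of the two guards (the names are illustrative, not the actual
pinning code):

#include <iostream>
#include <string>

// A pattern is only treated as a regular expression if it both starts and
// ends with '/', and even then it is never run against a package whose name
// equals the pattern - which covers the virtual packages apt creates for
// absolute *.deb filenames.
static bool LooksLikeRegex(std::string const &pattern) {
   return pattern.size() >= 2 && pattern.front() == '/' && pattern.back() == '/';
}

static bool MatchByRegex(std::string const &pattern, std::string const &pkgname) {
   return LooksLikeRegex(pattern) && pattern != pkgname;
}

int main() {
   std::cout << MatchByRegex("/gnome.*/", "gnome-shell") << '\n';     // 1: a real regex pin
   std::cout << MatchByRegex("/tmp/foo.deb", "/tmp/foo.deb") << '\n'; // 0: no trailing '/'
   std::cout << MatchByRegex("/some.thing/", "/some.thing/") << '\n'; // 0: equal, skipped
}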
Without randomizing the order in which we download the index files, we
needlessly leak to the mirrors which architecture is native and which
are foreign on this system. More importantly, we leak the order in
which description translations will be used, which in most cases will
e.g. have the native tongue first.
Note that in practice the effect of the leak is limited, as apt detects
if a file it wants to download is already available in the latest
version from a previous download and does not query the server in such
cases. Combined with the fact that Translation files are usually updated
infrequently and not all at the same time, a mirror can never be sure it
got asked about all the files the user wants.
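Conceptually the change boils down to shuffling the list of index targets
before queueing them, roughly like this (a sketch, not apt's actual acquire
code; the target names are made up):

#include <algorithm>
#include <iostream>
#include <random>
#include <string>
#include <vector>

int main() {
   // Illustrative target list; the real one is built from the configured
   // sources, architectures and languages.
   std::vector<std::string> targets = {
      "Packages (amd64)", "Packages (i386)",
      "Translation-en", "Translation-de", "Contents-amd64",
   };

   // Shuffle so the request order no longer reveals which architecture is
   // native or which translation the user prefers.
   std::mt19937 gen{std::random_device{}()};
   std::shuffle(targets.begin(), targets.end(), gen);

   for (auto const &t : targets)
      std::cout << "queue " << t << '\n';
}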
FreeBSD has two iconv systems: it ships an iconv.h itself,
with symbols for it in the libc. But there is also the port
of GNU libiconv, which, unfortunately for us, Doxygen depends
on.
This changes things to prefer a separate libiconv library
over the system one; that is, the port on FreeBSD.
The host system might not have dpkg installed, which makes
dpkg fail with:
dpkg not recorded as installed, cannot check for multi-arch support!
That is entirely useless for us, of course: we want to know whether
dpkg could support multi-arch in our chroot. So we pseudo-install dpkg
into the chroot and pretend its version is one higher than the minimum
dpkg version, so that dpkg --assert-multi-arch works on recent dpkgs.
test: Use printf "%b\n" instead of echo for strings with '\'
Use of echo with special characters is not portable. On a normal
POSIX system, the behavior with backslash-escaped strings is
implementation-defined. On an XSI-conformant system, they must
be interpreted.
A way out is the printf command: printf "%b" specifies that
the following argument is to be printed with backslash escapes
interpreted.
Use /dev/fd in test-bug-712116-dpkg-pre-install-pkgs-hook-multiarch,
skip test-no-fds-leaked-to-maintainer-scripts (it is not guaranteed
that /dev/fd contains all file descriptors), and avoid the unneeded
use of /proc/fd in another test case.
This allows other vendors to use different paths, and lets you
build your own APT in /opt for testing. Note that this uses + 1 in
some places, as the paths we receive are absolute, but we need
to strip off the initial /.
Use C locale instead of C.UTF-8 for protocol strings
The C.UTF-8 locale is not portable, so we need to use C; otherwise
we crash on other systems. We can use std::locale::classic() for
that, which might also be a bit cheaper than using locale("C").
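For illustration, this is the kind of use the change is about (a sketch,
not the actual call site): the classic locale always exists, so imbuing
with it cannot throw the way constructing std::locale("C.UTF-8") can on
systems without that locale:

#include <iostream>
#include <locale>
#include <sstream>

int main() {
   std::ostringstream proto;
   // std::locale::classic() is the "C" locale; it is constructed once and
   // never throws, unlike std::locale("C.UTF-8") on systems lacking C.UTF-8.
   proto.imbue(std::locale::classic());
   proto << 102030; // no locale-dependent digit grouping in protocol strings
   std::cout << proto.str() << '\n'; // prints 102030
}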
This basically just links everything to libintl if USE_NLS is
on. It would be better to just link those targets that are
actually translated, but doing so is a huge PITA.
Also move the include_directories() for the build-tree include/
directory to the top of the CMakeLists.txt, otherwise it only
gets passed after Intl_INCLUDE_DIRS, which means we would build
against installed apt-pkg headers (if any) instead of our own.
The BSD systems still ship their own db.h with a historical
BSD implementation, which is preferred by CMake, as it searches
its default path first. We thus have to disable the DEFAULT_PATH
for the search, unfortunately. We also need to pass the correct
include directory to the target.
Furthermore, on FreeBSD the library is called db-<VERSION>, so
let's add db-5 to the list of allowed names.
Ubuntu uses *.ddeb files for their debug packages, but the interface we
have been using since f495992428a396e0f98886c9a761a804aa161c68 to talk
to dpkg doesn't support *.ddeb files. This used to work previously, as
apt itself doesn't care about the filenames at all, and if they are
explicitly mentioned, dpkg accepts them all, too.
It might or might not be a good idea to patch dpkg, too, but regardless
of whether that happens, we don't want to couple ourselves too closely
to dpkg for this minor feature by testing for it at runtime, as that
would further delay shipping the fix for the too long commandlines.
It is also questionable whether it is really a good idea to allow any
file extension to be used here (like .foobar in the testcase), but we
used to, and we tend to avoid breaking existing usecases if we can help
it.
As a bonus, this also allows the installation of ddeb files directly
from the commandline, as you already can with deb files. We continue to
ignore udebs though, as the user-mistake to usefulness ratio is too
high.
In 105503b4b470c124bc0c271bd8a50e25ecbe9133 we implemented a warning
for unreadable files, which already greatly improves the behavior of
apt update, as everything will work as long as we don't need the keys
included in these files. The behavior when they are needed is still
strange though, as update will fail claiming missing keys while a
manual test (which the user will likely perform as root) will be
successful.
Passing the new warning generated by apt-key through to apt is a bit
strange from an interface point of view, but basically duplicating the
warning code in multiple places doesn't feel right either. That means
we have no translation for the message though, as apt-key has no i18n
yet. It also means that if the user has a bunch of sources, each of
them will generate a warning for each unreadable file, which could
result in quite a few duplicated warnings – but "too many" is better
than none.
apt-key: warn instead of fail on unreadable keyrings
apt-key has inconsistent behaviour if it can't read a keyring file:
commands like 'list' silently skipped over such keyrings, while
'verify' failed hard, resulting in apt reporting confusing gpg errors
(#834973).
As a first step we teach apt-key to be more consistent here, skipping
over unreadable keyrings in all commands but issuing a warning in the
process, which, as usual for apt commands, is displayed at the end of
the run.
do not restore selections for already purged packages
In most cases apt was already skipping the (re)setting of packages as
to be removed/purged if dpkg had told us that it already did so, but we
hadn't dealt with the most obvious of cases: selections set for
packages we touched in this operation. Restoring those either restores
selections even dpkg would have overridden, or e.g. tries to restore a
purge selection for a package which was just purged. This does not
happen with apt itself, as it isn't using selections in this way, but
higher-level frontends like aptitude do.
The result in the latter case is a warning printed by dpkg that we are
trying to set selections for an unknown package, which is harmless per
se, but can be confusing for users, and we really shouldn't cause
warnings in dpkg if we can help it.
The bugreport shows a segfault caused by the code not doing the correct
magical dance to remove an item from inside a queue in all cases. We
could try hard to fix this, but it is actually better, and also easier,
to perform these checks (which cause instant failure) earlier, so that
the items haven't entered any queue yet, which in turn makes cleanup
trivial.
The result is that we actually end up failing "too early": if we
weren't careful, download errors would be logged before the download
process was even started. That is not a problem for the acquire system,
but it is likely to confuse users and programs alike if they see the
download process producing errors before apt was technically allowed
to do an acquire (it didn't, so no violation, but it looks like it to
the untrained eye).
Previously, we would have generated all the translations, but not
turned them on in the code. Instead, move the Translation crap into
po/ and disable po/ altogether if USE_NLS is OFF.
This module should cover all sorts of large file support, as long
as the platform either supports the getconf LFS_CFLAGS command or
the _FILE_OFFSET_BITS=64 or _LARGE_FILES macros.
do dpkg --configure before --remove/--purge --pending
Commit 7ec343309b7bc6001b465c870609b3c570026149 got us most of the way,
but the last mile was botched by having the pending calls in the wrong
order: this way we potentially 'force' dpkg to remove/purge a package
it doesn't want to, as another package still depends on it and the
replacement isn't fully installed yet.
So what we do now is a configure before remove and purge (all with
--no-triggers), finishing off with another configure pending call to
take care of the triggers.
Note that in the bugreport example our current planner is forcing dpkg
to remove the package earlier via --force-depends, which we could do
for the pending calls as well and which could serve as a workaround,
but we want to do less forcing eventually.
CMake: Translations: Build byproduct before output
Otherwise this can lead to an inconsistent state, with the
output being updated but the byproduct not; for example,
when the build was manually interrupted.
debhelper 10 is much nicer about the installation part when
working from a dirty tree, so you can just fix whatever breaks
the install step and then continue building with debuild -b -nc
until you have fixed all your stuff.
It also has some other advantages, of course, like some
bug fixes in shell escaping for maintscript, or the systemd
helper changes.
debian: Make better use of the tree installed by CMake
This gets rid of the special casing of etc/apt, various
example file installations handled by the upstream build
system, and of course the directory creation for all dirs
created by the upstream build system.
tests/control: Handle the gpg1/gpg2 mess a bit better
Hardcoding gpgv1 and gnupg1 breaks Ubuntu, because these packages
do not exist there yet. Instead allow gnupg (<< 2) as an
alternative to gnupg1 and gnupg2 as an alternative to gnupg (>= 2),
so we cover all potential combinations.
prepare-release: Use equivs and gdebi-core for travis deps
Our previous hack did not really support OR groups and other
more advanced dependency types. This hack basically removes
build profiles and the @-type depends for tests, and otherwise
converts the dependencies into a package which is then installed
via gdebi.
Instead of erroring out when receiving a SIGINT, let the
child deal with it - we'll error out anyway if the child
exits with an error or due to the signal. Also ignore
SIGQUIT, as system() ignores it.
This basically fixes Bug #832593, but: we are running
the hooks via sh -c. Some shells exit with a signal
error even if the command they are executing catches
the signal and exits successfully. So far, this has
been noticed on dash, which, unfortunately, is our
default shell.
Example:
$ cat trap.sh
trap 'echo int' INT; sleep 10; exit 0
$ if dash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
FAIL: 130
$ if mksh -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
$ if bash -c ./trap.sh; then echo OK: $?; else echo FAIL: $?; fi
^Cint
OK: 0
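For illustration, "let the child deal with it" looks roughly like the
following around a fork/exec, mirroring what system() does with SIGINT and
SIGQUIT in the parent (a sketch, not apt's actual hook runner):

#include <cstdio>
#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
   // Ignore SIGINT and SIGQUIT in the parent while the child runs, as
   // system() does; the child keeps the default disposition.
   struct sigaction ign {}, oldint {}, oldquit {};
   ign.sa_handler = SIG_IGN;
   sigemptyset(&ign.sa_mask);
   sigaction(SIGINT, &ign, &oldint);
   sigaction(SIGQUIT, &ign, &oldquit);

   pid_t const child = fork();
   if (child == 0) {
      // restore the old dispositions before exec'ing the hook
      sigaction(SIGINT, &oldint, nullptr);
      sigaction(SIGQUIT, &oldquit, nullptr);
      execlp("sh", "sh", "-c", "sleep 1; exit 0", static_cast<char *>(nullptr));
      _exit(127);
   }

   int status = 0;
   waitpid(child, &status, 0);
   // Only the child's exit status (or its death by signal) decides success.
   if (WIFEXITED(status))
      std::printf("child exited with %d\n", WEXITSTATUS(status));

   sigaction(SIGINT, &oldint, nullptr);
   sigaction(SIGQUIT, &oldquit, nullptr);
}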
CMake: Translations: Don't rebuild .mo for line number changes
If only the line numbers in a file changed, without any of the
translatable strings changing, the .po and .mo files were
rebuilt, which made building simple code changes somewhat annoying.
We can work around this by passing --add-location=file to msgcomm
when we are creating the temporary .pot file used for building
the .mo files.