Add support for our GTest-based unit tests. By default, CMake will
look in /usr/src/gtest for the external GTest project, but this can
be overridden by defining GTEST_ROOT when invoking cmake.
CMake: Translations: Avoid rebuilding .mo if .pot did not change
Use the witness/byproducts approach to build the translations. A
byproduct of a command is like an output, but may be older than the
input.
Here, we generate a normal template with headers in the usual way
as a witness (and for Launchpad translations), but we also generate
a .pot-tmp0 template file without a header that gets copied to a
.pot-tmp byproduct only if it changed. This way, the .pot-tmp is
only updated if an actual translatable string changed. We also
create a custom target for the .pot file that we'll depend on
later in the overall target creating the .mo files, to ensure that
the template is built before we try to build the .mo files.
Then we make msgmerge depend on the .pot-tmp instead of the .pot
file, which means that msgmerge and msgfmt only get re-run if a
string change occurred.
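As a rough sketch of the idea (file names and flags here are
illustrative, not the exact CMake rules), the chain of commands
behaves like this:

    # headerless template, so header churn alone never dirties the chain
    xgettext --omit-header -o apt.pot-tmp0 apt-pkg/*.cc
    # byproduct: only gets a new timestamp if the strings actually changed
    cmake -E copy_if_different apt.pot-tmp0 apt.pot-tmp
    # msgmerge/msgfmt depend on apt.pot-tmp, so they only re-run on real changes
    msgmerge --update po/de.po apt.pot-tmp
    msgfmt -o de.mo po/de.po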
CMake: Translations: Build apt-all.pot and update .po files
Merge all the per-domain templates into one template file using
msgcomm, stripping any line numbers in the input files, and sorting
the output per file.
This should create reasonably stable .pot and .po files that do not
change just because files move around. It should also be resilient
against some line changes, as long as one translated line is not
moved before/after another translated line.
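A hedged approximation of what the added rules run (domain and file
names are illustrative):

    # per-domain templates come without file/line references
    xgettext --no-location -o po/apt.pot apt-private/*.cc cmdline/*.cc
    # merge every domain template into one, keep all messages, sort per file
    msgcomm --more-than=0 --sort-by-file -o po/apt-all.pot po/apt.pot po/libapt-pkg.pot
    # refresh the checked-in .po files against the merged template
    msgmerge --update po/de.po po/apt-all.pot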
CMake: Translations: Add support for shell scripts
Rework the arguments to apt_add_translation_domain so a user
can specify TARGETS and SCRIPTS, the latter being Shell scripts.
For each language (TARGETS being C++, SCRIPTS being Shell), a separate
template is generated via xgettext. Those templates are then merged
together by using msgcomm. In case there are no Shell scripts in
the translation domain, msgcomm will receive /dev/null instead of
a shell translation template.
This also reintroduces line numbers, as msgcomm would otherwise
re-order the merged files not only by filename, but also by message
string. It's unclear why it does that; it could just leave strings
within a file alone.
In contrast to the old build system, we use xgettext for shell scripts
instead of bash --dump-strings, as it's just easier to use the same
tool for everything. We also create valid headers.
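Per domain the template generation now roughly looks like this
(paths and the domain name are placeholders):

    # one template per source language of the domain
    xgettext --language=C++ -o domain.cc.pot apt-pkg/*.cc
    xgettext --language=Shell -o domain.sh.pot cmdline/*.sh
    # merge both; a domain without shell scripts passes /dev/null instead
    # of domain.sh.pot so msgcomm always sees two inputs
    msgcomm --more-than=0 -o domain.pot domain.cc.pot domain.sh.pot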
First of all, instead of creating the files at configure time,
generate the files using a normal target. This has the huge advantage
that they are rebuilt if their input changes. While we are at it,
also add dependencies on the vendor entity files.
This also fixes the path to the vendor script, which was previously
given as a relative path, which obviously won't work when running
from inside a deeper subdirectory.
To speed things up, pass the --vendor option to getinfo, so we
do not have to find out the current vendor in getinfo all over
again.
CMake: Cache CURRENT_VENDOR and make it configurable
Cache the current vendor, so we do not have to rerun getinfo when
reconfiguring stuff. This also has the nice effect of making the
vendor configurable, so you can manually specify it on a platform
that might not have dpkg (not that building without dpkg works
yet).
This can be used to query a field for a specific vendor. It
also speeds things up a lot if we can cache the current vendor
in cmake and pass it to further getinfo invocations.
CMake: debian: Switch packaging over to CMake and dh 9
This new packaging is much easier to read, although the duplication
in the install files is a bit annoying. We should probably also get
rid of the movefiles for solvers, planners, and the https method; but
then we have to keep track of which methods exist in the apt package.
Another disadvantage is that building only the documentation packages
also requires building the code, as there's no way to turn off code
building in the project.
This early support seems a bit hacky, but it's a hard switch: The
integration tests do not understand the old build system anymore
afterwards. I don't really like that.
CMake: Add initial support for documentation building
Build HTML docbook guides (untranslated) and manual pages
(including translations). Also install the examples in the
example subdirectory.
Translation of docbook guides has not been implemented yet,
but should be easy to do. The code also needs some cleanup
to automatically detect the available translations.
CMake: Add support for building and installing .mo files
Introduce support for building translation domain-specific
templates, merging them with the translations, and building
a language-specific .mo file.
The invocation of xgettext is done in the project source
directory, not in the current source directory, and all paths
are made relative to the project root, in order to have clean
templates.
For now this only supports the C++ source code; it unfortunately
does not handle the shell scripts of dselect yet.
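Illustratively (exact arguments are assumptions), the invocation runs
from the top-level source directory so the reference comments stay
relative to the project root:

    cd "$PROJECT_SOURCE_DIR"            # not the current subdirectory
    xgettext --add-comments --from-code=utf-8 \
        -o "$PROJECT_BINARY_DIR/po/apt.pot" apt-private/*.cc cmdline/*.cc
    # references now look like '#: cmdline/apt-get.cc:…', relative to the root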
prepare-release: Also search for libraries in CMake locations
With this change, the 'library' command looks for a library libX
in the directories: build/bin, */X, X.
This allows it to find the library when building with the
upcoming CMake backend, which places the libraries in a
subdirectory of the build tree named like the corresponding
source directory.
For example, if building in 'build/', the apt-pkg library
will be available at 'build/apt-pkg/libapt-pkg.so.5.0'.
In case there are multiple instances of a library,
the newest one will be used.
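A hedged sketch of that lookup (the real prepare-release code differs
in details):

    # look for lib<name>.so.* in the candidate directories, pick the newest
    library() {
        local name="$1" found="" cand dir
        for dir in build/bin */"$name" "$name"; do
            for cand in "$dir"/lib"$name".so.*; do
                [ -e "$cand" ] || continue
                if [ -z "$found" ] || [ "$cand" -nt "$found" ]; then
                    found="$cand"
                fi
            done
        done
        echo "$found"
    }
    library apt-pkg    # e.g. build/apt-pkg/libapt-pkg.so.5.0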
vendor/getinfo: Provide command to determine vendor to use
Introduce the 'current' command to eventually replace the current
symbolic link. The 'current' command does roughly the same as the
makefile did; the code has just been cleaned up a bit to work better
as a shell function.
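The detection boils down to something like the following sketch; the
real getinfo logic may differ and the directory layout check is an
assumption:

    current_vendor() {
        # ask dpkg which vendor we build for, falling back to its parent
        for v in "$(dpkg-vendor --query Vendor)" "$(dpkg-vendor --query Parent 2>/dev/null)"; do
            v="$(printf '%s' "$v" | tr '[:upper:]' '[:lower:]')"
            # assumed check: do we ship vendor information for it?
            [ -n "$v" ] && [ -d "./$v" ] && { printf '%s\n' "$v"; return 0; }
        done
        echo debian   # last resort
    }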
Commit b559d4846018c3adac362c6f1d0d697956586208 updated the
documentation to refer to apt.systemd.daily instead of the
apt cron job, introducing fuzzy strings in all the translations.
apt-key: ignore any error produced by gpgconf --kill
gpgconf wasn't always equipped with a --kill option, as highlighted by
our testcases failing on Travis and co., which use a much older version
of gpg2. As this is just for cleaning up slightly faster, we ignore any
error such a call might produce and carry on. Use a recent enough gpg2
version if you need the immediate killing…
apt-key has (usually) no secret key material so it doesn't really need
the agent at all, but newer gpgs insist on starting it anyhow. The
agents die off rather quickly after the underlying home-directory is
cleaned up, but that is still not fast enough for tools like sbuild
which want to unmount but can't as the agent is still hanging onto a
non-existent homedir.
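The cleanup call thus becomes fire-and-forget, roughly (GPGHOMEDIR
being the temporary home directory apt-key set up):

    # older gpg2 has no 'gpgconf --kill'; any failure here is harmless as
    # the agent dies on its own once the temporary home is gone
    gpgconf --homedir "$GPGHOMEDIR" --kill gpg-agent >/dev/null 2>&1 || true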
edsp: try to read responses even if writing failed
If a solver/planner exits before apt is done writing, we will generate
write errors. Solvers like 'dump' can be pretty quick in failing, but
produce a valid EDSP error report apt should read, parse and display
instead of just discarding even though we had write errors.
if the FileFd failed already following calls should fail, too
There is no point in trying to perform Write/Read on a FileFd which
already failed as they aren't going to work as expected, so we should
make sure that they fail early on and hard.
Simulations are frequently run by unprivileged users who naturally
don't have the permissions to write to the default location for the
eipp file. Either way is bad, as running in simulation mode doesn't
mean we don't want the logging (EIPP runs the same regardless of a
simulated or 'real' run), but showing the warnings is relatively
pointless in the default setup. So, in case we would produce errors
and perform a simulation, we discard the warnings and carry on.
Running apt with an external planner wouldn't have generated these
messages btw.
If another file in the transaction fails and hence dooms the transaction,
we can end up in a situation in which a -patched file (= rred writes the
result of the patching to it) remains in the partial/ directory.
The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an empty
file as intended it will overwrite the content previously in the file.
That has the same result if the new content happens to be longer than
the old content, but if it isn't, parts of the old content remain in the
file. Such a file will still pass verification, as the new content
written to it matches the hashes, and if the entire transaction passes,
the file is moved to the lists/ directory where it might or might not
trigger errors depending on whether the old content which remained forms
a valid file together with the new content.
This has no real security implications as no untrusted data is involved:
The old content consists of a base file which passed verification and a
bunch of patches which all passed multiple verifications as well, so the
old content isn't controllable by an attacker and the new one isn't
either (as the new content alone passes verification). So the best an
attacker can do is letting the user run into the same issue as in the
report.
The rewrite in 742f67eaede80d2f9b3631d8697ebd63b8f95427 is based on the
assumption that the pipeline will always be at least one item short each
time it is called, but the logs in #832113 suggest that this isn't
always the case. I fail to see how at the moment, but the old
implementation had this behavior, so restoring it can't really hurt, can
it?
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we had
expected it to be.
Note that this isn't really a security improvement as a) the file we
patch is trusted and b) if the input is incorrect, the result will hardly
match, so this is just for failing slightly earlier with a more
relevant error message (although, in terms of rred, the error is ignored
and a complete download is attempted instead).
The flush call is a no-op in most FileFd implementations so this isn't
as critical as it might sound as the only non-trivial implementation is
in the buffered writer, which tends not to be used to buffer another
buffer…
APT doesn't know which packages will be triggered in the course of
actions, so it can't plan to see them for progress beforehand, but if it
sees that dpkg says a package was triggered we can add additional
states. This is pretty much magic – after all it sets back the progress
– and there are cornercases in which this will result in incorrect
totals (packages in partial states may or may not lose trigger states),
but the worst that can happen is that the progress is slightly
incorrect and doesn't reach 100%, but so be it. Better than being stuck
at 100% for a while because apt isn't realizing that a bunch of triggers
still need to be processed.
The progress reporting for a package scheduled for purging only included
the states dpkg passes through while actually purging the package – if
the package was fully installed before, dpkg will first pass through all
remove states before purging it, so in the interest of consistent
reporting our progress reporting should do that, too.
create non-existent files in edit-sources with 644 instead of 640
If the sources file we want to edit doesn't exist yet, GetLock will
create it with 640, which for a generic lockfile might be okay, but as
this is a sources file more relaxed permissions are in order – and
actually required, as it won't be readable for unprivileged users,
causing warnings/errors in apt calls.
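The effect is just the mode of the newly created file; as a plain-shell
illustration (not the actual GetLock code):

    ( umask 027; touch /tmp/example-640.list )   # -rw-r----- : hidden from 'other'
    ( umask 022; touch /tmp/example-644.list )   # -rw-r--r-- : readable in unprivileged apt calls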
report warnings&errors consistently in edit-sources
After editing the sources it is a good idea to (re)build the caches as
they will be out-of-date, and doing so also helps in reporting
higher-level errors like duplicated sources.list entries, instead of
just the general parsing errors as before.
The test changes the sources.list, and the modification time of this
file is considered while figuring out if the cache is still valid.
Usually this isn't an issue, but in this case the cache generation
produces warnings, which would then appear twice.
clean up default-stanzas from extended_states on write
The existing cleanup was happening only for packages which had a status
change (install -> uninstalled), which is the most frequent but not the
only case – you can e.g. set autobits explicitly with apt-mark.
That would leave stanzas in the states file declaring a package to be
manually installed – which is the default value for a package not listed
at all – so we can just as well drop such stanzas from the file.
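For reference, a stanza that only restates the default – and is now
dropped on write – looks roughly like this (values illustrative):

    Package: foo
    Architecture: amd64
    Auto-Installed: 0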
We have supported installing ./foo.deb (and ./foo.dsc for source) for a
while now, but it can be a bit clunky to work with those directly if you
e.g. build packages locally in a 'central' build-area.
The changes files also include hashsums and can be signed, so this can
also be considered an enhancement in terms of security, as a user "just"
has to verify the signature on the changes file rather than checking
all deb files individually in these manual installation procedures.
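Usage would then presumably look like this (file name is just an
example):

    # hand the whole (possibly signed) upload to apt instead of each .deb
    apt-get install ./foo_1.0-1_amd64.changes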
allow arch=all to override No-Support-for-Architecture-all
If a user explicitly requests the download of arch:all, apt shouldn't
get in the way and perform its detection dance of whether arch:all
packages are (also) in arch:any files or not.
This e.g. allows setting arch=all on a source with such a field (or one
which doesn't support all at all, but has the arch:all files like Debian
itself ATM) to get only the arch:all packages from there instead of
behaving like a no-op.
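For example, a source restricted to the arch:all index could be written
as (mirror and file name illustrative):

    echo 'deb [arch=all] http://deb.debian.org/debian unstable main' \
        > /etc/apt/sources.list.d/all-only.list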
don't hardcode /var/lib/dpkg/status as dir::state::status
Theoretically it should be enough to change the Dir setting and have apt
pick the dpkg/status file from that. Also, it should be consistently
affected by RootDir. Neither was really the case though, so a user had
to explicitly set it too (or ignore it and have, or not have, the
expected side effects caused by it).
This commit tries to make a better guess at the location of the
dpkg/status file by setting dir::state::status to a naive
"../dpkg/status", except that this setting would be interpreted as
relative to the CWD and not relative to the dir::state directory. Also,
the status file isn't really relative to the state files apt has in
/var/lib/apt/, as evident if we consider that apt/ could be a symlink to
someplace else and "../dpkg" would not be affected by it, so what we do
here is an explicit replace of apt/ – similar to how we create
directories if it ends in apt/ – with dpkg/.
As this is a change it has the potential to cause regressions insofar
as the dpkg/status file of the "host" system is no longer used if you
set a "chroot" system via the Dir setting – but that tends to be
intended and caused people to painfully figure out that they had to set
this explicitly before, so that it now works more in line with how the
other Dir settings work (aka "as expected"). If using the host status
file is really intended, it is in fact easier to set this explicitly
compared to setting the new "magic" location explicitly.
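In practice this means (paths illustrative):

    # new behaviour: the status file is guessed from Dir
    apt-get -o Dir=/srv/chroot update
    # old behaviour, if really wanted, is one explicit option away
    apt-get -o Dir=/srv/chroot -o Dir::State::status=/var/lib/dpkg/status update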
Very unlikely, but if the parent is /dev/null, the child empty and the
grandchild has a value, we returned /dev/null/value, which doesn't exist
– so hardly a problem, but for best operability we should be consistent
in our work and always return /dev/null.
tests: activate dpkg multi-arch even if test is single arch
Most tests are either multiarch, do not care for the specific
architecture or do not interact with dpkg, so the only test really
affected by this is test-external-installation-planner-protocol. But it
is a general issue that while APT can be told to treat any architecture
as native, dpkg has the native architecture hardcoded, so if we run
tests we must make sure that dpkg knows about the architecture we will
treat as "native" in apt, as otherwise dpkg will refuse to install
packages of such an architecture.
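What the framework does boils down to something like this sketch
(variable names and the exact invocation are placeholders):

    # tell dpkg in the test's root about the architecture apt will treat as
    # native, otherwise dpkg refuses to install packages of that architecture
    dpkg --root="$TESTROOT" --add-architecture "$NATIVE_ARCH"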
keep trying with next if connection to a SRV host failed
Instead of only trying the first host we get via SRV, we try them all,
as we are supposed to, and if that isn't working we try to connect to
the host itself as if we hadn't seen any SRV records. This was already
the intent of the old code, but it failed to hide earlier problems from
the next call, which would then unconditionally fail, resulting in an
all-around failure to connect. With proper stacking we can also keep
the error messages of each call around (and in the order tried), so if
the entire connection fails we can report all the things we have tried,
while we discard the entire stack if something works out in the end.
report all instead of first error up the acquire chain
If we don't give a specific error to report up, it is likely that all
errors currently in the error stack are equally important, so reporting
just one could turn out to be confusing, e.g. if name resolution failed
in a SRV record list.
don't change owner/perms/times through file:// symlinks
If we have files in partial/ from a previous invocation or similar,
those could be symlinks created by file:// sources. The code is
expecting only real files though, and happily changes owner,
modification times and permissions on the file the symlink points to,
which tend to be files we have no business in touching in this way.
Permissions of symlinks shouldn't be changed, and changing the owner is
usually pointless too, so just to be sure we pick the easy way out: use
lchown and check for symlinks before chmod/utimes.
tests: disable EIPP logging in test-compressed-indexes
The test makes heavy use of disabling compression types which are
usually available in some way or another, like xz, which is how the
EIPP logs are compressed by default. Instead of changing this test to
adjust the filename according to the compression we want to test, we
just disable EIPP logging for this test as that is easier and has the
same practical effect.
give a descriptive error for pipe tries with 'false'
If libapt has builtin support for a compression type it will create a
dummy compressor struct with the Binary set to 'false', as it will catch
these before using the generic pipe implementation which uses the
Binary. The catching happens based on configured Names though, so you
can actually force apt to use the external binaries even if it would
usually use the builtin support. That logic fails though if you don't
happen to have these external binaries installed, as it will fall back
to calling 'false', which will end in confusing 'Write error's.
So, this is again something you only encounter in constructed testing.
don't add default compressors two times if disabled
This is insofar pointless as the first match will deal with the
extension, so we don't actually ever use these second instances –
probably for the better, as most need arguments to behave as expected
and, more importantly: the point of the exercise is disabling their use
for testing purposes.
use the right key for compressor configuration dump
The generated dump output is incorrect insofar as it uses the name as
the key for this compressor, but they don't need to be equal, as is the
case if you force some of the inbuilt ones to be disabled like our
testing framework does at times.
This is hidden from the changelog as nobody will actually notice, while
describing it in a few words makes it sound like an important change…
It can be handy to set apt options for the testcases which shouldn't be
accidentally committed, like external planner testing or workarounds
for local setups.
use +0000 instead of UTC by default as timezone in output
All apt versions support numeric as well as 3-character timezones just
fine and it's actually hard to write code which doesn't "accidentally"
accept them. So why change? Documenting the Date/Valid-Until fields in
the Release file is easy to do in terms of referencing the
datetime format used e.g. in the Debian changelogs (policy §4.4). This
format specifies only the numeric timezones though, not the nowadays
obsolete 3-character ones, so in the interest of least surprise we
should use the same format, even though it carries a small risk of
regression in other clients (which encounter repositories created with
apt-ftparchive).
In case it is really regressing in practice, the hidden option
-o APT::FTPArchive::Release::NumericTimezone=0
can be used to go back to good old UTC as timezone.
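Concretely, the Date/Valid-Until fields switch from the 3-character to
the numeric spelling of the same instant; generating and opting out
look like this (illustrative):

    date -u '+%a, %d %b %Y %H:%M:%S +0000'   # new default spelling
    date -u '+%a, %d %b %Y %H:%M:%S UTC'     # old spelling, still accepted on input
    # revert apt-ftparchive to the old output if a consumer regresses
    apt-ftparchive -o APT::FTPArchive::Release::NumericTimezone=0 release . > Release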
The EDSP and EIPP protocols use this 'new' format; the text interface
used to communicate with the acquire methods does not, for compatibility
reasons, even if none of our methods would be affected and I doubt any
other would (in these instances the timezone is 'GMT', as that is what
HTTP/1.1 requires). Note that this is only true for apt talking to
methods; (libapt-based) methods talking to apt will respond with the
'new' format. It is therefore strongly advised to support both also in
method input.
Debian hasn't been using 'update' for years and the command is in
direct conflict with our goal of not requiring gnupg anymore, so it
is high time to officially declare this command deprecated.
warn if apt-key is used in scripts/its output parsed
apt-key needs gnupg for most of its operations, but depending on it
isn't very efficient as apt-key is hardly used by users – and scripts
shouldn't use it to begin with as it is just a silly wrapper. To draw
more attention to the fact that e.g. 'apt-key add' should not be used in
favor of "just" dropping a keyring file into the trusted.gpg.d
directory, this commit implements the display of warnings.
There is no real point in having two commands which roughly do the same
thing, especially if the difference is just in the display of the
fingerprint and hence security sensitive information.
As the volatile sources are parsed last, they were sorted behind the
dpkg/status file and hence treated as a downgrade, which isn't
really what you want to happen, as from a user's POV it's an upgrade.
If we have a (e.g. locally built) deb file installed and try to
install it again, apt complained about this being a downgrade, but it
wasn't, as it is the very same version… it was just confused into not
merging the versions together, which then looks like a downgrade.
The same size assumption is usually good, but given that volatile files
are parsed last (even after the status file), the base assumption no
longer holds; it is however easy to adapt without actually changing
anything in practice.
protect only the latest same-source providers from autoremove
Traditionally all providers of something are protected, as apt
can't know which of them is actually providing the functionality
for the user; this ensures that we don't propose the removal of stuff
in use, but it of course also keeps stuff around which could be removed.
That can cause the collection of multiple old providers until the
provided package is itself no longer needed (e.g. out-of-tree kernel
modules). We combat this by marking providers only from the newest
source package version so that old providers built by older versions of
the same source package can be garbage collected.
more explicit MarkRequired algorithm code (part 2)
As with the previous commit, this shouldn't change behavior at all, but
besides being more explicit and perhaps faster it's also considerably
shorter (granted, mostly by if0-block elimination).
Piling everything into a single if statement always made my head wobble,
but it doesn't even have a benefit, as the most common case of a package
which isn't installed passes all of the old if and lands in the
non-existent else-part of the inner if. So besides being a subjective
cleanup of what goes on, this implementation should also be a bit faster.
write auto-bits before calling dpkg & again after if needed
Writing first means that even in the event of a power failure the
autobit is saved for future processing instead of being "forgotten",
which would cause the package to be treated as manually installed.
In some cases we have to re-run the writing after dpkg is done though,
as dpkg can let packages disappear, and in such cases apt will move
autobits around (or in that case non-autobits) which we need to store.