With cmake, using BUILDDIRECTORY at this place is not only as wrong as it
was before, but it might not even work: it always copies the system
provided one, which might or might not be current and hence fails tests
needing it to be current like ./test-apt-move-and-forget-manual-sections.
We don't want to always use the one from the source directory either,
though, e.g. in autopkgtests.
http: auto-configure for local Tor proxy if called as 'tor'
With apt's http transport supporting socks5h proxies and all the work
in terms of configuring methods based on the name they are called with,
it becomes surprisingly easy to implement Tor support matching (and
perhaps even slightly exceeding) what is currently available in
apt-transport-tor.
How this will be handled packaging-wise we will see in
https://lists.debian.org/deity/2016/08/msg00012.html , but until this is
resolved we can add the needed support without actively enabling it for
now, so that it can be tested better.
block direct connections to .onion domains (RFC7686)
Doing a direct connect to an .onion address (if you don't happen to use
it as a local domain, which you shouldn't) is bound to fail, and it
leaks to a DNS server the information that you use Tor and which hidden
service you wanted to connect to. Worse, if the DNS is poisoned it might
actually resolve, tricking a user into believing the setup works
correctly…
This also blocks the usage of wrappers like torsocks with apt, but with
native support available and advertised in the error message this
shouldn't really be an issue.
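Illustratively (not apt's actual code), the guard amounts to a suffix
check on the hostname before any resolution is attempted, failing with a
hint towards the tor-aware method:

    #include <string>

    // Sketch only: refuse direct connections to .onion hosts.
    static bool IsOnionHost(std::string const &host)
    {
       std::string const suffix = ".onion";
       return host.size() >= suffix.size() &&
          host.compare(host.size() - suffix.size(), suffix.size(), suffix) == 0;
    }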
disable explicit configuration of all packages at the end
With b4450f1dd6bca537e60406b2383ab154a3e1485f we dropped what we
calculated here later on, and now that we don't need it in the meantime
either we can just skip the busy work by default and expect dpkg to do
the right thing, dropping also our little "last explicit configures"
removal trick introduced in that commit.
This enables the last of a bunch of previously experimental options;
some of them still exist, but they are very special and hence not really
worth documenting anymore (especially as the documentation would need to
be rewritten entirely now), which is why it is nearly completely
dropped.
The order of configuration stanzas in the simulation code changes
slightly as it no longer concerns itself with finding the 'right' order;
any order is valid anyhow as long as the entire set happens in the
same call.
If a planner leaves actions to be figured out by dpkg via pending calls,
these actions aren't mentioned in a simulation. While that might be
a good thing for debugging, it would be a change in behavior and,
especially if a planner avoids explicit removals, could be confusing for
users. As such we perform the same 'trick' as in the dpkg implementation
by performing explicitly what would be done by the pending calls.
To save us some work and avoid desyncs we perform a layer violation by
using deb/ code in the generic simulation – and further we perform an
ugly dynamic_cast to avoid breaking the ABI for nothing; aptitude is the
only other user of the simulation class according to codesearch.d.n, and
for that our little trick works. It just doesn't work if you happen to
extend pkgSimulate or otherwise manage to call the protected Go methods
directly – which isn't very realistic/practical.
The user has to approve the removal of a crossgraded package as it might
be necessary to remove it (temporarily) in the process, but in most cases
we can happily avoid the removal and let dpkg unpack over it, skipping
the remove. This has some effects on progress reporting and on how we
deal with selections, though, which makes this a tiny bit complicated.
allow methods to be disabled and redirected via config
To prevent accidents like adding http sources while using tor+http it
can make sense to allow disabling methods. It might even make sense to
allow "redirections" and the addition of "symlinked" methods via
configuration. This could e.g. allow using different options for certain
sources by adding and configuring a "virtual" new method which picks up
its config based on the name it was called with, like http does if
called as tor+http.
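A tiny sketch of the name-based lookup idea – deriving the name the
method binary was invoked as, so that a configuration scope named after
it could be consulted; the helper name and the implied config layout are
illustrative, not apt's actual code:

    #include <string>

    // e.g. a symlink called 'tor+http' to the http method reports "tor+http"
    static std::string InvokedMethodName(char const *argv0)
    {
       std::string name(argv0);
       auto const slash = name.rfind('/');
       if (slash != std::string::npos)
          name.erase(0, slash + 1);
       return name;
    }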
Socks support is a requested feature insofar as the internet actually
believes Acquire::socks::Proxy would exist. It doesn't, and this commit
isn't adding it as that isn't how our configuration works, but it allows
Acquire::http::Proxy="socks5h://…". The HTTPS method was already changed
to support socks proxies (all versions) via curl. This commit implements
only SOCKS5 (RFC1928) with no auth or user&pass auth (RFC1929), but not
GSSAPI, which is required by the RFC. The 'h' in the protocol name
further indicates that DNS resolution is delegated to the socks proxy
rather than performed locally.
The implementation works and was tested with Tor as socks proxy, for
which implementing only socks5h can actually be considered a feature.
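For reference, a minimal sketch of the two client messages this boils
down to (illustrative code, not apt's implementation): the greeting
offering 'no auth' and optionally user&pass, and the CONNECT request
carrying the hostname itself (ATYP 0x03), which is what the trailing 'h'
in socks5h amounts to:

    #include <cstdint>
    #include <string>
    #include <vector>

    // RFC1928 greeting: VER, NMETHODS, METHODS…
    std::vector<uint8_t> Socks5Greeting(bool offerUserPass)
    {
       std::vector<uint8_t> msg = {0x05, 0x01, 0x00};  // offer NO AUTH
       if (offerUserPass)
       {
          msg[1] = 0x02;
          msg.push_back(0x02);                         // offer USER/PASS (RFC1929)
       }
       return msg;
    }

    // RFC1928 CONNECT with a domain name so the proxy resolves it (socks5h)
    std::vector<uint8_t> Socks5Connect(std::string const &host, uint16_t port)
    {
       std::vector<uint8_t> req = {0x05, 0x01, 0x00, 0x03};
       req.push_back(static_cast<uint8_t>(host.size()));
       req.insert(req.end(), host.begin(), host.end());
       req.push_back(static_cast<uint8_t>(port >> 8));  // port in network byte order
       req.push_back(static_cast<uint8_t>(port & 0xff));
       return req;
    }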
The https method has for a long while implemented a hardcoded fallback
to the same options in http, which, while it works, is rather inflexible
if we want to allow the methods to use another name to change their
behavior slightly, like apt-transport-tor does to https – most of that
diff being s#https#tor#g, which then fails to do the full-circle
fallthrough tor -> https -> http for https sources. With this config
infrastructure this could now be implemented.
use the same redirection handling for http and https
cURL, which backs our https implementation, can handle redirects on its
own, but by dealing with them ourselves we gain finer control over which
redirections will be performed (we don't like https → http) and by whom,
so that redirections to other hosts correctly spawn a new https method
dealing with them instead of letting the current one deal with it.
detect redirection loops in acquire instead of workers
Having the detection handled in specific (http) workers means that a
redirection loop over different hostnames isn't detected. It's also not
a good idea to have this implemented in each method independently, even
if it would work.
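Conceptually the acquire-side check is just bookkeeping per item –
roughly (illustrative names, not apt's classes):

    #include <algorithm>
    #include <string>
    #include <vector>

    struct RedirectTracker
    {
       std::vector<std::string> Visited;
       static constexpr size_t MaxHops = 10;

       // true if we may follow the redirection to NewURI
       bool Follow(std::string const &NewURI)
       {
          if (Visited.size() >= MaxHops)
             return false;                            // too many hops
          if (std::find(Visited.begin(), Visited.end(), NewURI) != Visited.end())
             return false;                            // loop: seen this URI before
          Visited.push_back(NewURI);
          return true;
       }
    };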
suggest transport-packages based on established namescheme
apt-transports not shipped in apt directly are usually named
apt-transport-% with % being what is in the name of the transport.
tor additionally introduced aliases via %+something, which isn't a bad
idea, so we strip the +something part from the method name before
suggesting the installation of an apt-transport-% package.
This saves us the maintenance of a list of existing transports, which
would create a two-class system of known and unknown transports – quite
arbitrary and unfriendly to backports.
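The mapping itself is then trivial – roughly (illustrative helper, not
the actual function name):

    #include <string>

    // "tor+https" and "tor" both end up suggesting apt-transport-tor
    std::string SuggestedTransportPackage(std::string method)
    {
       auto const plus = method.find('+');
       if (plus != std::string::npos)
          method.erase(plus);        // drop the "+something" alias part
       return "apt-transport-" + method;
    }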
ensure all configures are reported to hook scripts
A planner might not explicitly configure all packages, but we need to
know all packages which will be configured for progress reporting and to
tell the hook scripts about them as they rely on this for their own
functionality.
don't purge directly, but remove and do purge at the end
If we want a package to be purged from the system tell dpkg in the
ordering (if it has to touch it explicitly) to remove it and cover the
purging of the config files at the end with a --purge --pending call.
That should help packages which move conffiles around between packages
do so correctly even if the user is purging packages directly in big
actions like dist-upgrades involving many packages.
Implemented a long while ago, and with relatively good progress
reporting involving triggers available now, this is a good time to
finally try delaying the execution of triggers across dpkg invocations
by default.
Note: The bugreport also talks about 'smarter' configuration, which is a
much bigger part and approached from multiple directions, but doesn't
really involve triggers per se, so considering it decoupled should help
in getting it done…
Telling dpkg early on that we are going to remove these packages later
helps it with auto-deconfiguration decisions, and it's another area where
a planner can ignore the nitty-gritty details and let dpkg decide the
course of action if there are no special requirements.
save and restore selection states before/after calling dpkg
dpkg decides certain things on its own based on selections, and
especially if we want to call --pending on purge/remove actions we need
to ensure a clean slate, or we otherwise surprise the user by removing
packages the user didn't allow us to remove in this run (the
selection might be an overarching plan for the not-yet "future").
Ideally dpkg would have some kind of temporary selection interface for
this case, but it hasn't, so we make it temporary ourselves, with the
risk of losing state if we don't manage to restore it.
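The rough shape of the dance, sketched here with plain popen() around
dpkg --get-selections / --set-selections rather than apt's actual
implementation (real code restricts this to the packages it touched and
handles errors):

    #include <cstdio>
    #include <string>
    #include <vector>

    std::vector<std::string> SaveSelections()
    {
       std::vector<std::string> lines;
       if (FILE *p = popen("dpkg --get-selections", "r"))
       {
          char buf[1024];
          while (fgets(buf, sizeof(buf), p) != nullptr)
             lines.emplace_back(buf);
          pclose(p);
       }
       return lines;
    }

    void RestoreSelections(std::vector<std::string> const &lines)
    {
       if (FILE *p = popen("dpkg --set-selections", "w"))
       {
          for (auto const &l : lines)
             fputs(l.c_str(), p);
          pclose(p);
       }
    }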
use dpkg --unpack --recursive to avoid long cmdlines
Having long commandlines split into two is a huge problem if it happens,
and additionally, if we want to introduce planners which perform less
micromanagement, it is a good idea to leave the details for dpkg to
decide. In practice this doesn't work unconditionally yet as a bug is
hiding in the ordering code of dpkg, but it works if apt imposes its
ordering, so for now this commit at least solves the first problem.
pass --force-remove-essential to dpkg only if needed
APT (usually) knows which packages are essential and which are not, so
we can avoid passing this force flag to dpkg unconditionally if the user
hasn't chosen a non-default essential handling which would obscure that
information.
prepare-release: Switch over to CMake, set version in CMakeLists.txt
Teach the prepare-release script about the new locations of the version
and also set the version in CMakeLists.txt, as that is better than
reading it from the changelog: CMake would not rerun automatically
otherwise if the version changed.
CMake: Rewrite existing Documentation support and add doxygen
This can now build all documentation. It should also be fairly
reusable for other projects, as long as they follow the same
naming scheme for the po4a output files and set the PACKAGE_*
variables used here.
We could have done all translations in a single call to po4a
like the makefile based buildsystem does. While that would
have made the output slightly nicer, this solution offers a
huge performance gain because it can translate the documents
in parallel, which also means that the xsltproc stage does not
have to wait for all translations to be done first.
You might think that the add_custom_command() should list the
actual output files as BYPRODUCTS. This is not true however:
Because the files are not always generated, Ninja will think
missing byproducts mean that the target is out of date - which
is not what we want.
Finally, also add the missing doxygen support. Note that the
packaging script cleans up some md5 and map files created by
doxygen, otherwise it is fairly boring.
This was dropped in autotools as I found no use of the HAVE_PTSNAME_R
macro. Turns out it was typoed as HAVE_PTS_NAME_R. Fix the #ifdef and
add checks to CMake for it.
Add support for our GTest based unit tests. By default, CMake will
look in /usr/src/gtest for the external GTest project, but this can
be overridden by defining GTEST_ROOT when invoking cmake.
CMake: Translations: Avoid rebuilding .mo if .pot did not change
Use the witness/byproducts approach to build the translations. A
byproduct of a command is like an output, but may be older than the
input.
Here, we generate a normal template with headers in the normal way
as a witness (and for Launchpad translations), but we also generate
a .pot-tmp0 template file without a header that gets copied to a
.pot-tmp byproduct only if it changed. This way, the .pot-tmp is
only updated if an actual string translation changed. We also
create a custom target for the .pot file that we'll depend on
later in the overall target creating the mo files, to ensure that
the template is built before we try to build the mo files.
Then we make the msgmerge depend on the .pot-tmp instead of the .pot
file, which means that msgmerge and msgfmt only get re-run if a string
change occurred.
CMake: Translations: Build apt-all.pot and update .po files
Merge all the per-domain templates into one template file using
msgcomm, stripping any line numbers in the input files, and sorting
the output per file.
This should create reasonably stable .pot and .po files that do not
change just because files move around. It should also be resilient
against some line changes, as long as one translated line is not
moved before/after another translated line.
CMake: Translations: Add support for shell scripts
Rework the arguments to apt_add_translation_domain so a user
can specify TARGETS and SCRIPTS, the latter being Shell scripts.
For each language (TARGETS being C++, SCRIPTS being Shell), a separate
template is generated via xgettext. Those templates are then merged
together by using msgcomm. In case there are no Shell scripts in
the translation domain, msgcomm will receive /dev/null instead of
a shell translation template.
This also reintroduces line numbers, as msgcomm would otherwise
re-order the merged files not only by filename, but also by message
string. It's unclear why it does that; it could just leave strings
within a file alone.
In contrast to the old build system, we use xgettext for shell scripts
instead of bash --dump-strings, as it's just easier to use the same
tool for everything. We also create valid headers.
First of all, instead of creating the files at configure time, generate
the files using a normal target. This has the huge advantage
that they are rebuilt if their input changes. While we are at it,
also add dependencies on the vendor entity files.
This also fixes the path to the vendor script, which was given
relatively before, which obviously won't work when running from
inside a deeper subdirectory.
To speed things up, pass the --vendor option to getinfo, so we
do not have to find out the current vendor in getinfo all over
again.
CMake: Cache CURRENT_VENDOR and make it configurable
Cache the current vendor, so we do not have to rerun getinfo when
reconfiguring stuff. This also has the nice effect of making the
vendor configurable, so you can manually specify it on a platform
that might not have dpkg (not that building without dpkg works
yet).
This can be used to query a field for a specific vendor. It
also speeds up things a lot if we can cache the current vendor
in cmake and pass it to further getinfo invocations.
If we receive an interrupt, set a flag and do not abort
immediately without waiting for the child. Once the child
exited, exit with an error if the interrupted flag is set.
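The pattern in isolation looks roughly like this (sketch, not the
shipped code):

    #include <cerrno>
    #include <csignal>
    #include <cstdlib>
    #include <sys/types.h>
    #include <sys/wait.h>

    static volatile sig_atomic_t interrupted = 0;
    static void on_int(int) { interrupted = 1; }

    int WaitForChild(pid_t child)
    {
       struct sigaction sa {};
       sa.sa_handler = on_int;          // remember the interrupt, don't abort
       sigaction(SIGINT, &sa, nullptr);

       int status = 0;
       while (waitpid(child, &status, 0) == -1 && errno == EINTR)
          ;                             // keep waiting for the child

       if (interrupted)
          return EXIT_FAILURE;          // report the interruption afterwards
       return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
    }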
CMake: debian: Switch packaging over to CMake and dh 9
This new packaging is much easier to read, although the duplication
in the install files is a bit annoying. We should probably also get
rid of the movefiles for solvers, planners, and https method; but
then we have to keep track of which methods exist in the apt package.
Another disadvantage is that building only the documentation packages
also requires building the code, as there's no way to turn off code
building in the project.
This early support seems a bit hacky, but it's a hard switch: The
integration tests do not understand the old build system anymore
afterwards. I don't really like that.
CMake: Add initial support for documentation building
Build HTML docbook guides (untranslated) and manual pages
(including translations). Also install the examples in the
example subdirectory.
Translation of docbook guides has not been implemented yet,
but should be easy to do. The code also needs some cleanup
to automatically detect the available translations.
CMake: Add support for building and installing .mo files
Introduce support for building translation domain-specific
templates, merging them with the translations, and building
a language-specific .mo file.
The invocation of xgettext is done in the project source
directory, not in the current source directory, and all paths
are made relative to the project root, in order to have clean
templates.
This only supports the C++ source code for now; it unfortunately
does not handle the shell scripts of deselect yet.
prepare-release: Also search for libraries in CMake locations
With this change, the 'library' command looks for a library libX
in the directories: build/bin, */X, X.
This allows it to find the library when building with the
upcoming CMake backend, which places the libraries in a sub
directory of the build tree with the same name as the source
tree.
For example, if building in 'build/', the apt-pkg library
will be available at 'build/apt-pkg/libapt-pkg.so.5.0'.
In case there are multiple instances of a library,
the newest one will be used.
vendor/getinfo: Provide command to determine vendor to use
Introduce the 'current' command to eventually replace the current
symbolic link. The current command does roughly the same as the
makefile, the code has just been cleaned up a bit to work better
as a shell function.
Commit b559d4846018c3adac362c6f1d0d697956586208 updated the
documentation to refer to apt.systemd.daily instead of the
apt cron job, introducing fuzzy strings in all the translations.
apt-key: ignore any error produced by gpgconf --kill
gpgconf wasn't always equipped with a --kill option, as highlighted by
our testcases failing on Travis and co, which use a much older version
of gpg2. As this is just for cleaning up slightly faster, we ignore any
error a call might produce and carry on. Use a recent enough gpg2
version if you need the immediate killing…
apt-key has (usually) no secret key material so it doesn't really need
the agent at all, but newer gpgs insist on starting it anyhow. The
agents die off rather quickly after the underlying home-directory is
cleaned up, but that is still not fast enough for tools like sbuild
which want to unmount but can't as the agent is still hanging onto a
non-existent homedir.
edsp: try to read responses even if writing failed
If a solver/planner exits before apt is done writing we will generate
write errors. Solvers like 'dump' can be pretty quick in failing but
produce a valid EDSP error report which apt should read, parse and
display instead of just discarding, even though we had write errors.
if the FileFd failed already following calls should fail, too
There is no point in trying to perform Write/Read on a FileFd which
already failed as they aren't going to work as expected, so we should
make sure that they fail early on and hard.
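Reduced to a sketch (not FileFd's real interface), the intended behavior
is simply:

    #include <cstddef>

    class SketchFd
    {
       bool Failed_ = false;
    public:
       bool Failed() const { return Failed_; }
       bool Write(void const *, size_t)
       {
          if (Failed_)
             return false;   // an earlier error already doomed this fd
          // … perform the actual write, set Failed_ on error …
          return true;
       }
       bool Read(void *, size_t)
       {
          if (Failed_)
             return false;
          // … perform the actual read, set Failed_ on error …
          return true;
       }
    };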
Simulations are frequently run by unprivileged users which naturally
don't have the permissions to write to the default location for the EIPP
file. Neither failing hard nor spamming warnings is good: running in
simulation mode doesn't mean we don't want the logging (as EIPP runs the
same regardless of simulation or 'real' run), but showing the warnings
is relatively pointless in the default setup. So, in case we would
produce errors and perform a simulation, we discard the warnings and
carry on.
Running apt with an external planner wouldn't have generated these
messages, btw.
If another file in the transaction fails and hence dooms the transaction,
we can end up in a situation in which a -patched file (= rred writes the
result of the patching to it) remains in the partial/ directory.
The next apt call will perform the rred patching again and write its
result again to the -patched file, but instead of starting with an empty
file as intended it will override the content previously in the file.
That has the same result if the new content happens to be longer than
the old content, but if it isn't, parts of the old content remain in the
file. Such a file will still pass verification, as the new content
written to it matches the hashes, and if the entire transaction passes
the file will be moved to the lists/ directory, where it might or might
not trigger errors depending on whether the old content which remained
forms a valid file together with the new content.
This has no real security implications as no untrusted data is involved:
The old content consists of a base file which passed verification and a
bunch of patches which all passed multiple verifications as well, so the
old content isn't controllable by an attacker and the new one isn't
either (as the new content alone passes verification). So the best an
attacker can do is letting the user run into the same issue as in the
report.
The rewrite in 742f67eaede80d2f9b3631d8697ebd63b8f95427 is based on the
assumption that the pipeline will always be at least one item short each
time it is called, but the logs in #832113 suggest that this isn't
always the case. I fail to see how at the moment, but the old
implementation had this behavior, so restoring it can't really hurt, can
it?
We read the entire input file we want to patch anyhow, so we can also
calculate the hash for that file and compare it with what we had
expected it to be.
Note that this isn't really a security improvement as a) the file we
patch is trusted & b) if the input is incorrect the result will hardly
match, so this is just about failing slightly earlier with a more
relevant error message (although, in the case of rred, the error is
ignored and a complete download is attempted instead).
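A standalone sketch of the check, using OpenSSL's one-shot SHA256 in
place of apt's own hash infrastructure:

    #include <openssl/sha.h>
    #include <cstdio>
    #include <string>
    #include <vector>

    static std::string HexSHA256(std::vector<unsigned char> const &data)
    {
       unsigned char md[SHA256_DIGEST_LENGTH];
       SHA256(data.data(), data.size(), md);
       char hex[2 * SHA256_DIGEST_LENGTH + 1];
       for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
          std::snprintf(hex + 2 * i, 3, "%02x", md[i]);
       return std::string(hex, 2 * SHA256_DIGEST_LENGTH);
    }

    // compare the already-read patch input with the hash we expected
    static bool InputMatchesExpectation(std::vector<unsigned char> const &input,
                                        std::string const &expectedHex)
    {
       return HexSHA256(input) == expectedHex;
    }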
The flush call is a no-op in most FileFd implementations, so this isn't
as critical as it might sound, as the only non-trivial implementation is
in the buffered writer, which tends not to be used to buffer another
buffer…
APT doesn't know which packages will be triggered in the course of the
actions, so it can't plan for them in the progress beforehand, but if it
sees that dpkg says a package was triggered we can add additional
states. This is pretty much magic – after all it sets back the progress
– and there are cornercases in which this will result in incorrect
totals (packages in partial states may or may not lose trigger states),
but the worst that can happen is that the progress is slightly
incorrect and doesn't reach 100%, but so be it. Better than being stuck
at 100% for a while because apt doesn't realize that a bunch of triggers
still need to be processed.
The progress reporting for a package scheduled for purging only included
the states dpkg passes through while actually purging the package – if
the package was fully installed before, dpkg will first pass through all
remove states before purging it, so in the interest of consistent
reporting our progress reporting should do that, too.