new quiet level -qq for apt to hide progress output
-q is for logging and -qqq (the old -qq) basically kills every output except
errors, so there should be a way of declaring a middle ground in which
the output of e.g. 'update' isn't as verbose, but still shows some
things. The test framework was actually making use of this by accident,
as it previously ignored the quiet level when setting up output for apt.
Eventually we should figure out some better quiet levels for all tools…
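A rough sketch of how the levels are meant to stack for the apt binary
(illustrative only – the precise output per level may of course differ):

  apt -q   update   # progress output suited for logging
  apt -qq  update   # new: no progress, but still a short summary
  apt -qqq update   # old -qq: (almost) nothing but errors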
That is one huge commit with busy work only: Help messages used to be
one big translatable string, which is a pain for translators and hard
to reuse for us. This change 'explodes' this single string into a new
string for each documented command, trying hard to split up the
translated messages as well. This actually restores many translations, as
previously adding a single command made the whole big message fuzzy.
The split-up also highlighted that it's easy to forget a line, duplicate
one, and make similar mistakes.
disable updating insecure repositories in apt by default
apt is an interactive command, and the main reason we haven't set this
option for everything is to keep compatibility for a little while
longer, allowing scripts to be changed if need be.
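Scripts that want the stricter behaviour before it becomes the default
everywhere can opt in per run; Acquire::AllowInsecureRepositories is the
option name assumed in this sketch:

  # refuse to update from unsigned repositories for this invocation only
  apt-get -o Acquire::AllowInsecureRepositories=false update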
revamp apt(8) to refer more instead of duplicating
As apt is targeted at users, let's try to make apt(8) for users as well
by giving only a quick overview of what is available and some
pointers on how to find a whole lot more details.
centralize 'show' implementation of apt and apt-cache
The show commands have different styles in both binaries as the audience
is potentially very different, but that doesn't mean we need to separate
the implementation, especially as they are rather similar. This also
allows us to switch between the different show versions at runtime via
an option.
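A hedged sketch of what that runtime switch could look like from the
command line, assuming APT::Cache::ShowFull (apt-cache's --full) is the
option doing the toggling:

  apt show hello                                # user-oriented, reduced record
  apt -o APT::Cache::ShowFull=true show hello   # full record, apt-cache style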
Especially with apt now, it can be useful to set an option only for apt
and not for apt-get. Using a binary-specific subtree which is merged into
the root seems like a simple enough trick to achieve this.
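A minimal sketch of such a drop-in, assuming the subtree is named after
the binary; the inner option is just a placeholder:

  # hypothetical apt.conf.d drop-in: the Binary::apt:: options are merged
  # into the root of the configuration only when the running binary is apt
  echo 'Binary::apt::APT::Get::Assume-Yes "false";' \
    > /etc/apt/apt.conf.d/99apt-only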
suggest 'apt autoremove' to get rid of unneeded packages
The bugreport is more conservative in asking for a conditional, but
given that this is a message intended to be read by users and to be run
by users, we should suggest using a command intended to be used by users.
And while we are at it, add sudo to the message – conditionally, of course.
refer to apt-secure(8) in insecure repositories warning
The manpage is also slightly updated to work better as a central hub to
push people from all angles into the right directions without writing a
book disguised as an error message.
rework errors and warnings around insecure repositories
Insecure (aka unsigned) repositories are bad, period. We want to get
rid of them eventually, and as a first step we are printing scary
warnings. This is already done; this commit just changes the messages to
be more consistent and prevents them from being displayed if
authenticity is guaranteed some other way (as indicated with
trusted=yes).
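For completeness, the flag referred to is set per sources.list entry and
should only be used for sources whose authenticity is ensured by other
means (path and suite below are purely illustrative):

  echo 'deb [trusted=yes] file:/srv/local-mirror/debian unstable main' \
    > /etc/apt/sources.list.d/local-mirror.list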
The idea is to first print the pure fact like "repository isn't signed"
as a warning (and later as an error), while giving an explanation in an
immediately following notice (which is displayed only in quiet level 0:
so in interactive use, not in scripts and the like).
set failreasons similar to connect.cc based on curl errors
Detecting network errors has some benefits in the acquire system, as if
we can't connect to a host, trying it for a million files is pointless.
http and co, which use connect.cc, deal with this, but https, which uses
curl, treated connection failures as "normal" errors which could potentially
be worked around (like trying Release instead of the failed InRelease).
The main part is refactoring, though, to allow hiding the magic needed to
support .deb files in deeper layers of libapt so that frontends have
less exposure to Debian specific classes like debDebPkgFileIndex.
unbreak the copy-method claiming hashsum mismatch since ~exp9
Commit 653ef26c70dc9c0e2cbfdd4e79117876bb63e87d broke the camel's back
insofar as everything works in terms of our internal use of copy:/, but
external use is completely destroyed. This is kind of the reverse of what
happened in "parallel" in the sid branch, where external use was mostly
fine, but internal and external use exploded on the GzipIndexes option.
We fix this now by rewriting our internal use by letting copy:/ only do
what the name suggests it does: Copy files and not uncompress them
on-the-fly. Then we teach copy and the uncompressors how to deal with
/dev/null and use it as destination file in case we don't want to store
the uncompressed files on disk.
tests: change test-skipping detection for arch-specific pkgs
dpkg-checkbuilddeps changed its exit codes in the recent past, so the
old check now always fails, skipping the test. Let's try a slightly more
stable (at least we assume it to be) way of detecting this.
drop privileges in file:// method as we do for decompressors
We already drop privileges in the decompressors, which are the natural
next step, so if an archive is used which isn't world-readable (= not
accessible by _apt) it doesn't work anyway; we just fail a bit earlier now
and avoid all the bad things which can happen over file (which could very
well still be a network resource via NFS mounts or similar, so it is hardly
as safe as the name might suggest at first).
allow all dpkg selections to be set via apt-mark and libapt
As we have support for 'hold', we need support for undoing a hold, which
in effect means that we had implemented most other states as well, just
that they weren't exposed in the interface directly so far.
We had this code lying around in apt-mark for a while now, but other
frontends need this (and similar) functionality as well, so it's high
time that we provide a public interface in libapt for this stuff.
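The already-exposed part of this looks roughly like the following; per
this commit, the remaining dpkg selections become reachable in the same
way through the new libapt interface ('foo' is just a placeholder):

  apt-mark hold foo      # set the dpkg selection to 'hold'
  apt-mark unhold foo    # undo the hold again
  apt-mark showhold      # list packages currently on hold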
We have a few places and there will be a few more still where we have to
call dpkg to detect/set certain features or settings. Centralizing the
calling infrastructure now seems like a good idea before we add another.
switch 'apt-mark hold' from Pkg to Ver based operation
Users hold a package foo (at version X) or try to prevent the
installation of foo (usually based on the information they know about
version X), even if we say that we "hold a package". Conceptually we
also need to know which architecture we are talking about, and that is
information bound to a version (as a package can change architecture
over time).
We internally did this lookup from Pkg to Ver already; we just move it
to a central place where the user has a chance to influence it now.
add cacheset push_back wrapping for std::back_inserter
As usual by now, not all containers wrapped by the cacheset containers
support all methods (like push_back now), but they fail only on use of
these unusable methods.
It would be nice to not expose these methods at all for containers which
don't support them, but that means either a lot of classes or a lot of
std::enable_if magic, which seems like too much work for this small
wrapper for now.
Technically an ABI break, as we change a template parameter to
std::iterator for this, but this class is empty in all instances and
just causes the right typedefs to be set – which were incorrect before,
as detected by std::stable_partition, whose implementation uses ::pointer
and also needs an operator* implementation.
In practice CacheSets have no external users (yet) and the difference is
visible only at compile time (which was an error before and now works),
not while linking.
The changes to apt-mark are functionally identical to the code before,
just that we use a std:: algorithm now instead of trying hard on our
own.
Some codepaths need to check if the system (in our case usually dpkg)
supports MultiArch or not. We had copy-pasted the check so far into
these paths, but having it as a system check is better for reusability.
These scripts currently produce HTML output that is directly
piped into an HTML file on alioth.
There are three categories. The first two check external
library calls to use the ones specified by POSIX to be
thread-safe. The main profile excludes functions that are
thread-safe on Linux or in glibc in general, while the portable
output strictly follows POSIX.
The internal.html output lists internal function calls, such
as configuration settings.
This is supposed to be automated further at some point, so
we can automatically check for regressions.
cacheset: Fix compilation on new GCC in C++98 mode
Since gcc 4.9, the API for erase slightly changed. In
commit 3dddcdf2432e78f37c74d8c76c2c519a8d935ab2 the
existing checks for __cplusplus were changed to
check the gcc version, as the __cplusplus check
did nothing, because gcc 4.8 already provided the
standard value in there.
Fix the code to check for the gcc version in two
more places, and change the existing checks to
use a convenience macro.
.travis.yml: Add pinned vivid for gettext and clean up a bit
This adds a vivid pinned to -1, cleans up the file a bit by
removing duplicate commands, and finally installs gettext
with a new apt-get run that is passed -t vivid.
The syntax for the pinning is some weird YAML stuff I don't
want to think about...
tests: add a -j $jobs mode to test runner for parallel execution
Now that tests can be run in parallel, let's actually do it… The mode has
some downsides, like not collecting the failed tests, but it can be a lot
faster than a sequential run and is therefore a good alternative for
testing those "this shouldn't break anything" changes (which tend to
break everything if untested).
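Usage is along the lines of the following, assuming the usual location
of the runner in the source tree:

  # run the integration tests with four parallel jobs
  ./test/integration/run-tests -j 4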
We use a small trick to implement the fallback: We make it so that
by-hash is a special compression algorithm, and apt already knows how to
deal with fallback between compression algorithms.
The drawback of implementing this fallback is that a) we are guessing
again and, more importantly, b) by-hash is only tried for the first
compression algorithm we want to acquire, not for all as before – but
flipping between by-hash and well-known for each compression algorithm
doesn't seem really worth it, as it seems unlikely that there will
actually be mirrors which only mirror a subset of compressed files, but
have by-hash enabled.
The user experience is the usual fallback one: You see "Ign" lines in
the apt update output. The fallback is implemented as a transition
feature, so a (potentially huge) mirror network doesn't need a flag day.
It is not meant as a "someday we might" or "we don't, but some of our
mirrors might" option – we want to cut down on the 'Ign' lines so
that they become meaningful – if we wanted to spam everyone with them, we
could enable by-hash by default for all repositories…
sources.list and config options are better suited for this.
add by-hash sources.list option and document all of by-hash
This changes the semantics of the option (which is renamed too) to be a
yes/no value with the special additional value "force", as this allows
by-hash to be disabled even if the repository indicates it would be
supported, and is more in line with our other yes/no options like pdiff,
which disable themselves if no support can be detected.
The feature wasn't documented so far and hasn't reached a (un)stable
release yet, so changing it without trying too hard to keep
compatibility seems okay.
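A hedged example of the sources.list side of it – 'force' requests
by-hash even if the Release file doesn't advertise support, while yes/no
honour the advertisement (mirror and suite are just examples):

  echo 'deb [by-hash=force] http://deb.debian.org/debian unstable main' \
    > /etc/apt/sources.list.d/unstable.list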
Not all tests work yet, most notably the cdrom tests, but those require
changes in libapt itself to have a proper fix, and what we have fixed so
far is good enough progress for now.
deal with spaces in path, command and filepaths in apt-key
Filenames we get could include spaces, but so could the tmpdir we work in,
and the failures we print in return are very generic and unhelpful…
Properly supporting spaces is a bit painful, as we constructed the gpg
command before, which is now moved to (multilevel) calls to temporary
scripts instead.
This is mostly a small speedup for the testcases, but it is also handy
to document which tests actually deal with a specific hash compared to
those which 'just' need some hash, which can be important while adding
new hashes.
do not report deprecate warnings for the None declaration
This is defined for compatibility; warning about it is intended, but
only in places where it is actually used, rather than at the place we
declare it for compatibility…
Setting CXXFLAGS like --coverage on the commandline fails if we set the
std too late, so if we set it together with the compiler name it always
comes first. A bit hacky as it bends expectations, but it seems to work.
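A minimal sketch of the idea – not apt's actual build invocation – where
the std travels with the compiler name and therefore always comes first:

  # user-supplied CXXFLAGS like --coverage can no longer end up before -std
  make CXX='g++ -std=c++11' CXXFLAGS='--coverage'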
tests: use more 'native' instead of 'amd64' if possible
The tests usually run on amd64 boxes, but once in a while I run them on a
(slow) armel box as well, which has its fair share of problems with some
tests, but at least the low-hanging fruit can be dealt with: Do not
assume that amd64 is the native dpkg architecture – instead use whatever
dpkg thinks is the native architecture for the test.
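In test-framework terms that boils down to something like this (the
variable name is illustrative):

  # ask dpkg for the native architecture instead of hardcoding amd64
  NATIVE_ARCH="$(dpkg --print-architecture)"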
avoid using global PendingError to avoid failing too often too soon
Our error reporting has historically grown into some kind of mess.
A while ago I implemented stacking for the global error, which is used in
this commit now to wrap calls to functions which do not report (all)
errors via their return value, so that only failures in those calls cause
a failure to propagate down the chain, rather than failing if anything
(potentially totally unrelated) has failed at some point in the past.
This way we can avoid stopping the entire acquire process just because a
single source produced an error, for example. It also means that after
the acquire process the cache is generated – even if the acquire
process had failures – as we still have the old good data around we can and
should generate a cache for (again).
There are probably more instances of this hiding, but all these looked
like the easiest to work with and fix with reasonable (aka net-positive)
effects.
include debug information in the autoremove-kernels file
Figuring out after the fact what went wrong in the kernel hook is kinda
hard, also because the bug reports are usually very lacking on the details
front. Collecting the internal variables in the debug output we attach
to the generated file might help shine some light on the matter.
It's at least not going to hurt…
do not discard new manual-bits while applying EDSP solutions
In private-install.cc we call MarkInstall with FromUser=true, which sets
the bit accordingly, but while applying the EDSP solution we call
MarkInstall on all packages with FromUser=false, so MarkInstall believes
this install is an automatic one and sets the bit to auto – so a new
package which is explicitly installed via an external solver is marked as
auto and is hence also up for garbage collection in a following call.
Ideally MarkInstall wouldn't reset it, but the detection is hard to do
without regressing in other cases – and ideally ideally MarkInstall
wouldn't deal with the autobit at all – so we work around this on the
calling side for now.
implement autobit and pinning in EDSP solver 'apt'
The parser creates a preferences as well as an extended states file
based on the EDSP scenario file, which isn't the most efficient way of
dealing with this, as these text files have to be parsed again by another
layer of the code, but it needs the least changes and works well enough
for now. The 'apt' solver is in the end just a test solver, like dump.
These assumptions were once true, but they aren't anymore, so what is
supposed to be a speedup is effectively a slowdown [not that it would
be noticeable].
Usage of SingleArchFindPkg was nuked in a stable update already, as the
included assumption was actually harmful btw, which is why we should get
rid of other 'non-harmful' but still untrue assumptions while we can.
select kernels to protect from autoremove based on Debian version
This is basically a rewrite of the script with the general idea of
finding the Debian version of the installed kernels – as multiple
flavours will have the same Debian version – selecting the two newest of
them and translating them back to versions found in package names.
This way we avoid e.g. kernel and kernel-rt using up the protected
slots even though they are basically the same kernel (just a different
flavour), so it is likely that if kernel doesn't work for some reason,
kernel-rt will not either.
This also deals with foreign kernel packages, kernels on hold and partly
installed kernels (in case multiple kernels are installed in the same
apt run) in a hopefully sensible way.
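A heavily simplified sketch of the selection idea – not the hook script
itself – collecting the versions of installed kernel packages and keeping
the two newest:

  # illustrative only: installed linux-image packages, their (Debian)
  # versions, sorted by version, newest two kept
  dpkg-query -W -f '${Package} ${Version} ${Status}\n' 'linux-image-*' 2>/dev/null |
    awk '$NF == "installed" { print $2 }' | sort -Vu | tail -n 2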
copy ReadWrite-error to the bottom to make clang happy
clang detects that fd isn't set in the ReadWrite case – just that this
is supposed to be caught earlier in this method already, but it doesn't
hurt to make it explicit here as well, and clang is happy, too.
As said in the bugreport, this is hardly a serious problem on the
security front, but it was always on the list to have the filename
configurable somehow, and the stable filename is a problem for parallel
executions. Using an environment variable (APT_EDSP_DUMP_FILENAME) for
this is more or less the best we can do here, as solvers do not get told
about our configuration and such.
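Usage would then look something like this; the 'dump' solver and the
simulated install of a placeholder package 'foo' are just for
illustration:

  # write the EDSP scenario dump to a per-run location
  APT_EDSP_DUMP_FILENAME="$(mktemp)" \
    apt-get -s -o APT::Solver=dump install foo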
Pipes and such have no good Size value, but we might still want to copy
from them, and we don't really need the size, as we can just as well read
as long as we get data out of a file to copy it.
The syntax of "Source" is different in EDSP compared to the the field of
the same name in 'the rest' of Debian, so documented this accordingly
and send the version as a new field.
implement dpkg's vision of interpreting pkg:<arch> dependencies
How the Multi-Arch field and pkg:<arch> dependencies interact was
discussed at DebConf15 in the "MultiArch BoF". dpkg and apt (among other
tools like dose) had a different interpretation in certain scenarios
which we resolved by agreeing on dpkg's view – and this commit realizes
this agreement in code.
As was the case so far libapt sticks to the idea of trying to hide
MultiArch as much as possible from individual frontends and instead
translates it to good old SingleArch. There are certainly situations
which can be improved in frontends if they know that MultiArch is upon
them, but these are improvements – not necessary changes needed
to unbreak a frontend.
The implementation idea is simple: If we parse a dependency on foo:amd64
the dependency is formed on a package 'foo:amd64' of arch 'any'. This
package is provided by package 'foo' of arch 'amd64', but not by 'foo'
of arch 'i386'. Both of those foo packages provide each other, though
(assuming foo is M-A:foreign), to allow a dependency on 'foo' to be
satisfied by either foo of amd64 or i386. Packages can also declare to
provide 'foo:amd64' which is translated to providing 'foo:amd64:any' as
well.
This indirection over provides was chosen as the alternative would be to
teach dependency resolvers how to deal with architecture specific
dependencies – which violates the design idea of avoiding resolver
changes, especially as architecture-specific dependencies are a
corner case with quite a few subtle rules. Handling it all via versioned
provides, as we already did for M-A in general, seems much simpler as it
just works for them.
This switch to :any actually has a "surprising" benefit as well: Even
frontends showing a package name via .Name() [which doesn't show the
architecture] will display the "architecture" for dependencies in which
it was explicitly requested, while we will not show the 'strange' :any
arch in FullName(true) [= pretty-print] either. Before you had to
special-case these, and by default you wouldn't get these details shown.
The only identifiable disadvantage is that this complicates error
reporting and handling. apt-get's ShowBroken has existing problems with
virtual packages [it just shows the name without any reason], so that
has to be worked on eventually. The other case is that detecting if a
package is completely unknown or if it was at least referenced somewhere
needs to account for this "split" – not that it makes a practical
difference which error is shown… but it's one of the improvements
possible.
M-A: allowed pkgs of unconfigured archs do not satisfy :any
Nowadays we parse all architectures we encounter, which means we also
parse packages from architectures which are neither native nor foreign,
but still came onto the system somehow (usually via heavy force).
store ':any' pseudo-packages with 'any' as architecture
Previously we had python:any:amd64, python:any:i386, … in the cache and
the dependencies of an amd64 package would be on python:any:amd64, of an
i386 on python:any:i386 and so on. That seems like a relatively
pointless endeavor given that they will all be provided by the same
packages and therefore also a waste of space.