show potentially arch-qualified fullname in 'apt show'
We do not show the architecture as a dedicated field as this is rather
technical information, but as part of the package name it makes sense to
show the architecture, as other parts of apt will refer to it in this way.
do not send Last-Modified if we expect a changed file
In 8d041b4f we made apt figure out, based on the last Release file it has,
whether it should request a file or not given that the hashes changed or not.
So if we have a last Release file and do a request, do not send a
Last-Modified header as we expect a change so much that a non-change
would indeed be an error. The Last-Modified header is therefore at best
ignored by the server, so sending it is just wasted effort. In the worst
case, as time is a fragile thing, the server decides against sending us an
update with the idea that we already have the latest content, which we
know for a fact we don't. Given that we send less information to
the server, our request is on its own also less identifiable as coming
from a returning or new user.
The disadvantage is that if we end up getting an old index file after
getting a new Release file from another mirror, the old mirror can no
longer tell us 'Hit' and instead sends us the complete file which we
discard – but both cases end up in the same error class in the end,
so the difference isn't big in practice.
do not segfault in cache generation on mmap failure
Out of memory and similar circumstances could cause MMap::Map to fail
and especially the mmap/malloc calls in it. With some additional
checking we can avoid segfaults and similar in such situations – at
least in theory, as if this is a real out-of-memory situation everything
we do to handle the error could just as well run into a memory problem
itself… But at least in theory (if MMap::Map is made to fail always) we
can deal with it so well that a user never actually sees a failure (as the
cache it tries to load with it fails and is discarded, so that DynamicMMap
takes over and a new one is built) instead of segfaulting.
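As a rough illustration of the kind of checking meant here (a sketch, not
MMap::Map itself): validate the size and the mmap(2) result so the caller
gets a reportable error instead of a pointer to garbage.
    #include <sys/mman.h>
    #include <stddef.h>
    // Sketch: map a file read-only, returning NULL on any failure so the
    // caller can discard the broken cache instead of dereferencing garbage.
    static void * MapOrNull(int const Fd, size_t const Size)
    {
       if (Size == 0)
          return NULL;                                  // nothing to map
       void * const Base = mmap(NULL, Size, PROT_READ, MAP_SHARED, Fd, 0);
       if (Base == MAP_FAILED)
          return NULL;                                  // e.g. out of memory
       return Base;
    }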
do not rerun ./configure causing FTCBFS with newer autotools-dev
If the config.{sub,guess} files we linked in were newer than our
configure script we ended up recreating configure and then rerunning it
without all the configuration options which were (potentially) present
for a previous run.
We avoid this by changing to the same ruleset as in the debian/rules
file, which compares the config.* files against a stamp file rather than
against the configure script itself, as it is the configuration itself which
depends on all the scripts, not configure on the config scripts.
While at it, we also drop the 'make -s dirs' call as we don't need to do
it explicitly here; proper dependencies will take care of it.
Thanks: Helmut Grohne for the detailed bug report. Closes: 804923
The hack introduced in aa91826f is replaced with a hopefully better
working "proper" solution: a new variable just for the language standard,
used everywhere we use CXXFLAGS.
In ce1f3a2c we started warning about failing unlink calls, which
consistently fail for directories. That isn't a problem as directories
usually aren't in the places we want to clean up – with the potential
exception of "lost+found", so let's ignore it like we ignore our own
partial/ subdirectory.
support setting empty values (sanely) & remove support for space-gapping: '-o option= value'
That is a very old feature (straight from 1998), but it is super
surprising if you try setting empty values and instead get error
messages, or a non-empty value because the next parameter is treated as
the value – which could have been empty. So if for some reason you need a
compatible way of setting an empty value, try: '-o option="" ""'.
I can only guess that the idea was to support '-o option value', but we
survived 17 years without it, so we will do fine without it in the future
as well, I guess.
Similar is the case for '-t= testing', even though '-t testing' existed
before and the code even tried to detect mistakes like '-t= -b' … all
gone now.
Technically this is a major interface break as it removes a feature and
replaces it with another. In practice I really hope, for my sanity and
theirs, that nobody was using this; but if for some reason you do: remove
the space and be done.
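A minimal sketch of the new behaviour (illustrative, not the real
CommandLine parser): everything after the '=' is the value, even if it is
empty, and the next argument is never pulled in as the value.
    #include <string>
    #include <utility>
    // Sketch: split "-o Name=Value" into name and value; an empty value
    // after '=' stays empty instead of consuming the following argument.
    std::pair<std::string, std::string> SplitOption(std::string const &Arg)
    {
       std::string::size_type const Eq = Arg.find('=');
       if (Eq == std::string::npos)
          return std::make_pair(Arg, std::string());    // no '=' given at all
       return std::make_pair(Arg.substr(0, Eq), Arg.substr(Eq + 1));
    }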
I found the patch and the bug report actually only after the fact, but
it's reassuring that others are puzzled by this as well and hence a
thanks is in perfect order here, as the patch is practically identical
[except that this one here adds tests and other bonus items].
Thanks: Daniel Hartwig for initial patch. Closes: 693092
do not use _apt for file/copy sources if it isn't world-accessible
In 0940230d we started dropping privileges for file (and a bit later for
copy, too) with the intent of making this uniform for all methods. The
commit message says that the source will likely fail based on the
compressors already – and there isn't much secret in the repository
content. After all, after apt has run the update everyone can access the
content via apt anyway…
There are sources though which worked before, mostly single-deb ones
(and those with the uncompressed files available).
The first one is perhaps especially surprising for users, so instead of
failing, we make it so that apt detects that it can't access a source as
_apt and if so doesn't drop privileges (for all sources!) – but we limit
this to file/copy, so the decompression which might be needed will still
fail – but that failed before this regression as well.
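Roughly, the detection boils down to a check along these lines (a
simplified sketch; the real test has to consider the _apt user and
directory permissions, too):
    #include <string>
    #include <sys/stat.h>
    // Sketch: treat a file as usable by _apt only if it is world-readable;
    // if not, privilege dropping is skipped for this request.
    static bool WorldReadable(std::string const &Path)
    {
       struct stat Buf;
       if (stat(Path.c_str(), &Buf) != 0)
          return false;                      // can't even stat it
       return (Buf.st_mode & S_IROTH) == S_IROTH;
    }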
We display a notice about this, mostly so that if it still fails (e.g.
compressed) the user has some idea what is wrong.
Notices are just hints, but if they are printed in tests, they should be
expected, and if not the test should fail. No current test has this
problem, so that is just potential future proofing.
"support" unsigned Release files without hashes again
This 'ignores' the component Release files you can find in Debian
alongside the binary-* directories, which isn't exactly a common
use case, but it worked before, so let's support it again as this isn't
worse than a valid Release file which is unsigned.
allow acquire method specific options via Binary scope
Allows users who know what they are getting themselves into with this
trick to disable privilege dropping for e.g. file:// until they can
fix up the permissions on those repositories. It also helps the test
framework and people with a similar setup (= me) to run in less modified
environments.
drop privileges in copy:// method as we do for file://
Continuing on the track of dropping privileges in all methods, let's
drop them in copy, too, as the reasoning for it is very similar to file
and the interaction between the two is quite interesting, as copy kind of
served as a fallback for file not being able to read the file. Both now
show a better error message as well; previously it was claiming to
have a hashsum mismatch, given that it couldn't read the file.
allow getaddrinfo flag AI_ADDRCONFIG to be disabled
This flag is generally handy to avoid having to deal with ipv6 results on an
ipv4-only system, but it prevents e.g. the testcases from working if the
test system has no configured address at the moment (except loopback), so
allow it to be sidestepped and let the testcases sidestep it.
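As a sketch of what 'sidestepping' means here (wrapper and parameter are
illustrative only, not the http method's actual code):
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>
    // Sketch: set AI_ADDRCONFIG only if not disabled, so name resolution
    // works on a test system without a configured non-loopback address.
    static struct addrinfo * Resolve(char const * const Host,
                                     char const * const Port,
                                     int const UseAddrConfig)
    {
       struct addrinfo Hints;
       memset(&Hints, 0, sizeof(Hints));
       Hints.ai_socktype = SOCK_STREAM;
       Hints.ai_flags = UseAddrConfig ? AI_ADDRCONFIG : 0;
       struct addrinfo *Result = NULL;
       if (getaddrinfo(Host, Port, &Hints, &Result) != 0)
          return NULL;
       return Result;                        // caller frees with freeaddrinfo()
    }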
Unlinking /dev/null is bad, we shouldn't do that. Also, we should print
at least a warning if we tried to unlink a file but didn't manage to
pull it off (ignoring the case where the file is /dev/null or doesn't
exist in the first place).
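A sketch of such a checked unlink (the name is made up; the real helper in
apt may differ):
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    // Sketch: unlink a file, but treat /dev/null and "already gone" as
    // success and print a warning for everything else instead of silence.
    static bool RemoveFileChecked(char const * const Caller,
                                  char const * const Path)
    {
       if (strcmp(Path, "/dev/null") == 0)
          return true;                                  // never remove this
       if (unlink(Path) == 0 || errno == ENOENT)
          return true;
       fprintf(stderr, "W: %s: unlinking %s failed: %s\n",
               Caller, Path, strerror(errno));
       return false;
    }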
This got triggered by a relatively unlikely to occur problem in
pkgAcquire::Worker::PrepareFiles, which would, while handling temporarily
uncompressed files (which are set to be kept compressed), figure out that
two files are the same and prepare for sharing by deleting them. Bad move.
That also shows why not printing a warning is a bad idea, as this hid
the error in non-root test runs.
ensure FileFd doesn't try to open /dev/null as atomic and co
The wrapping will fail in the best case and actually end up deleting
/dev/null in the worst case. Given that there is no point in trying to
write atomically to /dev/null, as you can't read from it again, just
ignore these flags if higher-level code ends up trying to use them on
/dev/null.
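The idea, as a sketch with made-up flag names rather than FileFd's real
interface:
    #include <string>
    // Sketch: writing "atomically" to /dev/null makes no sense, so strip
    // the flags which would wrap the open in a temporary file + rename.
    enum OpenFlags { ReadWrite = 1 << 0, Atomic = 1 << 1, Empty = 1 << 2 };
    unsigned int StripUnsupportedFlags(std::string const &FileName,
                                       unsigned int Flags)
    {
       if (FileName == "/dev/null")
          Flags &= ~static_cast<unsigned int>(Atomic | Empty);
       return Flags;
    }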
ignore newlines in dpkg-deb control output for installing debs
Leading or trailing newlines can be confusing for our parser as it
expects two newlines to start/stop a stanza. To solve this, the lines
we want to add are printed first, any leading newlines are ignored and
then the stanza as provided by dpkg-deb is added with or without trailing
newlines, as the parser will look at the first stanza only anyway and
removing trailing newlines is considerably harder to do.
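Sketched out (an illustrative helper, not the actual code), the handling
looks like this:
    #include <string>
    // Sketch: our own fields come first, leading blank lines from the
    // dpkg-deb output are dropped, trailing ones are left alone as only
    // the first stanza is parsed anyway.
    std::string BuildStanza(std::string const &OurFields, std::string Control)
    {
       std::string::size_type const Start = Control.find_first_not_of('\n');
       if (Start == std::string::npos)
          Control.clear();                    // output was only newlines
       else
          Control.erase(0, Start);
       return OurFields + Control;
    }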
support arch:all data e.g. in separate Packages file
Based on a discussion with Niels Thykier, who asked for Contents-all, this
implements apt trying, for all architecture-dependent files, to get a file
for the architecture 'all', which is treated internally now as an official
architecture which is always around (like native). This way arch:all
data can be shared instead of being duplicated for each architecture,
which requires the user to download the same information again and again.
There is one problem however: In Debian there is already a binary-all/
Packages file, but the binary-any files still include arch:all packages,
so that downloading this file now would be a waste of time, bandwidth
and diskspace. We therefore need a way to decide if it makes sense to
download the all file for Packages in Debian or not. The obvious answer
would be a special flag in the Release file indicating this, which would
need to default to 'no' and every reasonable repository would override
it to 'yes' in a few years time, but the flag would be there "forever".
Looking closer at a Release file we see the field "Architectures", which
doesn't include 'all' at the moment. With the idea outlined above that
'all' is a "proper" architecture now, we interpret this field as being
authoritative in declaring which architectures are supported by this
repository. If it lists 'all', apt will try to get the all files; if not,
they will be skipped. This gives us another interesting feature: If I
configure a source to download armel and mips, but it declares it supports
only armel, apt will now print a notice saying as much. Previously this was
a very cryptic failure. If on the other hand the repository supports mips,
too, but for some reason doesn't ship mips packages at the moment, this
'missing' file is silently ignored (= that is the same as the repository
including an empty file).
The Architectures field isn't mandatory though, so if it isn't there we
assume that every configured architecture is supported by this repository,
but arch:all is still skipped as it isn't listed in the release file.
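The resulting decision is simple enough to sketch (illustrative only, not
the libapt implementation):
    #include <set>
    #include <string>
    // Sketch: architecture-all files are requested only if the Release
    // file explicitly lists 'all' in its Architectures field.
    bool ShouldFetchArchAll(std::set<std::string> const &ReleaseArchitectures)
    {
       return ReleaseArchitectures.find("all") != ReleaseArchitectures.end();
    }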
This was discussed a while ago on #debian-apt and now that I see myself
making this mistake, let's bite the bullet and fix it in the easy-way-out
version: using a new name which fits with a similarly named setter and
deprecating the old method instead of 'hostilely' changing the API.
Removals in the acquire progress can be pretty important, so a failure
shouldn't be silently ignored; hence we wrap our unlink call in a slightly
more forgiving wrapper checking things.
The general idea is: a small paragraph on the tool itself as a
description, a list of the most used (!= all) commands available in the
tool, a remark on where to find more information on the tool and its
commands (aka: in the manpage) and finally a common block referring to
even more manpages. In exchange, options as well as deprecated or obscure
commands are completely omitted from the output. (Better) information
about them is available in the manpages anyway, and the few options which
were listed before were also the least interesting ones (-o, -c, -q and co
are hardly of interest for someone totally new looking for info by
asking for help, and anyone with a bit of experience doesn't need this
short list). The latter would need a list of options applying to the
command they call, but those are too numerous and command-specific to list
them sanely in this context.
hidden support for more apt-get/apt-cache commands in apt
apt is supposed to be a user-friendly interface, so while these commands
are usually poweruser material and therefore do not need to be shown in
general introduction manpages/help messages, it's of no use to not allow
users to use them.
This includes clean, autoclean, build-dep, source, download, changelog,
depends, rdepends and showsrc – it doesn't include more non-interactive
commands like dump or xvcg as those are usually used by scripts, if at
all.
new quiet level -qq for apt to hide progress output
-q is for logging and -qqq (old -qq) basically kills every output except
errors, so there should be a way of declaring a middle ground in which
the output of e.g. 'update' isn't as verbose, but still shows some
things. The test framework was actually making use of this by accident,
as it ignored the quiet level in the output setup for apt before.
Eventually we should figure out some better quiet levels for all tools…
That is one huge commit with busywork only: Help messages used to be
one big translatable string, which is a pain for translators and hard
to reuse for us. This change therefore 'explodes' this single string into
a new string for each documented command, trying hard to split up the
translated messages as well. This actually restores many translations, as
previously adding a single command made all of the big message fuzzy.
The split-up also highlighted that it's easy to forget a line, duplicate
one and similar stuff.
disable updating insecure repositories in apt by default
apt is an interactive command and the reason we haven't set this option
for everything is mostly keeping compatibility for a little while
longer to allow scripts to be changed if need be.
revamp apt(8) to refer more instead of duplicating
As apt is targeted at users, let's try to make apt(8) for users as well
by giving only a quick overview of what is available and some
pointers for how to find a whole lot more details.
centralize 'show' implementation of apt and apt-cache
The show commands have different styles in both binaries as the audience
is potentially very different, but that doesn't mean we need to separate
the implementation especially as they are slightly similar. This also
allows us to switch between the different show versions at runtime via
an option.
Especially with apt now, it can be useful to set an option only for apt
and not for apt-get. Using a binary-specific subtree which is merged into
the root seems like a simple enough trick to achieve this.
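As an illustration of that trick (a simplified sketch over a flat map, not
apt's Configuration class): settings below a Binary::<name>:: prefix
override the same setting at the root for that one binary.
    #include <map>
    #include <string>
    // Sketch: copy every "Binary::<name>::Some::Option" entry over the
    // corresponding "Some::Option" entry in the root of the config space.
    void MergeBinarySubtree(std::map<std::string, std::string> &Config,
                            std::string const &Binary)
    {
       std::string const Prefix = "Binary::" + Binary + "::";
       std::map<std::string, std::string> Overrides;
       for (auto const &Item : Config)
          if (Item.first.compare(0, Prefix.size(), Prefix) == 0)
             Overrides[Item.first.substr(Prefix.size())] = Item.second;
       for (auto const &Item : Overrides)
          Config[Item.first] = Item.second;
    }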
suggest 'apt autoremove' to get rid of unneeded packages
The bug report is more conservative in asking for a conditional, but
given that this is a message intended to be read by users and run by
users, we should suggest using a command intended to be used by users.
And while we are at it, add sudo to the message – conditionally of course.
refer to apt-secure(8) in insecure repositories warning
The manpage is also slightly updated to work better as a central hub
pushing people from all angles into the right directions without writing a
book disguised as an error message.
rework errors and warnings around insecure repositories
Insecure (aka unsigned) repositories are bad, period. We want to get
rid of them finally and as a first step we are printing scary
warnings. This is already done; this commit just changes the messages to
be more consistent and prevents them from being displayed if
authenticity is guaranteed some other way (as indicated with
trusted=yes).
The idea is to first print the pure fact like "repository isn't signed"
as a warning (and later as an error), while giving an explanation in an
immediately following notice (which is displayed only in quiet level 0:
so in interactive use, not in scripts and alike).
set failreasons similar to connect.cc based on curl errors
Detecting network errors has some benefits in the acquire system, as if
we can't connect to a host, trying it for a million files is pointless.
http and co, which use connect.cc, deal with this, but https, which uses
curl, treated connection failures as "normal" errors which could
potentially be worked around (like trying Release instead of the failed
InRelease).
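A sketch of the mapping (the reason strings are assumptions here, chosen
to mirror the kind of reasons connect.cc sets):
    #include <curl/curl.h>
    #include <string>
    // Sketch: translate curl's connection-level errors into the same kind
    // of failure reason the connect.cc based methods would report, so the
    // acquire system can stop retrying a dead host.
    std::string FailReasonFromCurl(CURLcode const Code)
    {
       switch (Code)
       {
          case CURLE_COULDNT_RESOLVE_PROXY:
          case CURLE_COULDNT_RESOLVE_HOST: return "ResolveFailure";
          case CURLE_COULDNT_CONNECT:      return "ConnectionRefused";
          case CURLE_OPERATION_TIMEDOUT:   return "Timeout";
          default:                         return "";
       }
    }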
The main part is refactoring though, to allow hiding the magic needed to
support .deb files in deeper layers of libapt so that frontends have
less exposure to Debian-specific classes like debDebPkgFileIndex.
unbreak the copy-method claiming hashsum mismatch since ~exp9
Commit 653ef26c70dc9c0e2cbfdd4e79117876bb63e87d broke the camel's back
insofar as everything works in terms of our internal use of copy:/, but
external use is completely destroyed. This is kinda the reverse of what
happened in "parallel" in the sid branch, where external use was mostly
fine while internal and external use exploded on the GzipIndexes option.
We fix this now by rewriting our internal use, letting copy:/ only do
what the name suggests it does: copy files and not uncompress them
on-the-fly. Then we teach copy and the uncompressors how to deal with
/dev/null and use it as the destination file in case we don't want to
store the uncompressed files on disk.
tests: change test-skipping detection for arch-specific pkgs
dpkg-checkbuilddeps changed its exit codes in the recent past, so the
old check now always fails, skipping the test. Let's try a slightly more
stable (at least assumed to be) variant of detecting this.
drop privileges in file:// method as we do for decompressors
We drop privileges in the decompressors already, which are the natural
next step in the pipeline, so if an archive is used which isn't
world-readable (= not accessible by _apt) it doesn't work anyway; we just
fail a bit earlier now and avoid all the bad things which can happen over
file (which could very well still be a network resource via NFS mounts or
similar stuff, so hardly as safe as the name might suggest at first).
allow all dpkg selections to be set via apt-mark and libapt
As we have support for 'hold', we need support for undoing a hold, which
in effect means that we implemented most other states as well, just that
they weren't exposed in the interface directly so far.
We had this code lying around in apt-mark for a while now, but other
frontends need this (and similar) functionality as well, so it's high
time that we provide a public interface in libapt for this stuff.
We have a few places and there will be a few more still where we have to
call dpkg to detect/set certain features or settings. Centralizing the
calling infrastructure now seems like a good idea before we add another.
switch 'apt-mark hold' from Pkg to Ver based operation
Users hold a package foo (at version X) or try to prevent the
installation of foo (usually based on the information they know about
version X), even if we say that we "hold a package". Conceptually we
also need to know which architecture we are talking about, and that is
information bound to a version (as a package can change architecture
over time).
We internally did this lookup from Pkg to Ver already, we just move it
to a central place where the user has a chance to influence it now.
add cacheset push_back wrapping for std::back_inserter
As usual by now, not all containers wrapped by the cacheset containers
support all methods, like push_back now, but they fail only on use of
these unusable methods.
It would be nice to not expose these methods for unsupporting containers
at all, but that means either a lot of classes or a lot of std::enable_if
magic, which seems like too much work for this small wrapper for now.
Technically it is an ABI break as we change a template parameter to
std::iterator for this, but this class is empty in all instances and
just causes the right typedefs to be set – which were incorrect, as
detected by std::stable_partition, whose implementation uses ::pointer
and also needs an operator* implementation.
In practice CacheSets have no external users (yet) and the difference is
visible only at compile time (which was an error before and now works),
not while linking.
The changes to apt-mark are functionally identical to the code before,
just that we use a std:: algorithm now instead of trying hard on our
own.
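To show why the typedefs matter, here is a self-contained toy wrapper (not
the real cacheset classes) which provides just enough – value_type and
push_back – for std::back_inserter and std:: algorithms to work:
    #include <algorithm>
    #include <iterator>
    #include <vector>
    // Toy wrapper: forwards push_back and exposes the typedefs which
    // std::back_insert_iterator and the algorithms rely on.
    template <class T> class Wrapper
    {
       std::vector<T> cont;
    public:
       typedef typename std::vector<T>::value_type value_type;
       typedef typename std::vector<T>::iterator iterator;
       void push_back(T const &v) { cont.push_back(v); }
       iterator begin() { return cont.begin(); }
       iterator end() { return cont.end(); }
    };
    int main()
    {
       std::vector<int> const src = {1, 2, 3};
       Wrapper<int> dst;
       std::copy(src.begin(), src.end(), std::back_inserter(dst));
       return std::distance(dst.begin(), dst.end()) == 3 ? 0 : 1;
    }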
Some codepaths need to check if the system (in our case usually dpkg)
supports MultiArch or not. We had copy-pasted the check so far into
these paths, but having it as a system check is better for reusability.
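Such a check can be as small as asking dpkg itself – a sketch of the idea:
    #include <cstdlib>
    // Sketch: dpkg exits successfully from --assert-multi-arch if and only
    // if it was built with multi-arch support.
    bool SystemSupportsMultiArch()
    {
       return std::system("dpkg --assert-multi-arch >/dev/null 2>&1") == 0;
    }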
These scripts currently produce HTML output that is directly
piped into an HTML file on alioth.
There are three categories. The first two check that external
library calls use only the functions specified by POSIX to be
thread-safe. The main profile excludes functions that are
thread-safe on Linux or in glibc in general, while the portable
output strictly follows POSIX.
The internal.html output lists internal function calls, such
as configuration setting.
This is supposed to be automated further at some point, so
we can automatically check for regressions.
cacheset: Fix compilation on new GCC in C++98 mode
Since gcc 4.9, the API for erase has slightly changed. In
commit 3dddcdf2432e78f37c74d8c76c2c519a8d935ab2 the
existing checks for __cplusplus were changed to
check the gcc version, as the __cplusplus check
did nothing, because gcc 4.8 already provided the
standard value in there.
Fix the code to check for the gcc version in two
more places, and change the existing checks to
use a convenience macro.
.travis.yml: Add pinned vivid for gettext and clean up a bit
This adds a vivid pinned to -1, cleans up the file a bit by
removing duplicate commands, and finally installs gettext
with a new apt-get run that is passed -t vivid.
The syntax for the pinning is some weird YAML stuff I don't
want to think about...
tests: add a -j $jobs mode to test runner for parallel execution
Now that tests can be run in parallel, let's actually do it… The mode has
some downsides, like not collecting the failed tests, but it can be a lot
faster than a sequential run and is therefore a good alternative for
testing those "this shouldn't break anything" changes (which tend to
break everything if untested).
We use a small trick to implement the fallback: We make it so that
by-hash is a special compression algorithm, and apt already knows how to
deal with fallback between compression algorithms.
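In effect the list of things to try for an index file grows one pseudo
'compression type' at the front – a sketch with made-up names:
    #include <string>
    #include <vector>
    // Sketch: by-hash is queued like just another compression variant, so
    // the existing fallback loop over the candidates handles falling back
    // to the well-known location automatically.
    std::vector<std::string> DownloadCandidates(bool const TryByHash)
    {
       std::vector<std::string> Types;
       if (TryByHash)
          Types.push_back("by-hash");        // tried first
       Types.push_back("xz");
       Types.push_back("gz");
       Types.push_back("uncompressed");      // the rest is the fallback chain
       return Types;
    }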
The drawback of implementing the fallback this way is that a) we are
guessing again and, more importantly, b) by-hash is only tried for the
first compression algorithm we want to acquire, not for all as before –
but flipping between by-hash and the well-known location for each
compression algorithm seems to be not really worth it, as it seems
unlikely that there will actually be mirrors who only mirror a subset of
compressed files but have by-hash enabled.
The user-experience is the usual fallback one: You see "Ign" lines in
the apt update output. The fallback is implemented as a transition
feature, so a (potentially huge) mirror network doesn't need a flagday.
It is not meant as a "someday we might" or "we don't, but some of our
mirrors might" option – we want to cut down on the 'Ign' lines front so
that they become meaningful – if we wanted to spam everyone with them, we
could enable by-hash by default for all repositories…
sources.list and config options are better suited for this.
add by-hash sources.list option and document all of by-hash
This changes the semantics of the option (which is renamed, too) to be a
yes/no value with the special additional value "force", as this allows
by-hash to be disabled even if the repository indicates it would be
supported, and it is more in line with our other yes/no options like pdiff
which disable themselves if no support can be detected.
The feature wasn't documented so far and hasn't reached a (un)stable
release yet, so changing it without trying too hard to keep
compatibility seems okay.
Not all tests work yet, most notably the cdrom tests, but those require
changes in libapt itself to have a proper fix and what we have fixed so
far is good enough progress for now.
deal with spaces in path, command and filepaths in apt-key
Filenames we get could include spaces, but so could the tmpdir we work in,
and the failures we print in return are very generic and unhelpful…
Properly supporting spaces is a bit painful as we constructed the gpg
command as a string before; this is now moved to (multilevel) calls to
temporary scripts instead.