nl_langinfo is used to acquire the YESEXPR of the language used, but it
returns the one from LC_MESSAGES, which might be different from the
language chosen for displaying the question (based on LANGUAGE).
This commit therefore removes the [Y/n] help text from the questions
themselves and moves it to the prompt creation, where the use of
LC_MESSAGES is forced for it, so that the help text shown actually
represents the characters accepted as input for the question.
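As a rough sketch of the idea (illustrative code only, not the actual APT
prompt), one could query YESEXPR under the same forced LC_MESSAGES locale
that produces the hint, so the advertised characters are the ones the
regex actually accepts:

#include <langinfo.h>
#include <locale.h>
#include <regex.h>
#include <cstdio>
#include <string>

// Hypothetical helper: hint and YESEXPR both come from LC_MESSAGES.
static bool YesNoPrompt(const char *Question)
{
   std::string const OldLocale = setlocale(LC_MESSAGES, NULL);
   setlocale(LC_MESSAGES, "");                // force LC_MESSAGES from the environment
   std::string const YesExpr = nl_langinfo(YESEXPR);

   std::printf("%s [Y/n] ", Question);
   char Answer[64] = "";
   if (std::fgets(Answer, sizeof(Answer), stdin) == NULL)
      Answer[0] = '\0';

   bool Result = (Answer[0] == '\0' || Answer[0] == '\n'); // default to yes
   regex_t Pattern;
   if (regcomp(&Pattern, YesExpr.c_str(), REG_EXTENDED | REG_NOSUB) == 0)
   {
      if (regexec(&Pattern, Answer, 0, NULL, 0) == 0)
         Result = true;
      regfree(&Pattern);
   }
   setlocale(LC_MESSAGES, OldLocale.c_str());
   return Result;
}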
There is still room for problems, of course, starting with an untranslated
"[Y/n]" but a translated YESEXPR, or the question being asked in a
completely different language which might have a conflicting definition
of [Y/n] input, or the user simply ignoring the help text and assuming
that an answer matching the question language is accepted. But the
majority of users will never run into this problem to begin with, so we
should be fine (or at least a bit better off than before).
Closes nothing really, but should at least help a bit with bugs like
deb:194614, deb:471102, lp:1205578, and countless others.
Lintian complains about these links in the source package as they point
outside the source directory, and since they are autogenerated there
isn't much sense in shipping them; we can just recreate them before
calling configure.
While at it, ensure that this can't happen again by letting the build
fail in case a po file is available but the language isn't mentioned in
the LINGUAS file (not even as a disabled language).
request absolute URIs from proxies again (0.9.9.3 regression)
Commit 2b9c9b7f28b18f6ae3e422020e8934872b06c9f3 not only removed
keep-alive, but also changed the request URI sent to proxies, which is
required to be an absolute URI rather than the usual absolute path.
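Schematically (a hypothetical helper, not the code in the http method),
the request target differs depending on whether we talk to a proxy or to
the origin server directly:

#include <string>

// To a proxy the request line must carry the absolute URI,
// to the origin server only the absolute path is sent.
static std::string RequestLine(std::string const &Host, std::string const &Path,
                               bool const UseProxy)
{
   if (UseProxy == true)
      return "GET http://" + Host + Path + " HTTP/1.1";
   return "GET " + Path + " HTTP/1.1";
}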
pick up Translation-* even if only compressed available
On CD-ROMs, Translation-* files are included in the Release file only in
their compressed form. This used to work while we had no record of
Translation-* files in the Release file at all, as APT would have just
guessed the (compressed) filename and accepted it (unchecked), but now
that it checks for the presence of entries, if it finds records it
expects the uncompressed file to be verifiable.
This commit relaxes this requirement again to fix the regression.
We are still secure "enough" as we can validate the compressed file we have
downloaded, so we don't lose anything by not requiring a hashsum for
the uncompressed files to double-check them.
Michael Vogt [Tue, 23 Jul 2013 18:09:05 +0000 (20:09 +0200)]
debian/apt.postinst:
* debian/apt.postinst:
- run /etc/kernel/postinst.d/apt-auto-removal once on upgrade
to ensure that the correct auto-removal list is generated
(closes: #717615)
fix 'apt-cache search' crash with missing description
Besides the earlier fixed 'apt-cache show', 'showpkg' and 'search' also
deal with descriptions. 'showpkg' was fixed by fixing the cache
generation for 'show', but 'search' still segfaulted.
On the upside, it doesn't segfault any longer; on the downside, if a
package has no description at all (i.e. not in the Packages file and not
in a Translation-* file) the package can't be found with 'search', even
if we search only by name. That is a shortcoming in the code, but fixing
it would mean rewriting it completely for dubious gain at best.
So this commit just skips packages without a description and is done.
skip all Description fields in apt-cache, not just first
Given a Packages file like:
[…]
Description: foo bar baz
moo moo moo
Multi-Arch: foreign
Description-md5: 97e204a9f4ad8c681dbd54ec7c505251
[…]
We have to display the Multi-Arch flag field as well as the fields
after the Description-md5, but not this field itself, as we already
print one alongside the Description we display.
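A minimal sketch of what the printing loop now has to do (the helper is
made up, but the field names match the stanza above): skip every field
starting with "Description" together with its continuation lines, and
pass everything else through.

#include <iostream>
#include <sstream>
#include <string>

// Print a package stanza, dropping all Description* fields (and their
// continuation lines) as we print our own Description block elsewhere.
static void PrintWithoutDescriptions(std::string const &Stanza)
{
   std::istringstream Lines(Stanza);
   std::string Line;
   bool Skipping = false;
   while (std::getline(Lines, Line))
   {
      if (Line.compare(0, 11, "Description") == 0)
         Skipping = true;                       // Description: or Description-md5:
      else if (Line.empty() == false && (Line[0] == ' ' || Line[0] == '\t'))
      {
         if (Skipping == true)                  // continuation line of a skipped field
            continue;
      }
      else
         Skipping = false;                      // a new (unrelated) field starts here
      if (Skipping == false)
         std::cout << Line << '\n';
   }
}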
fix if-clause to generate hook-info for 'rc' packages
The code incorrectly skipped printing the current version information
if the package has no current version for APT (but does for dpkg, as is
the case for packages which are removed but not purged), because of an
unintended "else if" rather than an "if".
prevent MarkInstall of unsynced Multi-Arch:same siblings
Multi-Arch: same packages can be co-installed, but need to have the same
version for all installed packages (aka "siblings"). Otherwise the
unsynced versions will fight against each other, and the auto-install as
well as the problem resolver will later have to decide between holding the
packages back or removing one of the siblings (usually a foreign one),
taking a bunch of packages (like the entire foreign setup) with it.
The idea here is to be more proactive: MarkInstall will fail for a
package if the siblings aren't synced, so we don't allow a situation in
which a resolver has to decide whether to hold or to remove-upgrade, under
the assumption that the remove-upgrade decision is always wrong and
doesn't deserve to be explored (except for valid out-of-syncs of course).
That's a pretty bold move for a library which is used by different
solvers, so this check is done in IsInstallOk and can be overridden if
front-ends want to.
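A front-end that prefers to make this decision itself could therefore
override the check along these lines (a minimal sketch; check the real
declaration in apt-pkg/depcache.h before relying on the exact signature):

#include <apt-pkg/depcache.h>

// Sketch: accept what the library would refuse in IsInstallOk and leave
// the unsynced-siblings decision to this front-end's own resolver.
class MyDepCache : public pkgDepCache
{
public:
   explicit MyDepCache(pkgCache * const Cache) : pkgDepCache(Cache) {}

   virtual bool IsInstallOk(pkgCache::PkgIterator const &Pkg, bool AutoInst,
                            unsigned long Depth, bool FromUser)
   {
      return true;
   }
};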
The default is to acquire all architectures from APT::Architectures,
which can be changed with arch=, but this isn't very flexible if you want
"mostly" the default, as you would then have to hardcode all the
architectures. So arch-= and arch+= can be used to add/remove
architectures from the default set.
On a machine with 'amd64' and 'i386' configured the lines:
deb [arch+=armel] http://example.org/debian wheezy rocks
deb [arch-=amd64] http://example.org/debian jessie rocks
will result in the download of:
wheezy Packages for 'amd64', 'i386' and 'armel'
jessie Packages for 'i386'
Version 3 for DPkg::Pre-Install-Pkgs with MultiArch info
On top of Version 2, this adds to all displayed version numbers the
architecture as well as the MultiArch flag, for consumption by the hooks.
Most of the time the architecture will be the same for both versions
displayed, but packages might change from "all" to "any" (or back)
between versions, so we can't display the architecture just once per package.
Pseudo-Format for Version 3:
<name> <version> <arch> <m-a-flag> <compare> <version> <arch> <m-a-flag>
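With made-up package data purely for illustration, a line following this
pseudo-format could look like:
libexample1 1.0-1 amd64 same < 1.0-2 amd64 same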
Adam Conrad [Tue, 6 Nov 2012 22:54:31 +0000 (15:54 -0700)]
* Fix up two things in debian/apt.auto-removal.sh:
- Use exact matches with $-terminated regexes, so we don't get
confusion between similarly-named kernel flavours.
- Keep linux-backports-modules in sync with installed kernels.
Conflicts:
configure.in
debian/changelog
doc/apt-verbatim.ent
Forking only after being ready to accept clients avoids races with the
tests, which sometimes failed on the first 'apt-get update' (or similar)
with the previous strategy of starting in the background and hoping for
the best…
The commit also fixes some overlooked output-order changes regarding
Description-md5 as well as (I-M-S) race conditions in various tests.
fix SHA2* cleanups to zero-out the complete context
GCC 4.8 is now clever enough to warn about:
contrib/sha2_internal.cc: In function ‘char* SHA256_End(SHA256_CTX*, char*)’:
contrib/sha2_internal.cc:656:31: warning: argument to ‘sizeof’ in ‘void*
memset(void*, int, size_t)’ call is the same expression as the destination;
did you mean to dereference it? [-Wsizeof-pointer-memaccess]
MEMSET_BZERO(context, sizeof(context));
So fix it as suggested. It's interesting, though, that the SHA2*
calculation, as far as we need it, works even without the zeroing.
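The suggested fix is to take the size of the pointed-to context rather
than of the pointer, i.e. roughly:

/* before: only zeroes sizeof(SHA256_CTX *) bytes, i.e. the pointer size */
MEMSET_BZERO(context, sizeof(context));
/* after: zeroes the complete context structure */
MEMSET_BZERO(context, sizeof(*context));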
The for-loop iterating over the DepIterators which need configuration
can (and in 'complicated' situations will) run multiple times, so we
can't just call GlobOr on the DepIterator as it modifies it, meaning the
next iteration over the list ends up checking another dependency. That
may lead us into an 'Internal error, packages left unconfigured. foopkg',
or we are 'lucky' and calculate a solution which might break down the line.
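The gist of the fix (simplified; the surrounding loop is omitted) is to
let GlobOr work on a copy so the stored DepIterator stays untouched for
the next pass:

// D is an iterator into the list of DepIterators we may walk repeatedly.
// GlobOr() advances the object it is called on, so call it on a copy:
pkgCache::DepIterator Start, End;
pkgCache::DepIterator Discard = *D;   // copy, the list entry itself is kept
Discard.GlobOr(Start, End);           // modifies Discard, not *D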
With the self-grown splitting we got the problem of not recovering from
networks which just reply with invalid data, like those sending back
login pages to authenticate with the network (e.g. hotels).
The good thing about the InRelease file is that we know it must be
clearsigned (a Release file might or might not have a detached sig), so
if we get a file but are unable to split it, something is seriously
wrong and there is not much point in trying further.
The Acquire system already looks out for a NODATA error from gpgv, so
this adds a new error message, including this magic word, sent to the
acquire system in case the splitting we now do ourselves fails.
We have a test which traditionally required lighttpd to be executed, as
it needs a webserver supporting some kind of URI rewriting.
Now, with a few lines of code, our own webserver can do this and the
testcase can be enabled by default. This test hinted at the bug fixed
in the previous commit, so having more tests which can easily be run
is a good thing.
Before we download the 'new' InRelease file, the old file is moved out
of the way under the name 'foobar_InRelease.reverify'. So if no partial
file for the 'new' file exists, take the modification time from this
reverify file, so that if we get an IMS hit for the InRelease file we
can move the reverify file back as the new file rather than downloading
the 'new' file even though we already have it.
We do the same for Release files, and this happened to work until the
reverify renaming was corrected for InRelease files.
APT needs to acquire data in a secure fashion over an inherently
insecure channel, known as the internet, while communicating with
unreliable partners, known as webservers and proxies.
For our integration tests we have so far relied on 'normal' webservers,
but all of them have certain quirks, and none is able to provide all the
quirks which can be observed in the wild and which we therefore have to
test with. So this webserver isn't trying to be fast, secure or
feature-complete, but to provide all the quirks we need in a
consistent way.
This webserver also makes the APT project self-contained, as it is now
able to generate, serve as well as acquire package indexes. ;)
try defaults if auto-detection failed in apt-cdrom
The default is to ask udev for the location and mountpoints of CD-ROMs,
but the old way of specifying the mountpoint is still available and
is now tried in case udev doesn't find any CD-ROM.
It will probably fail, too, but this way we get a bunch of error
messages and the user can get an idea of how to make his setup work
even if udev can't be convinced to return something useful.
do not blindly assume that all package stanzas have a "Description:"
field, in 'apt-cache show' as well as in the cache creation itself.
We now assume instead that if the stanza has a Description, it will not
be the first field, as we look for "\nDescription" to take care of the
MD5sum as well as (maybe ignored) translated Descriptions embedded in
the package stanza.
ensure state-dir exists before copying cdrom files
We do the same in the acquire system which handles the 'normal'
downloads, so do it here as well, even though it's unlikely anyone
will ever notice (besides testcases of course …)
fail in CopyFile if the FileFds have error flag set
Testing for global PendingErrors in users of CopyFile is incorrect
insofar as unrelated errors will prevent us from copying perfectly
fine files, and checking the validity of the files is better done in
CopyFile itself as it already checks whether the files are at least opened.
Also add a higher-level error message to the error stack if it fails.
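Roughly (the message wording is made up; treat this as a sketch, not the
exact change), the guard at the top of CopyFile becomes:

// refuse to copy if either file isn't open or already has its error flag set
if (From.IsOpen() == false || To.IsOpen() == false ||
    From.Failed() == true || To.Failed() == true)
   return _error->Error("Copy involves an unopened or failed FileFd");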
For testcases it might sometimes be handy to add trap-actions before
the general cleanup, e.g. if a test has set directories read-only,
which rm doesn't want to remove even with --force applied
(it's fine with files though).
OpenDescriptor should autoclose fd always on error
OpenInternDescriptor failures would cause additional errors to be
generated by double-closing an fd. Other errors (although these are
only generated if the method is used incorrectly, so they are unlikely)
didn't close the fd either.
don't explicitly init ExtractTar InFd with invalid fd
The default constructor of the FileFd will kick in anyway, and it knows
that the Fd is invalid, while with this explicit call it must be
assumed that the fd is in fact valid, which might generate errors
in the future.
set Fail flag in FileFd on all errors consistently
Previously some errors would set the Fail flag while others didn't,
without a clear reason, even though all errors leave a bad FileFd behind,
so we now use a helper to ensure that all errors set the flag.
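Such a helper boils down to something like this (a simplified sketch,
assuming a corresponding declaration on FileFd; not necessarily the exact
shape used in fileutl.cc): it marks the FileFd as failed and records the
message on the global error stack in one place.

#include <apt-pkg/error.h>
#include <apt-pkg/fileutl.h>
#include <string>

// Sketch: set the Fail flag so the FileFd itself remembers that something
// went wrong, and report the error to the global error stack in one go.
bool FileFd::FileFdError(std::string const &Message)
{
   Flags |= Fail;
   return _error->Error("%s", Message.c_str());
}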