Working with strings C-style is complicated and error-prone,
so by converting to C++ style we gain some simplicity and
avoid buffer overflows in later extensions.
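A minimal illustration of the kind of change (hypothetical helper, not
the actual apt code):

    #include <string>

    // C-style: fixed buffer, overflows once the inputs outgrow it:
    //   char buf[32];
    //   strcpy(buf, prefix);
    //   strcat(buf, suffix);
    //
    // C++-style: the string grows as needed, no overflow to worry about.
    std::string Join(std::string const &prefix, std::string const &suffix)
    {
       return prefix + suffix;
    }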
Jérémy Bobbio [Tue, 10 Mar 2015 09:09:44 +0000 (10:09 +0100)]
stop displaying time of build in online help
As part of the “reproducible builds” effort [1], we have noticed that
apt could not be built reproducibly.
One issue is that it uses the __DATE__ and __TIME__ macros of the C
preprocessor to display the time of build in the online help. We
believe this information is not really useful to users, as they can
always look at the package data and metadata to figure it out.
The attached patch simply removes this information. All
non-documentation packages can then be built reproducibly with our
current experimental framework.
[David: changed the string slightly to be untranslatable as well]
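The gist of the change, simplified (not the exact apt diff):

    #include <iostream>

    #define VERSION "1.0.9.x" // stand-in for the real version macro

    // before: embedded the build time, making every build differ:
    //   std::cout << "apt " VERSION " compiled on " __DATE__ " " __TIME__ << "\n";
    // after: just the version, which is reproducible:
    void ShowVersion()
    {
       std::cout << "apt " VERSION << "\n";
    }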
We now use test{success,failure} all over the place in the framework,
so it is only consequential to do this in the situations in which we
test for a specific output as well.
Helmut Grohne [Mon, 9 Mar 2015 17:11:10 +0000 (18:11 +0100)]
parse arch-qualified Provides correctly
The underlying problem is that libapt-pkg does not correctly parse
these Provides. Internally, it creates a version named "baz:i386" with
architecture amd64. Of course, such a package name is invalid, so this
version is completely inaccessible. Hence this bug should not cause apt
to accept a broken situation as valid; nevertheless, it prevents using
architecture-qualified depends.
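A minimal sketch of the intended parse (hypothetical helper, not apt's
actual API): "baz:i386" has to be split into package name and
architecture instead of being stored verbatim as a package name:

    #include <string>

    void SplitProvides(std::string const &qualified,
                       std::string &name, std::string &arch)
    {
       std::string::size_type const colon = qualified.find(':');
       if (colon == std::string::npos)
       {
          name = qualified; // unqualified: architecture of the provider
          arch.clear();
       }
       else
       {
          name = qualified.substr(0, colon);  // "baz"
          arch = qualified.substr(colon + 1); // "i386"
       }
    }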
This way the 'correct' version is carried over into the po files to
reflect which version they were built for rather than the version before
the current one.
(error) Same iterator is used with different containers
cppcheck reports this error. It is not really a problem for us, as the
API can actually deal with it via implicit conversion, but being
explicit can't hurt, and the fewer reported errors the better.
properly implement pkgRecords::Parser for *.deb files
Implementing FileName() works in most cases for us, but other
frontends might need more, and even for us it is not very stable, as
the normal Jump() implementation is pretty bad on a deb file and
produces errors of its own at times.
So we replace this makeshift with a complete implementation, mostly by
just shuffling code around.
Bug #778375 uncovered that https wasn't properly integrated into the
class family tree of http as it was supposed to be, leading to a NULL
pointer dereference. Fixing this 'properly' was deemed too much diff
for practically no gain that late in the release, so commit
0c2dc43d4fe1d026650b5e2920a021557f9534a6 just fixed the symptom, while
this commit fixes the cause and adds a test.
The implementation was needlessly complex and flawed in that it
prevented positive dependencies from gaining points like they did
before these commits, making library transitions harder instead of
simpler. It worked out most of the time anyhow, out of pure 'luck' (and
other ways of gaining points), or got misattributed to a temporary
hiccup.
Your mileage may vary, but don't worry: There is more than one way to
do it, but our one size fits all is not a bigger hammer, but an entire
roundhouse kick! So brace yourself for the tl;dr: The limit is gone.*
Beware: This also fixes the problem that a double newline is
unconditionally added 'later', which is an overcommitment in case the
dsc filesize is limit-2 <= x <= limit.
* limited to numbers fitting into an unsigned long long.
Michael Vogt [Tue, 6 Jan 2015 09:54:24 +0000 (10:54 +0100)]
Add regression test for the previous commit
The issue was that https.cc never called URIStart(); one way to detect
this is that no download progress is generated without this call. The
test now checks for this and, as a side effect, will also ensure that
we do not accidentally break download progress reporting and
Acquire::{http,https}::Dl-Limit.
Michael Vogt [Mon, 5 Jan 2015 09:27:53 +0000 (10:27 +0100)]
Fix missing URIStart() for https downloads
Add an explicit ReceivedData to HttpsMethod that indicates when we got
data from the connection, so that we can send URIStart() to the
parent.
This is needed because URIStart got moved in f9b4f12d from the
progress_callback to write_data(), and it only checks for Res.Size. In
the old code, if progress_callback is called by libcurl (and sets
Res.Size) before write_data is called, then URIStart() is never sent.
Making this an explicit ReceivedData variable fixes the issue.
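A minimal sketch of the fix, simplified from the names used in this
commit (not the complete HttpsMethod):

    #include <cstddef>

    struct HttpsMethodSketch
    {
       bool ReceivedData = false;
       void URIStart() { /* reports to the parent that the fetch began */ }

       // called by libcurl for each chunk of body data
       size_t write_data(void *buffer, size_t size, size_t nmemb)
       {
          if (ReceivedData == false)
          {
             URIStart();        // previously skipped if Res.Size was set
             ReceivedData = true;
          }
          // ... append buffer to the destination file ...
          (void)buffer;
          return size * nmemb;  // tell libcurl everything was consumed
       }
    };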
This results in an extra image package being removed from
APT::NeverAutoRemove, losing the intended effect of keeping the {current,
previous, latest} set of images installed.
Requiring a “.” in the package name tightens the matched package names
to those that are installing a specific version of the image, thus
eliding the meta-packages.
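A purely illustrative sketch of such a tightened match (hypothetical
pattern, not the shipped configuration): requiring a "." accepts
"linux-image-3.16.0-4-amd64" but rejects the meta-package
"linux-image-amd64".

    #include <regex>
    #include <string>

    bool IsVersionedImagePackage(std::string const &name)
    {
       static std::regex const versioned("linux-image-.*\\..*");
       return std::regex_match(name, versioned);
    }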
pass-through stdin fd instead of content if not a terminal
Commit 299aea924ccef428219ed6f1a026c122678429e6 fixes the problem of
not logging the terminal in case stdin & stdout are not a terminal. The
problem is that we then try to pass stdin content through by reading
from the apt process's stdin and writing it to the stdin of the child
(dpkg), which works great for users who can control themselves, but
pipes and co are a bit less forgiving, causing us to pass everything to
the first child process; if the sending part of the pipe is e.g. 'yes',
we will never see the end of it (as the pipe is full at some point and
further writing blocks).
There is a simple solution for that, of course: If stdin isn't a
terminal, we use the apt process's stdin as stdin for the child
directly (we don't do this if it is a terminal, to be able to save the
typed input in the log).
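A minimal sketch of that decision (simplified, not the actual dpkgpm
code), as run in the child before exec'ing dpkg:

    #include <unistd.h>

    void SetupChildStdin(int const pty_slave_fd)
    {
       if (isatty(STDIN_FILENO) == 1)
          dup2(pty_slave_fd, STDIN_FILENO); // terminal: route input via
                                            // the pty so it also ends
                                            // up in term.log
       // otherwise apt's own stdin stays in place, so a pipe feeding
       // apt feeds dpkg directly and writers like 'yes' can't stall us
    }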
always run 'dpkg --configure -a' at the end of our dpkg calls
dpkg now checks for dependencies before running triggers, so that
packages can end up in trigger states (especially those we are not
touching at all with our calls) after apt is done running.
The solution to this is trivial: Just tell dpkg to configure everything
after we have (supposedly) configured everything already. In the worst
case this means dpkg will have to run a bunch of triggers; usually it
will just do nothing, though.
The code to make this happen was already available, so we just flip a
config option here to cause it to be run. This way we can keep
pretending that triggers are an implementation detail of dpkg.
--triggers-only would supposedly work as well, but --configure is more
robust with regard to future changes to dpkg, and is something we will
hopefully make use of in future versions anyway (as was planned at the
time this and related options were implemented).
Note that dpkg currently has a workaround implemented to allow
upgrades to jessie to be clean, so that the test works before and
after. Also note that the test (compared to the one in the bug) drops
the await test, as it is considered a loop by dpkg now.
If we have no controlling terminal, opening a terminal will make it
our controller, which is a serious problem if this happens to be the
pseudo terminal we created to run dpkg in, as we will close this
terminal at the end, hanging ourselves up in the process…
The offending open is the one we do to have at least one slave fd open
all the time, but for good measure we apply the flag also to the slave
fd opened in the child process, as we set the controlling terminal
explicitly there.
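A minimal sketch of the flag in question (simplified from the dpkgpm
code): O_NOCTTY keeps the opened slave from becoming our controlling
terminal if we do not have one yet.

    #include <fcntl.h>

    int OpenSlaveKeepalive(char const * const slave_name)
    {
       return open(slave_name, O_RDWR | O_NOCTTY);
    }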
This is a regression from 150bdc9ca5d656f9fba94d37c5f4f183b02bd746,
with the slight twist that this usecase was silently broken before, in
that it wasn't logging the output in term.log (as a pseudo terminal
wasn't created).
Real webservers (like apache) actually send an error page with a 416
response, but our client didn't expect it, leaving the page on the
socket to be parsed as the response for the next request (http) or as
file content (https), which isn't what we want at all… The symptom is a
"Bad header line", as html usually doesn't parse that well as an
http header.
This manifests itself e.g. if we have a complete file (or larger) in
partial/ which isn't discarded by If-Range because the server doesn't
support it (or the file is just newer; think: mirror rotation).
It is a sort-of regression of 78c72d0ce22e00b194251445aae306df357d5c1a,
which removed the filesize - 1 trick, but that had its own problems…
To properly test this, our webserver gains the ability to reply with
transfer-encoding: chunked, as most real webservers will use it to send
dynamically generated error pages.
(The tests and their binary helpers had to be slightly modified to
apply, but the patch to fix the issue itself is unchanged.)
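A minimal sketch of the intended handling (hypothetical helper names,
not the real server code):

    void DrainErrorPage();      // reads the body, incl. chunked encoding
    void RestartWithoutRange(); // retries the request from byte 0

    bool HandleRangeNotSatisfiable(int const status, bool const partialIsComplete)
    {
       if (status != 416)
          return false;
       DrainErrorPage();        // never leave the error page on the socket
       if (partialIsComplete == false)
          RestartWithoutRange();
       return true;             // and never treat the page as payload
    }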
correct architecture detection for 'rc' packages for purge
We were already considering these cases, but the code was flawed, so
that packages changing architecture were handled incorrectly and hence
the wrong architecture was used to call dpkg, making dpkg say the
package isn't installed (which it isn't, for the requested
architecture).
properly handle packages to be reinstalled in ordering
The bugreport itself describes the case of the ordering code detecting
a loop where none is present, but the testcase also finds cases in
which there actually is a loop and we fail to realize it. --reinstall
can be considered an interactive command, though, and it usually
doesn't encounter such "hard" problems (= looping essentials), so this
is less serious than it sounds at first.
James McCoy [Fri, 28 Nov 2014 13:21:06 +0000 (14:21 +0100)]
support long keyids in "apt-key del" instead of ignoring them
Given a long keyid, apt-key reports just "OK" all the time, but
doesn't delete the mentioned key, as it doesn't find the key.
Note: In debian/experimental this was closed with
29f1b977100aeb6d6ebd38923eeb7a623e264ffe, which just added the
testcase, as the rewrite of apt-key had fixed this as well.
We run dpkg on its own pty, so we can log its output and have our own
output around it (like the progress bar), while also allowing debconf
and configfile prompts to happen.
In commit 223ae57d468fdcac451209a095047a07a5698212 we changed to
constantly reopening the slave for kfreebsd. This has the side effect,
though, that in some cases slave and master will lose their connection
on linux, so that no output is passed along anymore. We fix this by
always having an fd referencing the slave open (linux), but we don't
use it (kfreebsd).
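A minimal sketch of that setup (simplified from the real code): one fd
referencing the slave stays open for the whole run and is never used
for I/O.

    #include <fcntl.h>
    #include <stdlib.h>

    int SetupPty(int &slave_keepalive)
    {
       int const master = posix_openpt(O_RDWR | O_NOCTTY);
       if (master == -1 || grantpt(master) == -1 || unlockpt(master) == -1)
          return -1;
       // keeps master and slave connected on linux, unused on kfreebsd
       slave_keepalive = open(ptsname(master), O_RDWR | O_NOCTTY);
       return master;
    }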
Failing to get our PTY up and running has many (bad) consequences,
including (but not limited to, nor all at once or in every case)
garbled output, no output, no logging, a (partial) mixture of the
previous items, …
This commit therefore also reshuffles quite a bit of the creation code
to get especially the output part up and running on linux, and the
logging for kfreebsd.
Note that the testcase tries to cover some cases, but this is an
interactivity issue, so only interactive usage can really be a good
test.
Only "recent" versions of dpkg support stdin for merge instead of a
file, so as a quick fix we delay calling it until we really need it
which fixes most of the problem already.
Checking for a specific dpkg version here is deemed too much work, just
like using a temporary file here and depends a too high requirement for
this minor usecase. After all, it didn't work at all before, so we break
nobody here and can fix it if someone complains (with a patch).
While on linux files are created in /tmp with $USER:$USER, on my
kfreebsd testmachine they are created with $USER:root, so we pull some
strings here to make it work on both.
create our cache and lib directory always with mode 755
For a while now we have auto-created the last two directories in
/var/lib/apt/lists (similarly for /var/cache/apt/archives), which is
very nice for systems having any of those on tmpfs or other
non-persistent storage. It also means, though, that this creation is
affected by the default umask, so for people with aggressive umasks
like 027 the directories will be created with 750, leaving all
non-root users out. That is usually exactly what we want when this
umask is set, but the cache and lib directories contain public
knowledge: there is no need to protect them from viewers, and they
render apt completely useless if not readable.
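A minimal sketch of the fix (simplified): mkdir() honours the umask,
so with umask 027 a requested 0755 becomes 0750; an explicit chmod()
afterwards enforces the world-readable mode these public directories
need.

    #include <sys/stat.h>

    bool CreatePublicDirectory(char const * const path)
    {
       if (mkdir(path, 0755) != 0)
          return false;
       return chmod(path, 0755) == 0;
    }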
Usually they don't provide a lot in terms of what they test, but they
help in covering many lines from strictly anecdotal commands (stats,
moo) and error messages, so that stuff which really needs to be tested,
but isn't, is more visible in coverage reports.
reenable support for -s (and co) in apt-get source
The conversion to accept only relevant options for commands has
forgotten another one, so we add it back, even though the usecase might
very well be equally well served by --print-uris.
A version belongs to a section and hence has a section member of its
own. A package, on the other hand, can have multiple versions from
different sections. This was "solved" by using the section which was
parsed first, following sources.list order, but that is obviously
horribly unpredictable.
Users are way better off with the Section() as returned by the version
they are dealing with. It is likely the same for all versions of a
package, but in the few cases it isn't, it is important (like packages
moving from main/* to contrib/* or into oldlibs …).
Backport of 7a66977, which actually removes the member outright.
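A sketch of the intended use, based on the pkgCache iterators
(simplified): ask the version at hand for its section instead of
asking the package.

    #include <apt-pkg/pkgcache.h>
    #include <iostream>

    void PrintSections(pkgCache::PkgIterator const &Pkg)
    {
       for (pkgCache::VerIterator Ver = Pkg.VersionList(); Ver.end() == false; ++Ver)
          std::cout << Ver.VerStr() << " is in " << Ver.Section() << '\n';
       // Pkg.Section() only reflected whichever version happened to be
       // parsed first
    }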
Collect all hashes we can get from the source record and put them into
a HashStringList, so that 'apt-get source' can use it instead of always
using the MD5sum.
We therefore also deprecate the MD5 struct member in favor of the list.
While at it, the parsing of the Files is enhanced so that records which
miss "Files" (aka MD5 checksums) are still searched for other
checksums, as they include just as much data, just not under such a
nice and catchy name.
This is a cherry-pick of 1262d35 with some dirty tricks to preserve the
ABI.
APT supports more than just one HashString and even allows enforcing
the usage of a specific hash. This class is intended to help with
storing and passing around the HashStrings.
The cherry-pick here adds the un-const-ification of HashType() compared
to f4c3850ea335545e297504941dc8c7a8f1c83358. The point of this commit
is adding infrastructure for the next one. All by itself, it just adds
new symbols.
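A sketch of the intended use (assuming the interface these commits
describe; exact method names may differ between apt versions): collect
whatever the record offers, then verify with the best supported hash.

    #include <apt-pkg/hashes.h>
    #include <string>

    bool CheckSourceFile(HashStringList const &hashes, std::string const &file)
    {
       if (hashes.usable() == false) // no supported hash collected
          return false;
       return hashes.VerifyFile(file);
    }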
Do the same with less code in apt-get. This especially ensures that
the lock file (and the parent directories) exist before we try to
lock. It also means that clean now creates the directories if they are
missing, so we return to a proper clean state.
We use it in other places already as well, and even though it is a
fairly new addition to the POSIX family (2008), rolling our own here is
really something which should be avoided in such an important method.
dpkg wants to know about a package before it can be put on hold, so we
have to at least hint about its existence in the available file it
"maintains" to know about such stuff. The simple thing would probably
be to just feed all Packages files into dpkg as well, but what would be
the point, really? Exactly: so we take a shortcut here and just create
dummies in the available file if we need to, which isn't going to be
that common, as usually you are holding packages back and not off.
Who would have thought that a simple feature like setting a package on
hold requires more than 200 lines of code… at least with the testcase
it is now explicitly tested code.
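A hypothetical sketch of such a dummy (simplified stanza): just enough
for dpkg to know the package exists before it is asked to hold it.

    #include <string>

    std::string MakeAvailableDummy(std::string const &name,
                                   std::string const &version,
                                   std::string const &arch)
    {
       return "Package: " + name + "\n" +
              "Version: " + version + "\n" +
              "Architecture: " + arch + "\n\n";
    }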
The tool checks for the debconf version before printing the info it
extracts, so that it doesn't extract data which can't be interpreted
before debconf is upgraded. It is only fair to check for this behaviour
in the tests.
By convention, if I run a tool with --help or --version I expect it to
exit successfully with the usage, while if I call it wrongly (like
without any parameters) I expect the usage message shown with a
non-zero exit.
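A sketch of that convention (hypothetical main, not a specific apt
tool; 100 is apt's customary error exit code):

    #include <cstring>

    void ShowUsage();               // prints the usage message
    int Run(int argc, char **argv); // the tool's real work

    int main(int argc, char **argv)
    {
       if (argc >= 2 && (strcmp(argv[1], "--help") == 0 ||
                         strcmp(argv[1], "--version") == 0))
       {
          ShowUsage();
          return 0;   // the user asked for it: success
       }
       if (argc < 2)
       {
          ShowUsage();
          return 100; // wrong invocation: failure
       }
       return Run(argc, argv);
    }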
We have a d-pointer available here, so go ahead and use it, which also
helps in hiding some dirty details here. The "hard" part is keeping the
ABI for the inlined methods so that they don't break – at least not
more than before, as much of the point, besides a speedup, is support
for more than 256 fields in a single section.
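A generic d-pointer sketch (not the exact apt class): the public class
keeps only an opaque pointer, so private members can be added or
changed without altering the public object layout.

    class TagSectionPrivate; // fully defined in the .cc file only

    class TagSectionSketch
    {
       TagSectionPrivate * const d; // all real state lives behind d
    public:
       TagSectionSketch();
       ~TagSectionSketch();
       char const *FindField(char const *name) const;
    };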
No idea what the intention was here, but it seems like a leftover from
a rework which happened to be done differently later. As it doesn't
provide anything at the moment, we just revert to the previous abi
here.
The change itself is no problem ABI-wise, but the removal of the old
undynamic hashtables is, so we bring them back for older abis and
happily use the now-available free space to backport more recent
additions like the dynamic hashtable itself.
explicit overload methods instead of adding parameters
Adding a new parameter (with a default) is an ABI break, but you can
overload a method, which is "just" an API break for everyone taking
references to this method (aka: nobody).
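A generic illustration (not a specific apt method):

    // ABI break, the mangled symbol changes:
    //   void Render(int width, bool color = false);
    //
    // ABI safe, the old symbol stays and a new one is added:
    void Render(int width);
    void Render(int width, bool color);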
We have a bunch of classes which are of no use to the outside world,
but were still exported and so needed to preserve ABI/API. Marking them
as hidden so they are no longer exported is a big API break in theory,
but in practice nobody is using them – and if they were, it would be a
bug.
We can't add a new virtual method without breaking the ABI, but we can
freely add new non-virtual methods, so for older ABIs we just implement
this method with a dynamic_cast, so that clients can be more ignorant
about the API here and especially don't need to pull a very dirty trick
by assuming internal knowledge (like apt-get did here).
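A generic sketch of the pattern: the base gains a plain (non-virtual)
method which forwards to the derived implementation via dynamic_cast,
so no new vtable slot is added.

    class Base
    {
    public:
       virtual ~Base() {}
       int NewFeature(); // non-virtual, safe to add on an old ABI
    };
    class Derived : public Base
    {
    public:
       int NewFeatureImpl() { return 42; }
    };
    int Base::NewFeature()
    {
       if (Derived * const impl = dynamic_cast<Derived *>(this))
          return impl->NewFeatureImpl();
       return 0; // sensible fallback for other subclasses
    }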
replace ignore-deprecated #pragma dance with _Pragma
For compatibility we use/provide and fill quite some deprecated
methods and fields, which subsequently earns us a warning for using
them. These warnings therefore have to be disabled for these code
parts, and that is what this change now does in a slightly more elegant
way.
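A sketch of such macros (close to what apt ships, simplified):

    #define IGNORE_DEPRECATED_PUSH \
       _Pragma("GCC diagnostic push") \
       _Pragma("GCC diagnostic ignored \"-Wdeprecated-declarations\"")
    #define IGNORE_DEPRECATED_POP \
       _Pragma("GCC diagnostic pop")

    // usage around access to a deprecated member:
    //   IGNORE_DEPRECATED_PUSH
    //   record.MD5Hash = hash;
    //   IGNORE_DEPRECATED_POP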
Division by result of sizeof(). memset() expects a size in bytes
"did you intend to multiply instead?" is what cppcheck helpful says and
it is absolutely right. Doesn't make a whole lot of a difference though
as we are talking about 'char' in this testcase, but just to be sure.
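A generic illustration of the bug class (not the exact apt line):
memset expects a size in bytes, so passing an element count only
clears part of the array for types wider than char.

    #include <cstring>

    void Clear(int (&buf)[16])
    {
       // wrong: clears 16 bytes instead of 16 * sizeof(int):
       //   memset(buf, 0, sizeof(buf) / sizeof(buf[0]));
       memset(buf, 0, sizeof(buf)); // right: size in bytes
    }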
(error) va_list 'args' was opened but not closed by va_end()
The manpage of va_start and co additionally says:
On some systems, va_end contains a closing '}' matching a '{' in
va_start, so that both macros must occur in the same function, and in a
way that allows this.
So instead of returning/breaking instantly, we save the return value,
make a proper teardown with va_end in all cases, and only return after
that.
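A generic sketch of the pattern: remember the result, run va_end
unconditionally, and only return afterwards.

    #include <cstdarg>
    #include <cstdio>

    bool Format(char *buf, size_t len, char const *fmt, ...)
    {
       va_list args;
       va_start(args, fmt);
       int const n = vsnprintf(buf, len, fmt, args);
       va_end(args); // always reached, matching va_start in this function
       return n >= 0 && static_cast<size_t>(n) < len;
    }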
The testcases have far worse problems if these ever end up being NULL
and/or are not given a value by the method called, but clang is right
to warn about it; we just don't want to fix it in testcases…
One word: "doh!" Commit f6d4ab9ad8a2cfe52737ab620dd252cf8ceec43d
disabled the check to prevent apt from downloading bigger patches
than the index it tries to patch. Happens rarly of course, but still.
Detected by scan-build complaining about a dead assignment.
To make up for the mistake a test is included as well.