[Slackbuilds-users] Call for Bug Fixes, Patches, etc

Kyle Guinn elyk03 at gmail.com
Mon Mar 14 04:10:16 UTC 2016


On 3/12/16, Erik Hanson <erik at slackbuilds.org> wrote:
> On Sun, 13 Mar 2016 00:23:17 +0000
> "Ryan P.C. McQuen" <ryan.q at linux.com> wrote:
>
>
>> Erik,
>>
>> Are you opposed to this solution?
>>
>> find -L . \
>>  \( -perm 777 -o -perm 775 -o -perm 750 -o -perm 711 -o -perm 555 \
>>   -o -perm 511 \) -print0 | \
>>   xargs -0 chmod 755
>> find -L . \
>>  \( -perm 666 -o -perm 664 -o -perm 640 -o -perm 600 -o -perm 444 \
>>   -o -perm 440 -o -perm 400 \) -print0 | \
>>   xargs -0 chmod 644
>>
>> It was proposed by B. Watson here:
>>
>> https://lists.slackbuilds.org/pipermail/slackbuilds-users/2015-November/015210.html
>>
>> It seems to work the same as the current template, just less
>> resource-intensive ... but I don't know for sure.
>
> I honestly thought we had agreed to make this change, and that
> another admin (rworkman?) agreed as well. I read through that thread
> just now and I don't see it... so maybe it happened on IRC, or not at
> all.
>
> Maybe another admin, rworkman or willysr, could put forward their
> thoughts on this change. I don't immediately see any problems with it,
> except possibly the number of arguments chmod can take? (which may
> actually be a bash limitation) I suppose we know the answer to that
> since all 18k+ files from the mame source worked fine? If that's true,
> as in all of those files actually got hit, and not just 100 or 1000 or
> whatever the command-line limit might be, then I'm in favor of this
> change. I'm not really in a position to run tests to find out, though.
>
> If any of that sounds confusing, what I'm imagining is passing 18k+
> file names to chmod, and I seem to remember that type of thing being an
> issue in the past. I apologize in advance if that isn't true or I'm
> ignorant of exactly what's happening in the above code.

What I've found in the past is that firing off a new chmod process for
each file was the slowest part.  The simplest way to fix that is to
change '\;' to '+' so that only a few processes are started; each one
receives many filenames instead of just one (see the sketch below).
In theory it should be faster than piping to xargs, since you skip
setting up the pipe, starting the xargs process, and parsing the
pipe's contents.
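
For reference, the same template with '-exec ... +' instead of the
xargs pipeline might look like this (a sketch; the permission lists
are copied from the proposal above):

  find -L . \
   \( -perm 777 -o -perm 775 -o -perm 750 -o -perm 711 -o -perm 555 \
    -o -perm 511 \) -exec chmod 755 {} +
  find -L . \
   \( -perm 666 -o -perm 664 -o -perm 640 -o -perm 600 -o -perm 444 \
    -o -perm 440 -o -perm 400 \) -exec chmod 644 {} +

As a bonus, find only runs chmod when something actually matched,
whereas GNU xargs runs the command once even on empty input unless
you pass -r/--no-run-if-empty.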

Take a look at "Limits on size of arguments and environment" in the
execve man page.  Both xargs and '-exec {} +' should be aware of these
limits when they start up a new process.  I believe they respect
ARG_MAX, though I'm not sure about any other limits.  Any limits in
bash shouldn't be involved in this command, and I really doubt that
chmod has some built-in limit.  My guess is that if you hit a limit
in bash in the past, it was due to file glob expansion.
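
If anyone wants to check their own system's limits, something like
this should do it (getconf is POSIX; --show-limits is a GNU xargs
extension):

  # Kernel limit on the combined size of argv + environment;
  # 2097152 bytes (2 MiB) is a common value on Linux.
  getconf ARG_MAX

  # Ask GNU xargs what limits it will actually use.
  xargs --show-limits < /dev/null

Either way, 18k+ filenames just get split across a few chmod
invocations; there's no hard cap on how many files get hit.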

That said, I'm still a big proponent of a single-line chmod command.
I would guess that only a handful of scripts need to preserve unusual
permissions, and if upstream took enough care to put those permissions
in the tarball, should we really be trying to sanitize them away?
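
To make that concrete, here's one hypothetical form of a single-line
command (just to illustrate the idea, not necessarily the exact
invocation we'd settle on).  chmod's symbolic 'X' only adds execute
where a file is a directory or already has an execute bit set, which
reproduces the 755/644 mapping in one recursive pass:

  # Normalize to 755/644-style permissions in one pass:
  # 'a-s' clears stray setuid/setgid bits, 'X' preserves
  # executability, 'go-w' removes group/other write.
  chmod -R a-s,u+rwX,go+rX,go-w .

Of course, that flattens any deliberate upstream permissions too,
which is exactly the question above.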

-Kyle

