Discussion:
security by kees-cookity [rant]
Rainer Weikusat
2024-05-23 20:44:06 UTC
Permalink
or "Maybe, I'm full of it BUT I work for Google ... !"

I have a program for executing another program with some fcntl-based file
locks held which I've been using for almost 20 years. This program always
opens files with O_CREAT added to the open flags so that, in case of multiple
concurrent calls of it and no previously existing file, one of these
calls creates the file and all others open it.
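The open-or-create-then-lock pattern described above can be sketched like this (a minimal illustration in Python rather than the actual 20-year-old program; the function name and return value are made up):

```python
import fcntl
import os

def run_with_lock(path):
    # O_CREAT without O_EXCL: in case of concurrent callers and no
    # previously existing file, one caller creates the file and all
    # the others simply open it -- the documented open(2) semantics.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX)  # blocks until the write lock is free
        return "lock held"              # the wrapped program would run here
    finally:
        os.close(fd)                    # closing the descriptor drops the lock
```

Which of the concurrent callers actually created the file is irrelevant; serialization comes from the fcntl lock, not from creation.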

Further, I have a directory with the following permissions:

drwxrwx--T 6 root caca 4096 May 23 20:22 ca-CA/

as I want members of the caca group to be able to create files in this
directory but not remove files owned by root. Inside this directory,
there's a file with the following permissions:

-rw-r--r-- 1 caca caca 3 Jan 1 1970 serial

A shell script associated with that tries to acquire an R/W lock on this
file in order to serialize concurrent invocations of it. On Debian 12,
when running as root, this fails with EACCES. That's due to a suckurity
feature (supposed to make the system suck harder) documented as follows (proc(5)):

/proc/sys/fs/protected_regular (since Linux 4.19)

1 Don't allow O_CREAT open(2) on regular files that the caller doesn't
own in world-writable sticky directories, unless the regular file is
owned by the owner of the directory.

2 As for the value 1, but the restriction also applies to
group-writable sticky directories.

The intent of the above protections is similar to protected_fifos, but
allows an application to avoid writes to an attacker-controlled regular
file, where the application expected to create one.

and the default value of this control knob is - for maximum suckurity, because
Google says that's how it must be done and Google CANNOT be wrong!!! -
obviously 2.

Who told this hyperintelligent being that O_CREAT means "application
expected to create a file" AND NOT "create file if it doesn't exist
already", ie, the documented semantics?

GRRRR.
Kaz Kylheku
2024-05-24 01:36:45 UTC
Permalink
Post by Rainer Weikusat
/proc/sys/fs/protected_regular (since Linux 4.19)
1 Don't allow O_CREAT open(2) on regular files that the caller doesn't
own in world-writable sticky directories, unless the regular file is
owned by the owner of the directory.
That makes no sense. In the absence of O_EXCL, O_CREAT pertains to files
that don't exist. Files that don't exist do not have an owner.

If the file exists, and O_CREAT is specified, but not O_EXCL, then
O_CREAT means nothing.

(If the file doesn't exist, the caller will own it when it gets created,
so "that the caller doesn't own" will not apply.)
Post by Rainer Weikusat
2 As for the value 1, but the restriction also applies to
group-writable sticky directories.
The intent of the above protections is similar to protected_fifos, but
allows an application to avoid writes to an attacker-controlled regular
file, where the application expected to create one.
IF the application expects to create a file, it should specify
O_EXCL | O_CREAT. That will then fail if the file exists already.
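The distinction can be demonstrated directly (in Python, whose os.open exposes the same flags; the path is throwaway):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "f")

# File doesn't exist yet: O_CREAT | O_EXCL creates it.
os.close(os.open(path, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o600))

# Now it exists: O_CREAT | O_EXCL fails with EEXIST, which is what an
# application that genuinely "expected to create" the file should ask for.
try:
    os.open(path, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o600)
    raise AssertionError("unreachable: O_EXCL open of an existing file")
except FileExistsError:
    pass

# Without O_EXCL, O_CREAT on an existing file means nothing: it just opens.
os.close(os.open(path, os.O_RDWR | os.O_CREAT, 0o600))
```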

If a file is attacker-controlled, that is independent of the
circumstances of its creation.

We can think about a protection mechanism that, in certain directories,
prevents a process from obtaining an open file descriptor to an
attacker-controlled file, regardless of whether it is being created.

Such a mechanism won't be looking at whether O_CREAT is present
in the open request, since it is bad to open an attacker-controlled
file whether creating it or not.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
Rainer Weikusat
2024-05-24 15:32:53 UTC
Permalink
Post by Kaz Kylheku
Post by Rainer Weikusat
/proc/sys/fs/protected_regular (since Linux 4.19)
1 Don't allow O_CREAT open(2) on regular files that the caller doesn't
own in world-writable sticky directories, unless the regular file is
owned by the owner of the directory.
That makes no sense. In the absence of O_EXCL, O_CREAT pertains to files
that don't exist. Files that don't exist do not have an owner.
If the file exists, and O_CREAT is specified, but not O_EXCL, then
O_CREAT means nothing.
This is obviously supposed to refer to the one use-case of O_CREAT the
guy who authored this could think of: a broken application trying to create a
temporary file in an insecure way. The application is broken as it would
need to use O_EXCL to achieve the intended semantics. This means this is
a workaround supposed to cause broken applications to fail in harmless
ways which happens to break correct applications trying to cooperate with
other correct applications wrt using a particular file.

It's basically a coded opinion statement: I, the undersigned, really think
O_CREAT should fail when the file already exists!

[...]
Post by Kaz Kylheku
Post by Rainer Weikusat
2 As for the value 1, but the restriction also applies to
group-writable sticky directories.
The intent of the above protections is similar to protected_fifos, but
allows an application to avoid writes to an attacker-controlled regular
file, where the application expected to create one.
[...]
Post by Kaz Kylheku
If a file is attacker-controlled, that is independent of the
circumstances of its creation.
We can think about a protection mechanism that, in certain directories,
prevents a process from obtaining an open file descriptor to an
attacker-controlled file, regardless of whether it is being created.
This leads to another question: What on G*ds f***ing earth is "an
attacker controlled regular file" security-wise? It's not that regular
files do anything. They just store data. I can think of a seriously
contrived issue here, namely, a broken app tries to create a file with
restrictive permissions in order to write $super_secret_stuff to it and
accidentally writes it to a file readable by another user instead. If
that's a real issue, I haven't heard of it yet. And I certainly don't
think this justifies breaking working, correct code which makes use of
UNIX semantics some Google employee happens to dislike.

"We must do something about security! This is said to be something about
security. Therefore, we must do it!" ("Yes minister" paraphrase).
M***@dastardlyhq.com
2024-05-24 15:44:20 UTC
Permalink
On Fri, 24 May 2024 16:32:53 +0100
Post by Rainer Weikusat
Post by Kaz Kylheku
We can think about a protection mechanism that, in certain directories,
prevents a process from obtaining an open file descriptor to an
attacker-controlled file, regardless of whether it is being created.
This leads to another question: What on G*ds f***ing earth is "an
attacker controlled regular file" security-wise? It's not that regular
files do anything. They just store data. I can think of a seriously
contrived issue here, namely, a broken app tries to create a file with
restrictive permissions in order to write $super_secret_stuff to it and
accidentally writes it to a file readable by another user instead. If
that's a real issue, I haven't heard of it yet. And I certainly don't
think this justifies breaking working, correct code which makes use of
UNIX semantics some Google employee happens to dislike.
Changing things that work because someone doesn't like them seems to be par for
the course these days. See systemd, Wayland, PulseAudio.

Some people can't or won't learn from what others have done before and think
they have some kind of magic insight into developing something better. Rarely
does that turn out to be the case.
Richard Kettlewell
2024-05-25 13:06:22 UTC
Permalink
Post by Rainer Weikusat
This leads to another question: What on G*ds f***ing earth is "an
attacker controlled regular file" security-wise? It's not that regular
files do anything. They just store data. I can think of a seriously
contrived issue here, namely, a broken app tries to create a file with
restrictive permissions in order to write $super_secret_stuff to it and
accidentally writes it to a file readable by another user instead. If
that's a real issue, I haven't heard of it yet.
The typical example is an install script which downloads the main
installer to a predictable filename in /tmp and then executes it.
Broken, indeed, but nevertheless a recurring issue.

The attacker pre-creates the download file with 0777 permission and then
injects their own code into it; it will subsequently be run as the
victim user. There’s a race to win here, i.e. getting a modification in
before execution completes, but it seems to be easy in practice.

People presumably set protected_regular=0 if the attack scenario never
arises in their environment, or if they prefer to take the risk.
--
https://www.greenend.org.uk/rjk/
Rainer Weikusat
2024-05-26 11:09:01 UTC
Permalink
Post by Richard Kettlewell
Post by Rainer Weikusat
This leads to another question: What on G*ds f***ing earth is "an
attacker controlled regular file" security-wise? It's not that regular
files do anything. They just store data. I can think of a seriously
contrived issue here, namely, a broken app tries to create a file with
restrictive permissions in order to write $super_secret_stuff to it and
accidentally writes it to a file readable by another user instead. If
that's a real issue, I haven't heard of it yet.
The typical example is an install script which downloads the main
installer to a predictable filename in /tmp and then executes it.
Broken, indeed, but nevertheless a recurring issue.
That's not really broken, only stupid. The broken bit is not using
O_EXCL to create the file. And the broken bit of the chokearound is
changing well-documented semantics of open flags with real use cases
based on the assumption that O_EXCL must have been 'forgotten'.
Post by Richard Kettlewell
The attacker pre-creates the download file with 0777 permission and then
injects their own code into it; it will subsequently be run as the
victim user.
There’s a race to win here, i.e. getting a modification in
before execution completes, but it seems to be easy in practice.
There's another 'race' to win here, namely, it must somehow be known
that a broken install script is about to be executed in order to
'attack' anyone. And "download random crap from the internet and run it"
is a recipe for disaster, anyway: The 'install script' is already
running as "the victim user" and offers a much more direct route to
victimising someone.
Post by Richard Kettlewell
People presumably set protected_regular=0 if the attack scenario never
arises in their environment, or if they prefer to take the risk.
If people actually do that (going against system defaults is an uphill
battle), sooner or later, a second control file will appear with the values

0 - pay attention to the setting in the original file
1 - pay attention to the setting in the original file every 3rd Thursday
2 - break O_CREAT, correct code be fucked!

(default again 2, obviously).
Richard Kettlewell
2024-05-27 08:19:30 UTC
Permalink
Post by Rainer Weikusat
Post by Richard Kettlewell
Post by Rainer Weikusat
This leads to another question: What on G*ds f***ing earth is "an
attacker controlled regular file" security-wise? It's not that regular
files do anything. They just store data. I can think of a seriously
contrived issue here, namely, a broken app tries to create a file with
restrictive permissions in order to write $super_secret_stuff to it and
accidentally writes it to a file readable by another user instead. If
that's a real issue, I haven't heard of it yet.
The typical example is an install script which downloads the main
installer to a predictable filename in /tmp and then executes it.
Broken, indeed, but nevertheless a recurring issue.
That's not really broken, only stupid. The broken bit is not using
O_EXCL to create the file. And the broken bit of the chokearound is
changing well-documented semantics of open flags with real use cases
based on the assumption that O_EXCL must have been 'forgotten'.
The adjective used to describe it isn’t the point. The point is that it
keeps happening.
Post by Rainer Weikusat
Post by Richard Kettlewell
The attacker pre-creates the download file with 0777 permission and
then injects their own code into it; it will subsequently be run as
the victim user.
There’s a race to win here, i.e. getting a modification in before
execution completes, but it seems to be easy in practice.
There's another 'race' to win here, namely, it must somehow be known
that a broken install script is about to be executed in order to
'attack' anyone.
Easy enough to arrange in a shared environment. Find a bit of software
relevant to the attacker’s job (or the job of whoever’s unprivileged
login they’ve compromised) and ask the IT department to install it.
--
https://www.greenend.org.uk/rjk/
Rainer Weikusat
2024-05-27 16:19:09 UTC
Permalink
Post by Richard Kettlewell
Post by Rainer Weikusat
Post by Richard Kettlewell
Post by Rainer Weikusat
This leads to another question: What on G*ds f***ing earth is "an
attacker controlled regular file" security-wise? It's not that regular
files do anything. They just store data. I can think of a seriously
contrived issue here, namely, a broken app tries to create a file with
restrictive permissions in order to write $super_secret_stuff to it and
accidentally writes it to a file readable by another user instead. If
that's a real issue, I haven't heard of it yet.
The typical example is an install script which downloads the main
installer to a predictable filename in /tmp and then executes it.
Broken, indeed, but nevertheless a recurring issue.
That's not really broken, only stupid. The broken bit is not using
O_EXCL to create the file. And the broken bit of the chokearound is
changing well-documented semantics of open flags with real use cases
based on the assumption that O_EXCL must have been 'forgotten'.
The adjective used to describe it isn’t the point. The point is that it
keeps happening.
It doesn't "keep happening" as this is not a natural phenomenon. At
best, that's something (certain) people keep doing. I also very much
dispute that this is an actual phenomenon at all and not just some
theoretical justification someone came up with for breaking stuff.

To recapitulate

1. It is conjectured that "install scripts" exist which create
executable temporary files in /tmp incorrectly and idiotically, that is,
by neither using one of the functions for doing so securely (mkstemp,
tmpfile) nor the kernel facility for this (the O_TMPFILE open flag) nor the
proper set of open flags (O_CREAT | O_EXCL), and by using /tmp instead of
a readily available directory that's not world-writable, eg, the home
directory of the user in question.

2. It is further conjectured that a hostile third party exists which can
observe another user running a particular broken script which downloads code to
such a file in order to replace the content of the file between the
close call of the broken script (necessary to avoid ETXTBSY errors) and
the following execve, thereby possibly leading to execution of code
provided by the hostile third party.

and because this is conjectured, it's considered ok to break working,
correct code trying to open files shared with other applications in
group-writable directories with the sticky bit set by 'silently' changing
the semantics of O_CREAT to imply O_EXCL but with a different error code
(EACCES instead of EEXIST).
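For comparison, the secure-creation interfaces named in point 1 already make the conjectured attack moot. A sketch using Python's tempfile wrapper around the mkstemp(3) pattern (the prefix and payload are illustrative):

```python
import os
import tempfile

# mkstemp picks an unpredictable name and opens it with O_CREAT | O_EXCL
# and mode 0600, so a pre-created attacker file cannot be opened by
# accident: the open simply fails and another name is tried.
fd, path = tempfile.mkstemp(prefix="installer-")
os.write(fd, b"#!/bin/sh\necho hello\n")
mode = os.stat(path).st_mode & 0o777
os.close(fd)
os.unlink(path)
```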

And the onus for coping with the change is - of course - not on the
people who want it because they believe it'll help them but on people
relying on standardized UNIX semantics which have been in place since
some time in the 1970s, ie, it's not "enable this behaviour if you think
that's a problem you'll need to deal with" but "disable this behaviour
after it has caused you grief".
Post by Richard Kettlewell
Post by Rainer Weikusat
Post by Richard Kettlewell
The attacker pre-creates the download file with 0777 permission and
then injects their own code into it; it will subsequently be run as
the victim user.
There’s a race to win here, i.e. getting a modification in before
execution completes, but it seems to be easy in practice.
There's another 'race' to win here, namely, it must somehow be known
that a broken install script is about to be executed in order to
'attack' anyone.
Easy enough to arrange in a shared environment. Find a bit of software
relevant to the attacker’s job (or the job of whoever’s unprivileged
login they’ve compromised) and ask the IT department to install it.
And a practical example of this is? And when was it last used for an
actual exploit?
John Ames
2024-05-28 14:57:07 UTC
Permalink
On Mon, 27 May 2024 09:19:30 +0100
Post by Richard Kettlewell
Easy enough to arrange in a shared environment. Find a bit of software
relevant to the attacker’s job (or the job of whoever’s unprivileged
login they’ve compromised) and ask the IT department to install it.
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
Kaz Kylheku
2024-05-28 16:22:03 UTC
Permalink
Post by John Ames
On Mon, 27 May 2024 09:19:30 +0100
Post by Richard Kettlewell
Easy enough to arrange in a shared environment. Find a bit of software
relevant to the attacker’s job (or the job of whoever’s unprivileged
login they’ve compromised) and ask the IT department to install it.
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
And when something doesn't work, all that is left is its risks
and downsides.
--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @***@mstdn.ca
John Ames
2024-05-28 17:34:38 UTC
Permalink
On Tue, 28 May 2024 07:57:07 -0700
Post by John Ames
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
(On top of which, if the concern really boils down to "filename
collisions in /tmp are exploitable," it seems like it'd be *vastly*
less hazard-prone to come up with some mechanism for making /tmp an
alias for a user-specific mount point, compared to changing behavior of
fundamental system calls!

At the very least, if /tmp isn't mounted as noexec, you should probably
be asking yourself if there's any reason it *couldn't* be.)
Rainer Weikusat
2024-05-28 20:44:48 UTC
Permalink
Post by John Ames
On Tue, 28 May 2024 07:57:07 -0700
Post by John Ames
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
(On top of which, if the concern really boils down to "filename
collisions in /tmp are exploitable,"
It's more like "Programs which wrongly assume they have exclusive
control over the filesystem name space, ie, which - due to laziness or
incompetence of their respective authors - fail to use the abundantly
available mechanisms for ensuring they exclusively control 'their' names,
can be exploited."

And the workaround for this is "Define a heuristic in order to classify
requests for non-exclusive access to file system names as likely
erroneous and force them to fail based on this".

In its most extreme form, this heuristic is "Directory is writable by
someone other than the directory owner, has the sticky bit set and the
accessed name isn't owned by the user the accessing process is running
as".

In my case, the directory in question is the top level of a subtree
used to provide an (OpenSSL-based) certificate authority. Day-to-day
operation of this CA, ie, issuing of certificates, is supposed to be
performed by a dedicated, unprivileged user allowed to create files in
the CA hierarchy by virtue of belonging to the dedicated group created
for this purpose. But this user is not supposed to be allowed to remove
files of other users from the hierarchy (the top-level directory is
marked as sticky). A certain file in this hierarchy belonging to the
user in question is used as (fcntl-based) lockfile to serialize
executions of a particular script. And the program trying to acquire
this lock tries to open lock files for exclusive access with O_RDWR |
O_CREAT. The idea behind this (not used here) is that such lockfiles can
be created on-the-fly. Assuming the file doesn't initially exist,
concurrent attempts to acquire the lock will cause the one which happens
to get there first to create the file and all subsequent ones to then
just open the already created file.

Unfortunately, this falls foul of the heuristic described above despite
not being vulnerable to the supposed exploit.
M***@dastardlyhq.com
2024-05-29 07:10:32 UTC
Permalink
On Tue, 28 May 2024 21:44:48 +0100
Post by Rainer Weikusat
Post by John Ames
On Tue, 28 May 2024 07:57:07 -0700
Post by John Ames
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
(On top of which, if the concern really boils down to "filename
collisions in /tmp are exploitable,"
It's more like "Programs which wrongly assume they have exclusive
control over the filesystem name space, ie, which - due to laziness or
incompetence of their respective authors - fail to use the abundantly
available mechanisms for ensuring they exclusively control 'their' names,
can be exploited."
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
Nicolas George
2024-05-29 15:39:08 UTC
Permalink
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
Scott Lurndal
2024-05-29 16:06:20 UTC
Permalink
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}

Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
Rainer Weikusat
2024-05-29 17:12:20 UTC
Permalink
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
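Such a scheme is short enough; a sketch (in Python; the function name and retry bound are illustrative) of the create-unguessable-names-until-success part:

```python
import errno
import os
import secrets

def make_private_dir(parent, tries=16):
    """Create a mode-0700 directory with an unguessable name under parent."""
    for _ in range(tries):
        # 12 random bytes -> 16 URL-safe characters; effectively unguessable.
        path = os.path.join(parent, secrets.token_urlsafe(12))
        try:
            os.mkdir(path, 0o700)  # fails with EEXIST if the name is taken
            return path
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
    raise RuntimeError("could not create a unique directory")
```

mkdir is atomic, so whoever wins the race owns the directory; losers just pick another name.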
M***@dastardlyhq.com
2024-05-29 17:28:23 UTC
Permalink
On Wed, 29 May 2024 18:12:20 +0100
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
Scott Lurndal
2024-05-29 17:32:06 UTC
Permalink
Post by M***@dastardlyhq.com
On Wed, 29 May 2024 18:12:20 +0100
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
$ man 1 mktemp
M***@dastardlyhq.com
2024-05-29 17:43:46 UTC
Permalink
On Wed, 29 May 2024 17:32:06 GMT
Post by Scott Lurndal
Post by M***@dastardlyhq.com
On Wed, 29 May 2024 18:12:20 +0100
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
$ man 1 mktemp
No thanks, particularly if NFS is involved anywhere.
Rainer Weikusat
2024-05-29 20:42:47 UTC
Permalink
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
BASE64-encoding a random number would also work. But I didn't claim this
would be particularly complicated, just that it needed to be done.
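A sketch of that (Python; the choice of 9 random bytes is illustrative):

```python
import base64
import os

# 9 bytes from the OS CSPRNG encode to exactly 12 base64 characters with
# no padding; the URL-safe alphabet ('-' and '_' instead of '+' and '/')
# avoids '/', which must not appear in a filename component.
name = base64.urlsafe_b64encode(os.urandom(9)).decode("ascii")
```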
M***@dastardlyhq.com
2024-05-31 07:26:41 UTC
Permalink
On Wed, 29 May 2024 21:42:47 +0100
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
BASE64-encoding a random number would also work. But I didn't claim this
would be particularly complicated, just that it needed to be done.
Always the danger of a collision if both programs seeded the sequence with
the same value, albeit very small.
Rainer Weikusat
2024-05-31 14:23:35 UTC
Permalink
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chmod 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
BASE64-encoding a random number would also work. But I didn't claim this
would be particularly complicated, just that it needed to be done.
Always the danger of a collision if both programs seeded the sequence with
the same value, albeit very small.
Regardless of the scheme that's employed, there's always a danger of
collisions, ie, of some other process having created a file with this
name first.
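For reference, the pid-plus-timestamp scheme discussed upthread looks
roughly like this in shell. The "myapp" prefix is made up for the sketch;
$$ is the shell's pid, and %N (nanoseconds) is a GNU date extension, so
this is a sketch of the idea, not a portable guarantee:

```shell
#!/bin/sh
# Pid + timestamp naming scheme, as described in the thread.
# TMPDIR falls back to /tmp; the "myapp" prefix is illustrative.
name="${TMPDIR:-/tmp}/myapp.$$.$(date +%Y%m%d%H%M%S.%N)"
echo "$name"
```

As the follow-ups note, this only makes collisions unlikely, not
impossible: two processes seeded alike, or a hostile process guessing the
scheme, can still produce the same name.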
M***@dastardlyhq.com
2024-05-31 14:39:09 UTC
Permalink
On Fri, 31 May 2024 15:23:35 +0100
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to
share
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chown 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
BASE64-encoding a random number would also work. But I didn't claim this
would be particularly complicated, just that it needed to be done.
Always the danger of a collision if both programs seeded the sequence with
the same value, albeit very small.
Regardless of the scheme that's employed, there's always a danger of
collisions, ie, of some other process having created a file with this
name first.
After a program has exited it has no claim to any temporary files, so that's
a non-issue.
Rainer Weikusat
2024-05-31 21:18:16 UTC
Permalink
Post by M***@dastardlyhq.com
On Fri, 31 May 2024 15:23:35 +0100
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to
share
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chown 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
Process id + time and date down to the microsecond in the filename usually
solves that problem. If multi threaded then throw in the thread id too.
Creating unique filenames isn't a problem so long as everyone sticks to the
scheme.
BASE64-encoding a random number would also work. But I didn't claim this
would be particularly complicated, just that it needed to be done.
Always the danger of a collision if both programs seeded the sequence with
the same value, albeit very small.
Regardless of the scheme that's employed, there's always a danger of
collisions, ie, of some other process having created a file with this
name first.
After a program has exited it has no claim to any temporary files, so that's
a non-issue.
It's an issue because a program may intentionally create files (or
symlinks) with names another program will likely try to use for a
temporary file in the near future. The same may happen for less
nefarious reasons because "shit happened", which it's wont to do.

In the general case, programs creating files in world-writable
directories need to handle the situation that another program might try
to use the same filename at the same time.
M***@dastardlyhq.com
2024-06-01 08:36:06 UTC
Permalink
On Fri, 31 May 2024 22:18:16 +0100
Post by Rainer Weikusat
Post by M***@dastardlyhq.com
After a program has exited it has no claim to any temporary files, so that's
a non-issue.
It's an issue because a program may intentionally create files (or
symlinks) with names another program will likely try to use for a
temporary file in the near future. The same may happen for less
nefarious reasons because "shit happened", which it's wont to do.
Why would a program try to use some random shit in a file instead of deleting
the file first or simply opening it with "w" in fopen() or O_TRUNC ?
Post by Rainer Weikusat
In the general case, programs creating files in world-writable
directories need to handle the situation that another program might try
to use the same filename at the same time.
Which is why you give your file a unique name. However, there is no 100%
foolproof solution to this issue, so if you're worried about it, don't write
to a world-writable directory.
Rainer Weikusat
2024-06-02 19:27:53 UTC
Permalink
Post by M***@dastardlyhq.com
On Fri, 31 May 2024 22:18:16 +0100
[...]
Post by M***@dastardlyhq.com
Post by Rainer Weikusat
In the general case, programs creating files in world-writable
directories need to handle the situation that another program might try
to use the same filename at the same time.
Which is why you give your file a unique name. However, there is no 100%
foolproof solution to this issue, so if you're worried about it, don't write
to a world-writable directory.
There is no way to give files names which are guaranteed to be unique
unless the set of all programs ever running on a given system, and how
they're going to use the file system, is known. And that's still just for
the ordinary case, ie, without hostile third parties trying to 'exploit'
something.
Keith Thompson
2024-05-29 23:22:43 UTC
Permalink
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chown 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
`mkdir -p` solves that part of the problem. But what if somebody else
has created a directory whose name happens to match your ${LOGNAME}?
It's a convention that works only if everyone follows it.

As Scott Lurndal points out, this is what mktemp(1) is for.

Also, I don't have $TMPDIR in my environment.

(As for $LOGNAME, I usually use $USER, but I see that POSIX only
specifies $LOGNAME so that's probably safer.)
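The mktemp(1) approach mentioned here can be sketched as follows; the
template name and the cleanup trap are illustrative choices, not part of
anyone's quoted suggestion:

```shell
#!/bin/sh
# mktemp -d creates a mode-0700 directory with an unguessable suffix,
# retrying internally until the name is unused. TMPDIR falls back to /tmp.
workdir=$(mktemp -d "${TMPDIR:-/tmp}/mywork.XXXXXX") || exit 1
# Remove the whole directory (and any temp files in it) when we exit.
trap 'rm -rf "$workdir"' EXIT
echo "$workdir"
```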
--
Keith Thompson (The_Other_Keith) Keith.S.Thompson+***@gmail.com
void Void(void) { Void(); } /* The recursive call of the void */
Rainer Weikusat
2024-05-30 11:53:36 UTC
Permalink
Post by Keith Thompson
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chown 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
`mkdir -p` solves that part of the problem.
That's not a problem. It's an important feature of a partial solution.
Post by Keith Thompson
But what if somebody else
has created a directory whose name happens to match your ${LOGNAME}?
It's a convention that works only if everyone follows it.
As Scott Lurndal points out, this is what mktemp(1) is for.
Or mkstemp or tmpfile. On Linux (since 3.11), there's also an O_TMPFILE
open flag which creates a nameless open file in some directory (passed as
the pathname argument to open). It's also not really complicated to do this
with plain open:

1. Generate a 'hard to guess' name
2. Try opening with O_CREAT | O_EXCL
3. Success? => return fd
4. goto 1
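The four-step loop above can be sketched in shell, where the noclobber
option (set -C) makes '>' fail if the file already exists, giving the same
effect as O_CREAT | O_EXCL; the name scheme and TMPDIR fallback are
illustrative assumptions:

```shell
#!/bin/sh
dir=${TMPDIR:-/tmp}
while :; do
    # 1. Generate a 'hard to guess' name from kernel randomness.
    name=$dir/tmp.$(od -An -N8 -tx8 /dev/urandom | tr -d ' ')
    # 2./3. Try to create it exclusively; on success, use it.
    if (set -C; : > "$name") 2>/dev/null; then
        break
    fi
    # 4. Name already taken (or spoofed): go around again.
done
echo "$name"
```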
Scott Lurndal
2024-05-30 13:55:57 UTC
Permalink
Post by Rainer Weikusat
Post by Keith Thompson
Post by Rainer Weikusat
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chown 1700 ${TMPDIR}/${LOGNAME}
Note that /tmp and /var/tmp usually have the "Sticky" mode bit
set which limits the operations that a non-owner can
perform on a file in that directory.
This solves only half of the problem: mkdir will fail if the given
filesystem name already exists. Some scheme to create unguessable names
and try using them until success would still be needed on top of that.
`mkdir -p` solves that part of the problem.
That's not a problem. It's an important feature of a partial solution.
Post by Keith Thompson
But what if somebody else
has created a directory whose name happens to match your ${LOGNAME}?
It's a convention that works only if everyone follows it.
As Scott Lurndal points out, this is what mktemp(1) is for.
Or mkstemp or tmpfile. On Linux (since 3.11), there's also an O_TMPFILE
open flag which creates a nameless open file in some directory (passed as
the pathname argument to open). It's also not really complicated to do this
1. Generate a 'hard to guess' name
2. Try opening with O_CREAT | O_EXCL
3. Success? => return fd
4. goto 1
0. Use the mkdirat(2) system call rather than mkdir(2) when
creating the subdirectory in ${TMPDIR:-/tmp}. It may
be that the library functions backing mktemp et al already do
this...
James Kuyper
2024-05-29 18:14:35 UTC
Permalink
Post by Scott Lurndal
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
$ mkdir ${TMPDIR}/${LOGNAME} && chown 1700 ${TMPDIR}/${LOGNAME}
That doesn't answer the question, it just moves it. With this
suggestion, the original question is equivalent to asking "what should
TMPDIR be set to?"
M***@dastardlyhq.com
2024-05-29 17:13:45 UTC
Permalink
On 29 May 2024 15:39:08 GMT
Post by Nicolas George
Post by M***@dastardlyhq.com
The simple answer being that no process uses /tmp unless it needs to share
data with another via files.
So where should they put their temporary files?
Create a dot directory in the user's home dir and use that. Which is what
a lot of applications have done for years. Obviously if the user has a quota
but the app wants to create gigabyte-sized files then another approach is
needed, but for small files that aren't around for long it works fine.
Nicolas George
2024-05-29 21:48:57 UTC
Permalink
Post by M***@dastardlyhq.com
Create a dot directory in the user's home dir
So you suggest to use for temporary files a place where space might be
limited, and/or slow, and/or with expensive writes?

That's rather bad design.
M***@dastardlyhq.com
2024-05-31 07:27:53 UTC
Permalink
On 29 May 2024 21:48:57 GMT
Post by Nicolas George
Post by M***@dastardlyhq.com
Create a dot directory in the user's home dir
So you suggest to use for temporary files a place where space might be
limited, and/or slow, and/or with expensive writes?
That's rather bad design.
You conveniently snipped the bit where I mentioned quotas. However you might
want to take a look in your home directory at all the . files (and some not)
and look at what various desktop apps dump there. Eg Various browsers save
megabytes of cache data.
M***@dastardlyhq.com
2024-05-31 07:43:07 UTC
Permalink
On Fri, 31 May 2024 07:27:53 -0000 (UTC)
Post by M***@dastardlyhq.com
On 29 May 2024 21:48:57 GMT
Post by Nicolas George
Post by M***@dastardlyhq.com
Create a dot directory in the user's home dir
So you suggest to use for temporary files a place where space might be
limited, and/or slow, and/or with expensive writes?
That's rather bad design.
You conveniently snipped the bit where I mentioned quotas. However you might
want to take a look in your home directory at all the . files (and some not)
and look at what various desktop apps dump there. Eg Various browsers save
megabytes of cache data.
I meant . directories obv.
Nicolas George
2024-05-31 08:59:37 UTC
Permalink
Post by M***@dastardlyhq.com
You conveniently snipped the bit where I mentioned quotas.
Indeed: it did not make your suggestion better.
M***@dastardlyhq.com
2024-05-31 10:57:47 UTC
Permalink
On 31 May 2024 08:59:37 GMT
Post by Nicolas George
Post by M***@dastardlyhq.com
You conveniently snipped the bit where I mentioned quotas.
Indeed: it did not make your suggestion better.
My suggestion is how a lot of modern applications work whether you like it or
not.
Nicolas George
2024-05-31 13:15:52 UTC
Permalink
Post by M***@dastardlyhq.com
My suggestion is how a lot of modern applications work
Not the endorsement you think it is.
John Ames
2024-05-31 15:01:20 UTC
Permalink
On 31 May 2024 13:15:52 GMT
Post by Nicolas George
Post by M***@dastardlyhq.com
My suggestion is how a lot of modern applications work
Not the endorsement you think it is.
Also not as true as all that, to begin with. While e.g. Chrome does
dump an irritating amount of junk in one's home directory, it's stuff
that has at least some reason for being persistent; it still uses /tmp
for genuinely transient stuff. Can attest, just had an errant program
fill up /tmp on me the other day, and Chromium wouldn't open 'til I
cleared it.
Richard Kettlewell
2024-05-29 07:50:02 UTC
Permalink
Post by John Ames
Post by Richard Kettlewell
Easy enough to arrange in a shared environment. Find a bit of software
relevant to the attacker’s job (or the job of whoever’s unprivileged
login they’ve compromised) and ask the IT department to install it.
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
It’s easy to think of counterexamples: collision assist in a car, for
instance, or automatic braking systems in trains.
--
https://www.greenend.org.uk/rjk/
John Ames
2024-05-29 14:44:26 UTC
Permalink
On Wed, 29 May 2024 08:50:02 +0100
Post by Richard Kettlewell
Post by John Ames
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
It’s easy to think of counterexamples: collision assist in a car, for
instance, or automatic braking systems in trains.
Minimal-assistance-in-the-absence-of-direct-supervision is a *very*
different thing from intelligent-override-of-deliberate-action.
Richard Kettlewell
2024-05-29 16:20:08 UTC
Permalink
Post by John Ames
Post by Richard Kettlewell
Post by John Ames
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
It’s easy to think of counterexamples: collision assist in a car, for
instance, or automatic braking systems in trains.
Minimal-assistance-in-the-absence-of-direct-supervision is a *very*
different thing from intelligent-override-of-deliberate-action.
I think some automated braking for goalposts is needed here.
--
https://www.greenend.org.uk/rjk/
John Ames
2024-05-29 20:05:03 UTC
Permalink
On Wed, 29 May 2024 17:20:08 +0100
Post by Richard Kettlewell
Post by John Ames
Minimal-assistance-in-the-absence-of-direct-supervision is a *very*
different thing from intelligent-override-of-deliberate-action.
I think some automated braking for goalposts is needed here.
Let me come back at my point, then: security precautions predicated on
the assumption that an Authorized Person is not operating in good faith
and due diligence concordant with their level of authorization suffer
from severe diminishing returns with each additional layer of safeguard
and may even reach a point where they're *counterproductive.*

Restricting the ability to install software to a designated group of
administrators, f'rexample, is a perfectly sensible thing to do,
especially in a large shared system. But once you've created such a
group, assessing whether a given individual is trustworthy and
knowledgeable enough to be a member of it is an organizational problem,
*not* a technical one.

Some measure of safeguarding may still be worth the bother despite this
(while Windows's "hey, you got this thing off the Interwebs, you sure
'sokay?" prompts annoy me personally, I can see the logic behind them,)
but every additional layer calls further into question what the point
of even having the group is in the first place, and whether *anyone*
should really be a member of it. If not, why did you let them in? If
so, why are you hampering their ability to exercise that authority?

This has become semi-famously an issue in medical software, for
instance. Developers of some major EMR systems, driven by regulatory
and legal CYA concerns, have put so many are-you-sure and please-
acknowledge-XYZ prompts into their core workflow that doctors (who are,
shockingly, only human) end up clicking through them machine-gun style,
and pay *less* attention to the actually critical stuff than they
would've if it weren't buried in an avalanche of white noise.

And doctors aren't software specialists; sysadmins *are.* Throw too
many safeguards in the path of a sysadmin just trying to get shit done,
and you don't end up with a sysadmin forced to question whether they
really *are* sure, or even a sysadmin powering doctor-style through a
set of nagging little obstacles they aren't thinking about; you end up
with a sysadmin who will - guaranteed - develop a process of *disabling
the entire safeguard system* for the duration of the job, and then (if
you're lucky) re-enabling it afterward.

We saw exactly that with UAC in Windows Vista; it was so bad that the #1
method for dealing with it was to disable it entirely, and people were
so burned by it that that remained true *well* into the Win10 era,
despite MS's attempts to temper it into some measure of reasonability.

So no, I don't think that a couple examples of minimal assistance in
situations where direct supervision is either absent or impaired *do*
constitute a hard-and-fast argument against temperance in security
design as a general principle. Sometimes a little extra safety is worth
it, sometimes not; and sometimes you're actually shooting yourself in
the foot.

And if I had to place my bets on where changing the behavior of
fundamental system calls in a major OS family where they've been
standard since the '70s lies on that spectrum...well, it's not gonna be
in the "worth it" zone.
Richard Kettlewell
2024-06-01 08:26:47 UTC
Permalink
Post by John Ames
Post by Richard Kettlewell
Post by John Ames
Minimal-assistance-in-the-absence-of-direct-supervision is a *very*
different thing from intelligent-override-of-deliberate-action.
I think some automated braking for goalposts is needed here.
Let me come back at my point, then: security precautions predicated on
the assumption that an Authorized Person is not operating in good faith
and due diligence concordant with their level of authorization suffer
from severe diminishing returns with each additional layer of safeguard
and may even reach a point where they're *counterproductive.*
They might. But, also, any assumption that people are always going to
(1) know how to correctly use the system in front of them and (2) get
all the details right every time, is going to be violated from time to
time.

I think it’s worth introducing systematic mitigations for those risks,
if that can be done without undue impact.
Post by John Ames
Restricting the ability to install software to a designated group of
administrators, f'rexample, is a perfectly sensible thing to do,
especially in a large shared system.
Agreed.
Post by John Ames
But once you've created such a group, assessing whether a given
individual is trustworthy and knowledgeable enough to be a member of
it is an organizational problem, *not* a technical one.
Agreed that the assessment is an organizational problem. But I’d hope
you’d agree that:

1) Even the best-run organization will get it wrong from time to time.
2) Not every organization is particularly well run.
3) Trustworthy, knowledgeable individuals still make mistakes.
Post by John Ames
Some measure of safeguarding may still be worth the bother despite this
(while Windows's "hey, you got this thing off the Interwebs, you sure
'sokay?" prompts annoy me personally, I can see the logic behind them,)
but every additional layer calls further into question what the point
of even having the group is in the first place, and whether *anyone*
should really be a member of it. If not, why did you let them in? If
so, why are you hampering their ability to exercise that authority?
This has become semi-famously an issue in medical software, for
instance. Developers of some major EMR systems, driven by regulatory
and legal CYA concerns, have put so many are-you-sure and please-
acknowledge-XYZ prompts into their core workflow that doctors (who are,
shockingly, only human) end up clicking through them machine-gun style,
and pay *less* attention to the actually critical stuff than they
would've if it weren't buried in an avalanche of white noise.
Agreed that excessive are-you-sure prompts have a high risk of training
their users to blindly click through.

But in the case of protected_regular there’s nothing to click through
and for the most part no-one ever notices any difference unless under
attack[1], so I don’t think that the are-you-sure prompts are a good
comparator.

[1] and in the few niche cases where someone does, they can think about
the tradeoffs and either disable it or work around it.
Post by John Ames
And doctors aren't software specialists; sysadmins *are.* Throw too
many safeguards in the path of a sysadmin just trying to get shit done,
and you don't end up with a sysadmin forced to question whether they
really *are* sure, or even a sysadmin powering doctor-style through a
set of nagging little obstacles they aren't thinking about; you end up
with a sysadmin who will - guaranteed - develop a process of *disabling
the entire safeguard system* for the duration of the job, and then (if
you're lucky) re-enabling it afterward.
We saw exactly that with UAC in Windows Vista; it was so bad that the #1
method for dealing with it was to disable it entirely, and people were
so burned by it that that remained true *well* into the Win10 era,
despite MS's attempts to temper it into some measure of reasonability.
So no, I don't think that a couple examples of minimal assistance in
situations where direct supervision is either absent or impaired *do*
constitute a hard-and-fast argument against temperance in security
design as a general principle. Sometimes a little extra safety is worth
it, sometimes not; and sometimes you're actually shooting yourself in
the foot.
I think collision assist is a really good model for protected_regular.

The potential collision might be due to the driver’s occasional
inattention (analogy: sysadmin writes scripts most days, but mis-handles
filename spoofing risks occasionally) or it might be someone else’s
error (analogy: insecure download+install script).

The collision assist doesn’t prevent normal driving, it only activates
when things are about to go seriously wrong[2]; for almost everybody the
same is true of protected_regular, it only blocks anything when an
attack is underway.

[2] It may impede crash testing, but presumably if you’re doing that
you’re a manufacturer or evaluator and have the tools to disable it.
--
https://www.greenend.org.uk/rjk/
Rainer Weikusat
2024-06-01 14:42:52 UTC
Permalink
Richard Kettlewell <***@invalid.invalid> writes:

[...]
Post by Richard Kettlewell
I think collision assist is a really good model for protected_regular.
The potential collision might be due to the driver’s occasional
inattention (analogy: sysadmin writes scripts most days, but mis-handles
filename spoofing risks occasionally) or it might be someone else’s
error (analogy: insecure download+install script).
The collision assist doesn’t prevent normal driving, it only activates
when things are about to go seriously wrong[2]; for almost everybody the
same is true of protected_regular, it only blocks anything when an
attack is underway.
protected_regular does prevent "normal driving" and it decidedly
'activates' in situations where nothing untoward is underway. I've
described an example of that.

In the absence of a specific example (I've asked for one but didn't receive
a reply), I also don't think that 'protection' against random stuff Google
employees can dream up is a great asset.
Richard Kettlewell
2024-06-01 17:50:58 UTC
Permalink
Post by Rainer Weikusat
Post by Richard Kettlewell
I think collision assist is a really good model for protected_regular.
The potential collision might be due to the driver’s occasional
inattention (analogy: sysadmin writes scripts most days, but mis-handles
filename spoofing risks occasionally) or it might be someone else’s
error (analogy: insecure download+install script).
The collision assist doesn’t prevent normal driving, it only activates
when things are about to go seriously wrong[2]; for almost everybody the
same is true of protected_regular, it only blocks anything when an
attack is underway.
protected_regular does prevent "normal driving" and it decidedly
'activates' in situations where nothing untoward is underway. I've
described an example of that.
You’re not in the “almost everybody”.
--
https://www.greenend.org.uk/rjk/
Rainer Weikusat
2024-06-02 19:22:55 UTC
Permalink
Post by Richard Kettlewell
Post by Rainer Weikusat
Post by Richard Kettlewell
I think collision assist is a really good model for protected_regular.
The potential collision might be due to the driver’s occasional
inattention (analogy: sysadmin writes scripts most days, but mis-handles
filename spoofing risks occasionally) or it might be someone else’s
error (analogy: insecure download+install script).
The collision assist doesn’t prevent normal driving, it only activates
when things are about to go seriously wrong[2]; for almost everybody the
same is true of protected_regular, it only blocks anything when an
attack is underway.
protected_regular does prevent "normal driving" and it decidedly
'activates' in situations where nothing untoward is underway. I've
described an example of that.
You’re not in the “almost everybody”.
There's nothing special about the technical situation I described. It's
really completely everyday "best practice" stuff: run stuff as an
unprivileged user where possible, use standard file system features to
limit access rights to anything to the necessary minimum.
John Ames
2024-06-03 15:17:13 UTC
Permalink
On Sat, 01 Jun 2024 18:50:58 +0100
Post by Richard Kettlewell
You’re not in the “almost everybody”.
He's also no true Scotsman.

Kaz Kylheku
2024-05-29 16:30:22 UTC
Permalink
Post by Richard Kettlewell
Post by John Ames
Post by Richard Kettlewell
Easy enough to arrange in a shared environment. Find a bit of software
relevant to the attacker’s job (or the job of whoever’s unprivileged
login they’ve compromised) and ask the IT department to install it.
That's a social-engineering problem, not a software-engineering one.
Trying to solve "people in positions of responsibility aren't being
responsible" by technical means has never worked and *will* never work.
It’s easy to think of counterexamples: collision assist in a car, for
instance, or automatic braking systems in trains.
Reacting to an emergency event in a millisecond isn't a social
engineering problem of someone being irresponsible. It's something
only technology can do.

Accidents happen to the responsible. You're not irresponsible on account
of trusting that the oncoming vehicle will not abruptly cross the median
line into your path in the last moment of the approach. Yet, such a
thing happens in the world.