Discussion:
Forgotten <netinet/in.h>: shows up on Android!?
Kaz Kylheku
2020-12-24 00:31:49 UTC
Permalink
I have a program which (including its networking module) builds on
a bunch of platforms: GNU/Linuxes with glibc, GNU/Linuxes with musl,
Solaris, Darwin, Cygwin.

I tried building it on Android against Android's Bionic C library.

There were errors about types like in_addr not being defined.
I'm like, what? Those are in <netinet/in.h>, since the dawn of time.

I look at the #includes in the code, and, incredibly, there is no
#include <netinet/in.h> to be seen.

Yet, somehow, not a problem on a bunch of platforms.
--
TXR Programming Language: http://nongnu.org/txr
Kenny McCormack
2020-12-24 12:31:39 UTC
Permalink
Post by Kaz Kylheku
I have a program which (including its networking module) builds on
a bunch of platforms: GNU/Linuxes with glibc, GNU/Linuxes with musl,
Solaris, Darwin, Cygwin.
I tried building it on Android against Android's Bionic C library.
There were errors about types like in_addr not being defined.
I'm like, what? Those are in <netinet/in.h>, since the dawn of time.
I look at the #includes in the code, and, incredibly, there is no
#include <netinet/in.h> to be seen.
Yet, somehow, not a problem on a bunch of platforms.
Hard to tell exactly what you're asking - and given that you are not a
newbie, I'm trying to avoid assuming you're asking a typical newbie
question.

That said, if what you're asking is: Why does it compile OK on "normal"
platforms, but fail on Android?

Then the answer is that (AFAICT), you're not supposed to include
netinet/in.h directly from user code. Instead, you include (e.g.)
sys/types.h and sys/socket.h and they do the right thing (eventually
including the netinet stuff as needed).

That the Android build environment doesn't take care of these details seems
to indicate that that build environment is broken.
--
After Using Gender Slur Against AOC, GOP Rep. Yoyo Won't Apologize 'For Loving God'.

That's so sweet...
Richard Kettlewell
2020-12-24 13:22:11 UTC
Permalink
Post by Kenny McCormack
I have a program which (including its networking module) builds on a
bunch of platforms: GNU/Linuxes with glibc, GNU/Linuxes with musl,
Solaris, Darwin, Cygwin.
I tried building it on Android against Android's Bionic C library.
There were errors about types like in_addr not being defined.
I'm like, what? Those are in <netinet/in.h>, since the dawn of time.
I look at the #includes in the code, and, incredibly, there is no
#include <netinet/in.h> to be seen.
Yet, somehow, not a problem on a bunch of platforms.
Hard to tell exactly what you're asking - and given that you are not a
newbie, I'm trying to avoid assuming you're asking a typical newbie
question.
That said, if what you're asking is: Why does it compile OK on "normal"
platforms, but fail on Android?
Then the answer is that (AFAICT), you're not supposed to include
netinet/in.h directly from user code. Instead, you include (e.g.)
sys/types.h and sys/socket.h and they do the right thing (eventually
including the netinet stuff as needed).
You have it backwards. You are supposed to include it directly, see SUS
or a man page (e.g. man inet_addr on Linux). What’s going on is that
many, but not all, platforms also include it from other headers, meaning
that code that builds on those platforms doesn’t build on Kaz’s current
platform.
--
https://www.greenend.org.uk/rjk/
Geoff Clare
2020-12-24 14:03:11 UTC
Permalink
Post by Kenny McCormack
Post by Kaz Kylheku
I have a program which (including its networking module) builds on
a bunch of platforms: GNU/Linuxes with glibc, GNU/Linuxes with musl,
Solaris, Darwin, Cygwin.
I tried building it on Android against Android's Bionic C library.
There were errors about types like in_addr not being defined.
I'm like, what? Those are in <netinet/in.h>, since the dawn of time.
I look at the #includes in the code, and, incredibly, there is no
#include <netinet/in.h> to be seen.
Yet, somehow, not a problem on a bunch of platforms.
POSIX requires <arpa/inet.h> to define in_addr, in_addr_t, and
in_port_t "as described in <netinet/in.h>". So if the code included
that header, there should not be errors reported for any of those.

If errors were reported for other things in <netinet/in.h>, then it may
be that the systems where there was no problem all do a #include of
<netinet/in.h> in <arpa/inet.h>. (This is explicitly allowed by POSIX.)
Post by Kenny McCormack
Hard to tell exactly what you're asking - and given that you are not a
newbie, I'm trying to avoid assuming you're asking a typical newbie
question.
That said, if what you're asking is: Why does it compile OK on "normal"
platforms, but fail on Android?
Then the answer is that (AFAICT), you're not supposed to include
netinet/in.h directly from user code.
If it wasn't supposed to be included directly from user code, it wouldn't
be in POSIX.
Post by Kenny McCormack
Instead, you include (e.g.)
sys/types.h and sys/socket.h and they do the right thing (eventually
including the netinet stuff as needed).
POSIX doesn't allow the symbols from <netinet/in.h> to be made visible
by <sys/types.h> or <sys/socket.h>. Only <arpa/inet.h> and <netdb.h>
are allowed to do that.
--
Geoff Clare <***@gclare.org.uk>
Kaz Kylheku
2020-12-24 17:03:53 UTC
Permalink
Post by Kenny McCormack
Post by Kaz Kylheku
I have a program which (including its networking module) builds on
a bunch of platforms: GNU/Linuxes with glibc, GNU/Linuxes with musl,
Solaris, Darwin, Cygwin.
I tried building it on Android against Android's Bionic C library.
There were errors about types like in_addr not being defined.
I'm like, what? Those are in <netinet/in.h>, since the dawn of time.
I look at the #includes in the code, and, incredibly, there is no
#include <netinet/in.h> to be seen.
Yet, somehow, not a problem on a bunch of platforms.
Hard to tell exactly what you're asking - and given that you are not a
newbie, I'm trying to avoid assuming you're asking a typical newbie
question.
That said, if what you're asking is: Why does it compile OK on "normal"
platforms, but fail on Android?
Then the answer is that (AFAICT), you're not supposed to include
netinet/in.h directly from user code. Instead, you include (e.g.)
sys/types.h and sys/socket.h and they do the right thing (eventually
including the netinet stuff as needed).
That's an interesting hypothesis, and the behavior of the platforms
where I didn't include that header corroborates it.

However, <netinet/in.h> is actually a documented POSIX header,
which refutes that part of the hypothesis which claims that it's an
internal header not to be directly included.

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/netinet_in.h.html
Post by Kenny McCormack
That the Android build environment doesn't take care of these details seems
to indicate that that build environment is broken.
Or is it that the Android headers actually conform to POSIX more
closely, and define the things that they are supposed to define, without
defining things that other headers are supposed to define?

Look at this text under the description of <netdb.h>:

"The <netdb.h> header may define the in_port_t type and the in_addr_t
type as described in <netinet/in.h>."

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/netdb.h.html

It specifically doesn't say "<netdb.h> is free to define struct in_addr
and other random stuff from <netinet/in.h>". Only specific redundancy is
allowed between headers.
--
TXR Programming Language: http://nongnu.org/txr
Philip Guenther
2020-12-25 05:38:43 UTC
Permalink
On Thursday, December 24, 2020 at 9:04:00 AM UTC-8, Kaz Kylheku wrote:
...
Post by Kaz Kylheku
"The <netdb.h> header may define the in_port_t type and the in_addr_t
type as described in <netinet/in.h>."
https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/netdb.h.html
It specifically doesn't say "<netdb.h> is free to define struct in_addr
and other random stuff from <netinet/in.h>". Only specific redundancy is
allowed between headers.
True, though for several of them the allowed redundancy is complete. If you scroll further down on that netdb.h.html page, you'll find this text:
"Inclusion of the <netdb.h> header may also make visible all symbols
from <netinet/in.h>, <sys/socket.h>, and <inttypes.h>."

However, if we go back to your original report, it's that "There were errors about types like in_addr not being defined." If they weren't defined, then _none_ of the headers that are required to provide them were included. So it's not just that, say, <netdb.h> was #included while <netinet/in.h> wasn't, because <netdb.h> is also required to provide in_addr_t.

So, if the headers of both systems are actually being used in a POSIX-conforming mode (e.g., correct compiler flags), then this should not have occurred. But it did. That suggests one or both is not actually being used in a conforming mode.

On glibc systems, <features.h> has a long comment describing which pre-processor symbols enable various modes of compliance. After going through which do what, it has this statement:
"If none of these are defined, the default is to have _SVID_SOURCE,
_BSD_SOURCE, and _POSIX_SOURCE set to one and _POSIX_C_SOURCE set to
200112L."

_SVID_SOURCE and _BSD_SOURCE enable header features which are not POSIX compliant, possibly pulling in <netinet/in.h> where POSIX wouldn't permit it.

Most likely, however, is that the application itself isn't actually POSIX compliant, using headers which are neither defined by POSIX nor provided by the application, which puts it at the mercy of the system. For example, the glibc <resolv.h> pulls in <netinet/in.h> but at least some BSD-derived versions do _not_, requiring the application to pull it before including <resolv.h>.

The easiest way to track this down is probably to take the file which fails to compile on bionic, and compile it on Linux but passing the compiler the -dD -E options instead of -c, so you can see exactly what is being defined or provided where and trace back what is pulling in <netinet/in.h>.

The 'solution' is almost guaranteed to be "just add #include <netinet/in.h> to the code" but I'm 100% on-board with understanding _how_ this non-portability occurs, so that it's easier to prevent and fix in the future.


Philip Guenther
Kaz Kylheku
2020-12-25 08:36:26 UTC
Permalink
Post by Philip Guenther
...
Post by Kaz Kylheku
"The <netdb.h> header may define the in_port_t type and the in_addr_t
type as described in <netinet/in.h>."
https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/netdb.h.html
It specifically doesn't say "<netdb.h> is free to define struct in_addr
and other random stuff from <netinet/in.h>". Only specific redundancy is
allowed between headers.
"Inclusion of the <netdb.h> header may also make visible all symbols
from <netinet/in.h>, <sys/socket.h>, and <inttypes.h>."
This kind of "may" stuff bugs me in specs; it just lets you get away
with writing nonportable code that breaks in another system.

The proper way to understand the above freedom is that just because you
didn't include <sys/socket.h>, don't think that you can write some macro
that clashes with that header, because <netdb.h> may also include it.

I.e. there is a threat that additional material may be present in the
translation unit that could cause a clash---but also, don't depend on
it.
Post by Philip Guenther
However, if we go back to your original report, it's that "There were
errors about types like in_addr not being defined." If they weren't
defined, then _none_ of the headers that are required to provide them
were included. So it's not just that, say, <netdb.h> was #included
while <netinet/in.h> wasn't, because <netdb.h> is also required to
provide in_addr_t.
I should have more correctly said "struct in_addr". It is not in_addr_t.

I think struct in_addr is just coming from <netdb.h> on those systems.

The glibc header <netdb.h> indeed starts with

#ifndef _NETDB_H
#define _NETDB_H 1

#include <features.h>

#include <netinet/in.h>

so that is one data point.
Post by Philip Guenther
Most likely, however, is that the application itself isn't actually
POSIX compliant, using headers which are neither defined by POSIX nor
provided by the application, which puts it at the mercy of the system.
For example, the glibc <resolv.h> pulls in <netinet/in.h> but at least
some BSD-derived versions do _not_, requiring the application to pull
it before including <resolv.h>.
That sort of thing I absolutely wouldn't have in this program without
a ritual like:

#if HAVE_FEATUREX /* detected by configure script */
#include <featurex.h>
#endif

which would be for some very good reason.
--
TXR Programming Language: http://nongnu.org/txr
Philip Guenther
2020-12-25 09:26:11 UTC
Permalink
Post by Kaz Kylheku
Post by Philip Guenther
...
Post by Kaz Kylheku
"The <netdb.h> header may define the in_port_t type and the in_addr_t
type as described in <netinet/in.h>."
https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/netdb.h.html
It specifically doesn't say "<netdb.h> is free to define struct in_addr
and other random stuff from <netinet/in.h>". Only specific redundancy is
allowed between headers.
"Inclusion of the <netdb.h> header may also make visible all symbols
from <netinet/in.h>, <sys/socket.h>, and <inttypes.h>."
This kind of "may" stuff bugs me in specs; it just lets you get away
with writing nonportable code that breaks in another system.
The proper way to understand the above freedom is that just because you
didn't include <sys/socket.h>, don't think that you can write some macro
that clashes with that header, because <netdb.h> may also include it.
I.e. there is a threat that additional material may be present in the
translation unit that could cause a clash---but also, don't depend on
it.
Yep, I completely agree. I get how that permission may have been necessary to 'rope in' more systems to the original POSIX goal, where a strict requirement may have had more hold-outs blocking ratification (or drop outs, that just gave up). I wonder how the austin-group "leads" would look upon a proposal to mark *all* those "may also make all symbols" clauses as obsolescent, for consideration for removal in a future revision.
Post by Kaz Kylheku
Post by Philip Guenther
However, if we go back to your original report, it's that "There were
errors about types like in_addr not being defined." If they weren't
defined, then _none_ of the headers that are required to provide them
were included. So it's not just that, say, <netdb.h> was #included
while <netinet/in.h> wasn't, because <netdb.h> is also required to
provide in_addr_t.
I should have more correctly said "struct in_addr". It is not in_addr_t.
Ah, that makes more sense, as struct in_addr is not required to be provided by <netdb.h> but may be made visible via that annoying 'may' clause just discussed.


Philip Guenther
Scott Lurndal
2020-12-25 16:30:46 UTC
Permalink
"Inclusion of the <netdb.h> header may also make visible all symbols
from <netinet/in.h>, <sys/socket.h>, and <inttypes.h>."
This kind of "may" stuff bugs me in specs; it just lets you get away
with writing nonportable code that breaks in another system.

The proper way to understand the above freedom is that just because you
didn't include <sys/socket.h>, don't think that you can write some macro
that clashes with that header, because <netdb.h> may also include it.

I.e. there is a threat that additional material may be present in the
translation unit that could cause a clash---but also, don't depend on
it.
Yep, I completely agree. I get how that permission may have been
necessary to 'rope in' more systems to the original POSIX goal, where a
strict requirement may have had more hold-outs blocking ratification (or
drop outs, that just gave up). I wonder how the austin-group "leads"
would look upon a proposal to mark *all* those "may also make all
symbols" clauses as obsolescent, for consideration for removal in a
future revision.
It's simpler than that. There are dependencies between those
header files.

Consider <sys/socket.h>. It defines struct msghdr, which contains a pointer
to a struct iovec. struct iovec is defined in <sys/uio.h>, so <sys/socket.h>
is allowed to include any or all symbols from <sys/uio.h> when it is included
by the application.

This is not leave for applications to use _other_ symbols from <sys/uio.h>
when they include <sys/socket.h>, just a statement of fact that an implementation
may choose to include <sys/uio.h> to satisfy the reference to struct iovec
in <sys/socket.h>.

Each of the posix interfaces specifically states which header files
an application must include to use the interface.
Philip Guenther
2020-12-26 05:30:36 UTC
Permalink
"Inclusion of the <netdb.h> header may also make visible all symbols
from <netinet/in.h>, <sys/socket.h>, and <inttypes.h>."
This kind of "may" stuff bugs me in specs; it just lets you get away
with writing nonportable code that breaks in another system.

The proper way to understand the above freedom is that just because you
didn't include <sys/socket.h>, don't think that you can write some macro
that clashes with that header, because <netdb.h> may also include it.

I.e. there is a threat that additional material may be present in the
translation unit that could cause a clash---but also, don't depend on
it.
Yep, I completely agree. I get how that permission may have been
necessary to 'rope in' more systems to the original POSIX goal, where a
strict requirement may have had more hold-outs blocking ratification (or
drop outs, that just gave up). I wonder how the austin-group "leads"
would look upon a proposal to mark *all* those "may also make all
symbols" clauses as obsolescent, for consideration for removal in a
future revision.
It's simpler than that. There are dependencies between those
header files.
Consider <sys/socket.h>. It defines struct msghdr, which contains a pointer
to a struct iovec. struct iovec is defined in <sys/uio.h>, so <sys/socket.h>
is allowed to include any or all symbols from <sys/uio.h> when it is included
by the application.
That was a _choice_ by the POSIX authors, not something required by the behavior/semantics of the C language. In the case of iovec, they could have specified that both <sys/socket.h> and <sys/uio.h> declare struct iovec without permitting either to include the other, just as how the C standard requires both <stddef.h> and <stdio.h> to declare size_t without permitting either to include the other. Alternatively, they could have decided that existing practice was so far on the side of <sys/socket.h> just #including <sys/uio.h> that they would just require that <sys/socket.h> expose all symbols from <sys/uio.h>.

In this case, the latter would almost certainly have been a better choice than the current mushy "may" by eliminating the trap of applications that pull in <sys/socket.h> but not <sys/uio.h> and then use, say, writev(), which works "basically everywhere" but is non-portable.

(The reverse portability problem is also possible, I suppose, if someone develops on a OS where <sys/socket.h> _doesn't_ pull in <sys/uio.h>, as they might pull in <sys/socket.h> but define a static writev() function of their own. "Works for me!" they say, but non-portable and not detected until they try. Mandating the obvious in the standard would render such a hypothetical OS non-conforming, but remove the trap.)
This is not leave for applications to use _other_ symbols from <sys/uio.h>
when they include <sys/socket.h>, just a statement of fact that an implementation
may choose to include <sys/uio.h> to satisfy the reference to struct iovec
in <sys/socket.h>.
Each of the posix interfaces specifically states which header files
an application must include to use the interface.
Yeah, Kaz and I both understand that.


Philip Guenther
Kaz Kylheku
2020-12-26 17:16:54 UTC
Permalink
Post by Philip Guenther
Post by Scott Lurndal
Each of the posix interfaces specifically states which header files that
an application must include to use the interface.
Yeah, Kaz and I both understand that.
Speaking of headers, I have a crazy little program that takes a C file,
and compiles it multiple times, each time with a different successive
#include line removed. Whenever the compile is successful, it reports
that header as one of the ones without which the file still compiles.

You can iterate on this process to remove all unnecessary headers.

So that is to say, take the output of the program as a guide to remove
the unnecessary headers. Then see if that removal made any other headers
unnecessary by running the program again.

It is most useful with that style of program layout which says that
headers don't include other headers. Each C file then lists all the
needed headers in the right order, and that can sometimes bring in
unnecessary ones over time, so that a "spring cleaning" is beneficial
once in a while.

The program confirms that <netinet/in.h> is not necessary on glibc:

$ ./elimheader.txr socket.c
socket.c:30: can remove <stdlib.h>
socket.c:31: can remove <stdarg.h>
socket.c:32: can remove <string.h>
socket.c:41: can remove "alloca.h"
socket.c:45: can remove <sys/select.h>
socket.c:47: can remove <netinet/in.h>

Could it be that I had <netinet/in.h> in there once upon a time, and it
fell victim to this program? But no, no such history.
I've been careful not to carelessly take the program's advice
w.r.t. system headers.
--
TXR Programming Language: http://nongnu.org/txr
Philip Guenther
2020-12-28 09:55:39 UTC
Permalink
On Saturday, December 26, 2020 at 9:16:59 AM UTC-8, Kaz Kylheku wrote:
...
Post by Kaz Kylheku
Speaking of headers, I have a crazy little program that takes a C file,
and compiles it multiple times, each time with a different successive
#include line removed. Whenever the compile is successful, it reports
that header as one of the ones without which the file still compiles.
I've done that many times manually, but I found I needed to use care for particular headers or else it put the portability of my code at the mercy of the strictness of my compilation environment.
Post by Kaz Kylheku
You can iterate on this process to remove all unnecessary headers.
...or at least those unnecessary for the compilation environment it's invoking against, not unnecessary per the involved standards, etc.

Though I haven't used it myself, I've seen good #include reductions from others using the tool named "include-what-you-use"
https://include-what-you-use.org/

It appears to have some modicum of "the standard says you need this header to be sure of getting that symbol, so I won't suggest you remove it if no other header you need is _required_ to pull it in, even if they might". Or at least that's what I took from talking with the people using it.


Philip Guenther

Scott Lurndal
2020-12-26 17:57:08 UTC
Permalink
Consider <sys/socket.h>. It defines struct msghdr, which contains a pointer
to a struct iovec. struct iovec is defined in <sys/uio.h>, so <sys/socket.h>
is allowed to include any or all symbols from <sys/uio.h> when it is included
by the application.
That was a _choice_ by the POSIX authors, not something required by the
behavior/semantics of the C language.
You are putting the cart before the horse.

POSIX (and other organizations such as Unix International,
X/Open, 88Open and others) never invented. We simply standardized existing practice.
There were dozens of unix variants at the time and there were long
intense discussions between the various representatives to ensure that their
implementations would be able to claim conformance. Not all implementations
that supported <sys/socket.h> implemented it the same way in those days,
but there were many existing applications that nobody wanted to break.
In particular, there were significant API differences between System V
and BSD derived implementations.
This is not leave for applications to use _other_ symbols from <sys/uio.h>
when they include <sys/socket.h>, just a statement of fact that an
implementation may choose to include <sys/uio.h> to satisfy the reference
to struct iovec in <sys/socket.h>.

Each of the posix interfaces specifically states which header files
an application must include to use the interface.
Yeah, Kaz and I both understand that.
But isn't that the root of your objections? That someone may not
read the specification and their application accidentally works on
some but not all implementations?
Kaz Kylheku
2020-12-26 19:03:32 UTC
Permalink
Post by Scott Lurndal
Consider <sys/socket.h>. It defines struct msghdr, which contains a pointer
to a struct iovec. struct iovec is defined in <sys/uio.h>, so <sys/socket.h>
is allowed to include any or all symbols from <sys/uio.h> when it is included
by the application.
That was a _choice_ by the POSIX authors, not something required by the
behavior/semantics of the C language.
You are putting the cart before the horse.
POSIX (and other organizations such as Unix International,
X/Open, 88Open and others) never invented. We simply standardized existing practice.
That was then, this is now. Committees for ISO C and IEEE POSIX fancy
themselves inventors now. (For values of "now" approaching "past thirty
years").

Example: before 1999, GCC was the only major compiler that had
variadic #define macros. Thus, of course, ISO C invented a different way
of doing it for C99.
Post by Scott Lurndal
Yeah, Kaz and I both understand that.
But isn't that the root of your objections? That someone may not
read the specification and their application accidentally works on
some but not all implementations?
Cases of this will happen even to those who read specs.

Until you port the code to where it breaks, you have no test case
for that possibility.
--
TXR Programming Language: http://nongnu.org/txr
Scott Lurndal
2020-12-26 19:20:41 UTC
Permalink
Philip Guenther
2020-12-28 09:26:30 UTC
Permalink
Post by Scott Lurndal
Consider <sys/socket.h>. It defines struct msghdr, which contains a pointer
to a struct iovec. struct iovec is defined in <sys/uio.h>, so <sys/socket.h>
is allowed to include any or all symbols from <sys/uio.h> when it is included
by the application.
That was a _choice_ by the POSIX authors, not something required by the
behavior/semantics of the C language.
You are putting the cart before the horse.
POSIX (and other organizations such as Unix International,
X/Open, 88Open and others) never invented. We simply standardized existing practice.
There were dozens of unix variants at the time and there were long
intense discussions between the various representatives to ensure that their
implementations would be able to claim conformance. Not all implementations
that supported <sys/socket.h> implemented it the same way in those days,
but there were many existing applications that nobody wanted to break.
In particular, there were significant API differences between System V
and BSD derived implementations.
If you look back in the thread you'll see I wrote this:

# I get how that permission may have been necessary to 'rope in' more systems
# to the original POSIX goal, where a strict requirement may have had more
# hold-outs blocking ratification (or drop outs, that just gave up).

So, I clearly get that when these headers and interfaces were originally added to POSIX, there was push back against stricter requirements. Your name, Kaz's, and mine are all on the "Austin Group Working Group Members" list in the PDF of the standard.

Does that push back still exist? If not, can we tighten some of these up?


Note that austin-group has done exactly that sort of thing--make the requirements more strict--multiple times in the past, such as the change to make mandatory a whole pile of previously optional feature groups. We understand that someone participating in POSIX meetings years ago said something like "my OS doesn't #include <sys/uio.h> from <sys/socket.h>"...but does that OS (a) still exist as a going concern, (b) still not do that, and (c) implement all the stuff that SUSv4 requires now? If not, then what value does that 'MAY' in the standard provide for future generations when no OSes that aim to comply with the standard pick the 'but may not' option?


Philip Guenther