Commit Graph

2513 Commits

Author SHA1 Message Date
Konstantin Pavlov
bad2c181e1 Packages: Added Fedora 39 support. 2024-02-09 14:31:36 -08:00
Konstantin Pavlov
ca1bc0625a contrib: updated njs to 0.8.2. 2024-02-09 14:31:36 -08:00
Konstantin Pavlov
8ebe04fd5d contrib: Bump libunit-wasm to 0.3.0. 2024-02-09 14:31:36 -08:00
Konstantin Pavlov
3a2687bb71 Packages: added Ubuntu 23.10 "mantic" support. 2024-02-09 14:31:36 -08:00
Alejandro Colomar
9e98670448 Configuration: Fix validation of "processes"
It's an integer, not a floating-point number.

Fixes: 68c6b67ffc ("Configuration: support for rational numbers.")
Closes: https://github.com/nginx/unit/issues/1115
Link: <https://github.com/nginx/unit/pull/1116>
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Cc: Dan Callahan <d.callahan@f5.com>
Cc: Valentin Bartenev <vbartenev@gmail.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-02-08 15:04:33 +01:00
Alejandro Colomar
46cef09f29 Configuration: Don't corrupt abstract socket names
The commit that added support for Unix sockets accepted abstract sockets
specified with '@' in the config, but we stored them internally using '\0'.

We want to support abstract sockets transparently to the user: if the
user configures unitd with '@' and then queries the current
configuration, they should see exactly what they configured.  So, this
commit avoids the transformation in the internal state file, storing
user input pristine, and only transforms the '@' in temporary strings.

This commit fixes another bug, where we try to connect to abstract
sockets with a trailing '\0' in their name due to calling
nxt_sockaddr_parse() twice on the same string.  Calling that function
only once on each copy of the string fixes that bug.

The following code was responsible for this bug: the second time it was
called, it treated these sockets as file-backed (not abstract) Unix
sockets, and so appended a '\0' to the socket name.

    $ grepc -tfd nxt_sockaddr_unix_parse . | grep -A10 @
        if (path[0] == '@') {
            path[0] = '\0';
            socklen--;
    #if !(NXT_LINUX)
            nxt_thread_log_error(NXT_LOG_ERR,
                                 "abstract unix domain sockets are not supported");
            return NULL;
    #endif
        }

        sa = nxt_sockaddr_alloc(mp, socklen, addr->length);

This bug was found thanks to an experiment with using 'const' for some
strings.
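
For reference, here is a minimal sketch (illustrative only, not Unit's
code) of how an abstract name from the config maps to a sockaddr_un.
Abstract socket names are length-delimited rather than NUL-terminated,
which is why counting a trailing '\0' in the address length (as in the
strace tests further below) targets a different socket:

    #include <stddef.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Map a config-style "@name" to an abstract socket address. */
    static socklen_t
    abstract_sockaddr(struct sockaddr_un *sun, const char *at_name)
    {
        size_t  len = strlen(at_name);      /* "@abstract" -> 9 */

        memset(sun, 0, sizeof(*sun));
        sun->sun_family = AF_UNIX;
        sun->sun_path[0] = '\0';            /* abstract namespace marker */
        memcpy(sun->sun_path + 1, at_name + 1, len - 1);

        /* 2 + 9 = 11, matching the fixed bind()/connect() calls below */
        return offsetof(struct sockaddr_un, sun_path) + len;
    }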

And here's some history:

-  9041d276fc ("nxt_sockaddr_parse() introducted.")

   This commit introduced support for abstract Unix sockets, but they
   only worked as "servers", and not as "listeners".  We corrupted the
   JSON config file, and stored a \u0000.  This also caused calling
   connect(2) with a bogus trailing null byte, which tried to connect to
   a different abstract socket.

-  d8e0768a5b ("Fixed support for abstract Unix sockets.")

   This commit (partially) fixed support for abstract Unix sockets, so
   that they also worked as listeners.  We still corrupted the JSON
   config file, and stored a \u0000.  This caused calling connect(2)
   (and now bind(2) too) with a bogus trailing null byte.

-  e2aec6686a ("Storing abstract sockets with @ internally.")

   This commit fixed the problem by which we were corrupting the config
   file, but only for "listeners", not for "servers".  (It also fixed
   the issue with the terminating '\0'.)  We completely forgot about
   "servers", and other callers of the same function.

To reproduce the problem, I used the following config:

```json
{
	"listeners": {
		"*:80": {
			"pass": "routes/u"
		},
		"unix:@abstract": {
			"pass": "routes/a"
		}
	},

	"routes": {
		"u": [{
			"action": {
				"pass": "upstreams/u"
			}
		}],
		"a": [{
			"action": {
				"return": 302,
				"location": "/i/am/not/at/home/"
			}
		}]
	},

	"upstreams": {
		"u": {
			"servers": {
				"unix:@abstract": {}
			}
		}
	}
}
```

And then check the state file:

    $ sudo cat /opt/local/nginx/unit/master/var/lib/unit/conf.json \
    | jq . \
    | grep unix;
        "unix:@abstract": {
            "unix:\u0000abstract": {}

After this patch, the state file has a '@' as expected:

    $ sudo cat /opt/local/nginx/unit/unix/var/lib/unit/conf.json \
    | jq . \
    | grep unix;
        "unix:@abstract": {
            "unix:@abstract": {}

Regarding the trailing null byte, here are some tests:

    $ sudo strace -f -e 'bind,connect' /opt/local/nginx/unit/d8e0/sbin/unitd \
    |& grep abstract;
    [pid 22406] bind(10, {sa_family=AF_UNIX, sun_path=@"abstract\0"}, 12) = 0
    [pid 22410] connect(134, {sa_family=AF_UNIX, sun_path=@"abstract\0"}, 12) = 0
    ^C
    $ sudo killall unitd
    $ sudo strace -f -e 'bind,connect' /opt/local/nginx/unit/master/sbin/unitd \
    |& grep abstract;
    [pid 22449] bind(10, {sa_family=AF_UNIX, sun_path=@"abstract"}, 11) = 0
    [pid 22453] connect(134, {sa_family=AF_UNIX, sun_path=@"abstract\0"}, 12) = -1 ECONNREFUSED (Connection refused)
    ^C
    $ sudo killall unitd
    $ sudo strace -f -e 'bind,connect' /opt/local/nginx/unit/unix/sbin/unitd \
    |& grep abstract;
    [pid 22488] bind(10, {sa_family=AF_UNIX, sun_path=@"abstract"}, 11) = 0
    [pid 22492] connect(134, {sa_family=AF_UNIX, sun_path=@"abstract"}, 11) = 0
    ^C

Fixes: 9041d276fc ("nxt_sockaddr_parse() introducted.")
Fixes: d8e0768a5b ("Fixed support for abstract Unix sockets.")
Fixes: e2aec6686a ("Storing abstract sockets with @ internally.")
Link: <https://github.com/nginx/unit/pull/1108>
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Cc: Liam Crilly <liam.crilly@nginx.com>
Cc: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-02-05 18:37:37 +01:00
Alejandro Colomar
bb376c6838 Simplify, by calling nxt_conf_get_string_dup()
Refactor.

Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Cc: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-02-05 18:37:32 +01:00
Alejandro Colomar
ecd573924f Configuration: Add nxt_conf_get_string_dup()
This function is like nxt_conf_get_string(), but creates a new copy,
so that it can be modified without corrupting the configuration string.
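
A rough sketch of the idea (hypothetical name and signature, not the
actual Unit code): fetch the string as usual, then copy it into a fresh
allocation from the memory pool so the caller gets a private, writable
buffer.

    /* Hypothetical sketch only -- see the real nxt_conf_get_string_dup(). */
    nxt_str_t *
    conf_get_string_dup_sketch(nxt_conf_value_t *value, nxt_mp_t *mp,
        nxt_str_t *str)
    {
        nxt_str_t  tmp;

        nxt_conf_get_string(value, &tmp);       /* points into the config */

        str->start = nxt_mp_nget(mp, tmp.length);
        if (nxt_slow_path(str->start == NULL)) {
            return NULL;
        }

        nxt_memcpy(str->start, tmp.start, tmp.length);
        str->length = tmp.length;

        return str;
    }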

Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Cc: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-02-05 18:37:21 +01:00
Andrew Clayton
990fbe7010 Configuration: Remove procmap validation code
With the previous commit, which introduced the use of the
NXT_CONF_VLDT_REQUIRED flag, we no longer need this separate
validation; its only purpose was to check whether the three
uidmap/gidmap settings had been provided.

Reviewed-by: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-30 01:27:25 +00:00
Andrew Clayton
eba7378d4f Configuration: Use the NXT_CONF_VLDT_REQUIRED flag for procmap
Use the NXT_CONF_VLDT_REQUIRED flag on the app_procmap members. These
three settings are required.

These are for the uidmap & gidmap settings in the config.
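
As a sketch, the corresponding validation table entries would look
roughly like this (member names taken from the config, but the exact
struct and entries in nxt_conf_validation.c may differ):

    /* Illustrative only -- not the exact entries from the tree. */
    static nxt_conf_vldt_object_t  nxt_conf_vldt_app_procmap_members[] = {
        {
            .name  = nxt_string("container"),
            .type  = NXT_CONF_VLDT_INTEGER,
            .flags = NXT_CONF_VLDT_REQUIRED,
        }, {
            .name  = nxt_string("host"),
            .type  = NXT_CONF_VLDT_INTEGER,
            .flags = NXT_CONF_VLDT_REQUIRED,
        }, {
            .name  = nxt_string("size"),
            .type  = NXT_CONF_VLDT_INTEGER,
            .flags = NXT_CONF_VLDT_REQUIRED,
        },

        NXT_CONF_VLDT_END
    };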

Suggested-by: Zhidao HONG <z.hong@f5.com>
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-30 01:27:25 +00:00
Andrew Clayton
f7c9d3a8b3 Isolation: Use an appropriate type for storing uid/gids
Andrei reported an issue on arm64 where he was seeing the following
error message when running the tests

  2024/01/17 18:32:31.109 [error] 54904#54904 "gidmap" field has an entry with "size": 1, but for unprivileged unit it must be 1.

This error message is guarded by the following if statement

  if (nxt_slow_path(m.size > 1)

It turns out size was indeed > 1; in this case it was 289356276058554369.
m.size is defined as an nxt_int_t, which on arm64 is actually 8 bytes,
but it was being printed as a signed int (4 bytes) and by chance/undefined
behaviour came out as 1.

But why is size so big? In this case it should have just been 1 with a
config of

  'gidmap': [{'container': 0, 'host': os.getegid(), 'size': 1}],

This is due to nxt_int_t being 64 bits on arm64 but using a conf type of
NXT_CONF_MAP_INT, which means in nxt_conf_map_object() we would do (using
our m.size variable as an example)

  ptr = nxt_pointer_to(data, map[i].offset);
  ...
  ptr->i = num;

Where ptr is a union pointer and is now pointing at our m.size

Next we set m.size to the value of num (which is 1 in this case), via
ptr->i where i is a member of that union of type int.

So here we are setting a 64-bit memory location (nxt_int_t on arm64)
through a 32-bit (int) union alias, which means we are only setting the
lower 4 bytes.

Whatever happens to be in the upper 4 bytes will remain, giving us our
exceptionally large value.

This is demonstrated by this program

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          int64_t num = -1; /* All 1's in two's complement */
          union {
                  int32_t i32;
                  int64_t i64;
          } *ptr;

          ptr = (void *)&num;

          ptr->i32 = 1;
          printf("num : %lu / %ld\n", num, num);
          ptr->i64 = 1;
          printf("num : %ld\n", num);

          return 0;
  }
  $ make union-32-64-issue
  cc     union-32-64-issue.c   -o union-32-64-issue
  $ ./union-32-64-issue
  num : 18446744069414584321 / -4294967295
  num : 1

However, that is not the only issue: the members of
nxt_clone_map_entry_t were specified as nxt_int_t's, which on the likes
of x86_64 is a 32-bit signed integer. However, uid/gids on Linux at
least are defined as unsigned integers, so an nxt_int_t would not be
big enough to hold all potential values.

We could make them nxt_uint_t's, but then we're back to the above union
aliasing problem.

We could just set the memory for these variables to 0 and that would
work; however, that's really just papering over the problem.

The right thing is to use a sufficiently large type to store these
things, hence the previously introduced nxt_cred_t. This is an int64_t,
which is plenty large enough.

So we switch the nxt_clone_map_entry_t structure members over to
nxt_cred_t's and use NXT_CONF_MAP_INT64 as the conf type, which then
uses the right sized union member in nxt_conf_map_object() to set these
variables.

Reported-by: Andrei Zeliankou <zelenkov@nginx.com>
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-30 01:27:25 +00:00
Andrew Clayton
9919b50aec Isolation: Add a new nxt_cred_t type
This is a generic type to represent a uid_t/gid_t on Linux when user
namespaces are in use.

Technically this only needs to be an unsigned int, but we make it an
int64_t so we can make use of the existing NXT_CONF_MAP_INT64 type.

This will be used in subsequent commits.
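
In code terms this amounts to something like the following (a sketch;
the exact header and placement may differ):

    /* Wide enough for any uid_t/gid_t, and matches NXT_CONF_MAP_INT64. */
    typedef int64_t  nxt_cred_t;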

Reviewed-by: Zhidao Hong <z.hong@f5.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-30 01:27:25 +00:00
Andrei Zeliankou
ad3645074e Tests: "if" option in access logging.
Conditional access logging was introduced here:
4c91bebb50
2024-01-29 17:54:26 +00:00
Zhidao HONG
dcbff27d9b Docs: Update changes.xml for conditional access logging 2024-01-29 21:10:31 +08:00
Zhidao HONG
4c91bebb50 HTTP: enhanced access log with conditional filtering.
This feature allows users to specify conditions that control whether the
access log should be recorded. The "if" option supports a string and
JavaScript code. If its value is empty, 0, false, null, or undefined, the
logs will not be recorded. A '!' prefix inverts the condition.

Example 1: Only log requests that sent a session cookie.

    {
        "access_log": {
            "if": "$cookie_session",
            "path": "..."
        }
    }

Example 2: Do not log health check requests.

    {
        "access_log": {
            "if": "`${uri == '/health' ? false : true}`",
            "path": "..."
        }
    }

Example 3: Only log requests when the time is before 22:00.

    {
        "access_log": {
            "if": "`${new Date().getHours() < 22}`",
            "path": "..."
        }
    }

or

    {
        "access_log": {
            "if": "!`${new Date().getHours() >= 22}`",
            "path": "..."
        }
    }

Closes: https://github.com/nginx/unit/issues/594
2024-01-29 13:48:53 +08:00
Zhidao HONG
37abe2e463 HTTP: refactored out nxt_http_request_access_log().
This is in preparation for adding conditional access logging.
No functional changes.
2024-01-29 12:10:37 +08:00
Andrei Zeliankou
6452ca111c Node.js: fixed "httpVersion" variable format
According to the Node.js documentation this variable should only include
the numbering scheme (i.e. just the version number, such as "1.1").

Thanks to @dbit-xia.

Closes: https://github.com/nginx/unit/issues/1085
2024-01-26 15:17:00 +00:00
Alejandro Colomar
ba56e50ee7 Tools: setup-unit: -hh: Add short-cut for the advanced help
I hate having to type so much just for the useful help.

Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-01-23 18:16:02 +01:00
Alejandro Colomar
034b6394a4 Tools: setup-unit: -hh: The advanced commands aren't experimental
I've been using them for a long time, and they are quite useful and
stable.  Let's say they're advanced instead of experimental.

Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-01-23 18:15:53 +01:00
Alejandro Colomar
af6833a182 Tools: setup-unit: -hh: Add missing documentation for 'restart'
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Alejandro Colomar <alx@kernel.org>
2024-01-23 18:15:26 +01:00
Andrew Clayton
02d1984c91 HTTP: Remove short read check in nxt_http_static_buf_completion()
On GH, @tonychuuy reported an issue: when using Unit's 'share' action
they would get the following error in the unit log

  2024/01/15 17:53:41 [error] 49#52 *103 file "/var/www/html/public/vendor/telescope/app.css" has changed while sending response to a client

This would happen when trying to serve files over a certain size and the
requested file would not be sent.

This is due to a somewhat bogus check in
nxt_http_static_buf_completion().

I say bogus because it's not clear what the check is trying to
accomplish and the error message is not entirely accurate either.

The check in question goes like

    n = pread(file->fd, buf, size, offset);
    return n;
    ...
    if (n != size) {
        if (n >= 0) {
            /* log file changed error and finish */

            /* >> Problem is here << */
        }

        /* log general error and finish */
    }

If the number of bytes read is not what we asked for and is > -1 (i.e.
not an error), then it says the file has changed, but really it only
checks whether the file has _shrunk_ (we can't get back _more_ bytes than
we asked for) since it was stat'd.

This is what happens

  recvfrom(22, "GET /tfile HTTP/1.1\r\nHost: local"..., 2048, 0, NULL, NULL) = 82
  openat(AT_FDCWD, "/mnt/9p/tfile", O_RDONLY|O_NONBLOCK) = 23
  newfstatat(23, "", {st_mode=S_IFREG|0644, st_size=149922, ...}, AT_EMPTY_PATH) = 0

We get a request from a client, open the requested file and stat(2) it to
get the file size.

We would then go into a pread/writev loop reading the file data and
sending it to the client until it's all been sent.

However what was happening in this case was this (showing a dummy file
of 149922 bytes)

  pread64(23, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072, 0) = 61440
  write(2, "2024/01/17 15:30:50 [error] 1849"..., 109) = 109

We wanted to read 131072 bytes but only read 61440 bytes, the above
check triggered and the file transfer was aborted and the above error
message logged.

Normally for a regular file you will only get fewer bytes than asked for
if the read call is interrupted by a signal or you're near the end of the
file.

There is, however, at least one other situation where this may happen:
when the file in question is being served from a network filesystem.

It turns out that was indeed the case here: the files were being served
over the 9P filesystem protocol. Unit was running in a Docker container
in an Ubuntu VM under Windows/WSL2, and the files were being passed
through to the VM from Windows over 9P.

Whatever the intention of this check, it is clearly causing issues in
real world scenarios.

If it was really desired to check whether the file had changed since it
was opened/stat'd, then it would require a different methodology and be a
patch for another day. But as it stands this current check does more
harm than good, so let's just remove it.

With it removed we now get for the above test file

  recvfrom(22, "GET /tfile HTTP/1.1\r\nHost: local"..., 2048, 0, NULL, NULL) = 82
  openat(AT_FDCWD, "/mnt/9p/tfile", O_RDONLY|O_NONBLOCK) = 23
  newfstatat(23, "", {st_mode=S_IFREG|0644, st_size=149922, ...}, AT_EMPTY_PATH) = 0
  mmap(NULL, 135168, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f367817b000
  pread64(23, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 131072, 0) = 61440
  pread64(23, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 18850, 61440) = 18850
  writev(22, [{iov_base="HTTP/1.1 200 OK\r\nLast-Modified: "..., iov_len=171}, {iov_base="\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., iov_len=61440}, {iov_base="\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., iov_len=18850}], 3) = 80461
  pread64(23, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 69632, 80290) = 61440
  pread64(23, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 8192, 141730) = 8192
  close(23)                   = 0
  writev(22, [{iov_base="\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., iov_len=61440}, {iov_base="\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., iov_len=8192}], 2) = 69632

So we can see we do two pread(2)s and a writev(2), then another two
pread(2)s and another writev(2), and all the file data has been read and
sent to the client.
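
As a general illustration (not Unit's code), this is the usual way a
caller tolerates short reads: keep issuing reads for the remainder,
starting from where the previous pread(2) left off:

    #include <errno.h>
    #include <unistd.h>

    /*
     * pread(2) may legitimately return fewer bytes than requested
     * (signals, network filesystems such as 9P, nearing EOF); just
     * advance the offset and ask again for the rest.
     */
    static ssize_t
    read_fully(int fd, void *buf, size_t size, off_t offset)
    {
        size_t   done = 0;
        ssize_t  n;

        while (done < size) {
            n = pread(fd, (char *) buf + done, size - done, offset + done);

            if (n > 0) {
                done += n;
                continue;
            }

            if (n == 0) {
                break;                  /* EOF: the file really is shorter */
            }

            if (errno == EINTR) {
                continue;               /* interrupted, just retry */
            }

            return -1;                  /* genuine error */
        }

        return done;
    }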

Reported-by: tonychuuy <https://github.com/tonychuuy>
Link: <https://en.wikipedia.org/wiki/9P_(protocol)>
Fixes: 08a8d1510 ("Basic support for serving static files.")
Closes: https://github.com/nginx/unit/issues/1064
Reviewed-by: Zhidao Hong <z.hong@f5.com>
Reviewed-by: Andrei Zeliankou <zelenkov@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-20 03:39:57 +00:00
Andrei Zeliankou
4e08f49549 Tests: added Ruby tests with array in header values 2024-01-16 15:59:30 +00:00
Andrei Zeliankou
a1e00b4e28 White space formatting fixes
Closes: <https://github.com/nginx/unit/pull/1062>
2024-01-16 15:37:07 +00:00
Andrei Zeliankou
5a8337933d Tests: pathlib used where appropriate
Also fixed various pylint errors and style issues.
2024-01-15 15:48:58 +00:00
Andrew Clayton
e95a91cbfa .mailmap: Add a few more entries
Fix up a mixture of different names/email addresses people have used.

You can always see the original names/addresses used by passing
--no-mailmap to the various git commands.

See gitmailmap(5)

Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-12 18:33:27 +00:00
Konstantin Pavlov
b04455f6c1 Updated security.txt
Refs: https://github.com/nginx/unit-docs/pull/78
2024-01-11 11:45:20 -05:00
Andrew Clayton
6ee5d5553f .mailmap: Fix up Taryn's email address
Map her GitHub noreply address to her @f5 one.

You can always see the original address used by passing --no-mailmap to
the various git commands.

Note: We don't always need the name field, but we're keeping this file
consistent and alphabetically ordered on first name...

Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2024-01-11 01:18:57 +00:00
Danielle De Leo
7e03a6cc6b Go: Add missing +build and go:build comments
A RHEL 8 test was failing because it uses go1.16. The old style must
be retained for backwards compat.

Fixes: 9a36de84c ("Go: Use Homebrew include paths")
Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Reviewed-by: Dylan Arbour <d.arbour@f5.com>
Signed-off-by: Danielle De Leo <d.deleo@f5.com>
2024-01-10 11:15:48 -05:00
Taryn Musgrave
263460d930 Docs: replaced the slack community links with GitHub Discussions 2024-01-10 17:12:05 +01:00
Zhidao HONG
49aee6760a HTTP: added TSTR validation flag to the rewrite option.
This is to improve error messages for rewrite configuration.
Take the configuration as an example:

  {
      "rewrite": "`${a + "
  }

Previously, when applying it the user would see this error message:

  failed to apply previous configuration

After this change, the user will see this improved error message:

  the previous configuration is invalid: "SyntaxError: Unexpected end of input in default:1" in the "rewrite" value.
2023-12-14 16:38:24 +08:00
Andrew Clayton
88854cf146 Ruby: Prevent a possible integer underflow
Coverity picked up a potential issue with the previous commit d9f5f1fb7
("Ruby: Handle response field arrays") in that a size_t could wrap
around to SIZE_MAX - 1.

This would happen if we were given an empty array of header values.
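
For illustration only (the exact expression in the Ruby module differs),
the wrap-around looks like this:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        size_t  nitems = 0;             /* empty array of header values */
        size_t  len;

        /* "items plus separators" style arithmetic with nothing to count */
        len = 2 * nitems - 2;           /* wraps around to SIZE_MAX - 1 */

        printf("%zu\n", len);           /* 18446744073709551614 on 64-bit */
        return 0;
    }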

Fixes: d9f5f1fb7 ("Ruby: Handle response field arrays")
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-12-13 03:20:25 +00:00
Andrew Clayton
d9f5f1fb74 Ruby: Handle response field arrays
@xeron on GitHub reported an issue whereby with a Rails 7.1 application
they were getting the following error

  2023/10/22 20:57:28 [error] 56#56 [unit] #8: Ruby: Wrong header entry 'value' from application
  2023/10/22 20:57:28 [error] 56#56 [unit] #8: Ruby: Failed to run ruby script

After some back and forth debugging it turns out rack was trying to send
back a header comprised of an array of values. E.g

  app = Proc.new do |env|
      ["200", {
          "Content-Type" => "text/plain",
          "X-Array-Header" => ["Item-1", "Item-2"],
      }, ["Hello World\n"]]
  end

  run app

It seems this became a possibility in rack v3.0[0]

So along with a header value type of T_STRING we need to also allow
T_ARRAY.

If we get a T_ARRAY we need to build up the header field using the given
values.

E.g

  "X-Array-Header" => ["Item-1", "", "Item-3", "Item-4"],

becomes

  X-Array-Header: Item-1; ; Item-3; Item-4

[0]: <https://github.com/rack/rack/blob/main/UPGRADE-GUIDE.md?plain=1#L26>

Reported-by: Ivan Larionov <xeron.oskom@gmail.com>
Closes: <https://github.com/nginx/unit/issues/974>
Link: <https://github.com/nginx/unit/pull/998>
Tested-by: Timo Stark <t.stark@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-12-08 13:48:33 +00:00
Andrew Clayton
846a7f4836 .mailmap: Set correct address for Danielle
Due to GH making a mess of merge commits, it used Danielle's personal
email address for the merge; it also used a generic GH address for the
committer, but we can't do anything about that. However, we can fix the
'Author' email address.

If for some reason you want to see the original names/addresses used you
can generally pass --no-mailmap to git commands.

Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-12-06 14:08:35 +00:00
Dani De Leo
f26bd644fe Merge pull request #1017 from danielledeleo/ldflags-brew
Go: Use Homebrew include paths
2023-12-05 14:20:22 -05:00
Danielle De Leo
9a36de84c8 Go: Use Homebrew include paths
Fixes nginx/unit#967
2023-12-05 13:00:20 -05:00
Sergey A. Osokin
a922f9a6f0 Update third-party components for the Java module. 2023-11-29 10:28:44 -05:00
Chris Adams
3fdf8c63a2 Fix port number in listener object for php hello world app.
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-21 14:01:40 +00:00
Andrew Clayton
73d723e56a Red Hat should always be spelled as two words.
Link: <https://www.redhat.com/en/about/brand/new-brand/details>
Link: <https://www.redhat.com/en/about/brand/standards/trademarks>
Cc: Artem Konev <artem.konev@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-21 13:50:09 +00:00
Sergey A. Osokin
6b6e3bd897 Fixed the MD5Encoder deprecation warning. 2023-11-20 10:56:41 -05:00
Andrei Zeliankou
0fc5232107 Tests: added more expected Ruby features. 2023-11-17 17:28:52 +00:00
Andrei Zeliankou
8fbe437ca6 Tests: Ruby input.rewind is no longer required.
For more information see:
42aff22f70
2023-11-17 17:28:44 +00:00
Andrei Zeliankou
1443d623d4 Node.js: ServerResponse.flushHeaders() implemented.
This closes #1006 issue on GitHub.

Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-17 17:27:31 +00:00
Andrew Clayton
919cae7ff9 PHP: Fix a possible file-pointer leak.
In nxt_php_execute() it is possible we could bail out before cleaning up
the FILE * representing the PHP script to execute.

At this point we only need to call fclose(3) on it.

We could have possibly moved the opening of this file to later in the
function, but it is probably good to bail out as early as possible if we
can't open it.

This was found by Coverity.

Fixes: bebc03c72 ("PHP: Implement better error handling.")
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-15 03:34:49 +00:00
Andrei Vasiliu
27c787f437 Fix comments for src/nxt_unit.h.
This fixes some typos and grammatical errors in the comments of
src/nxt_unit.h

Link: <https://github.com/nginx/unit/pull/889>
[ Adjust summary and write commit message as this just contains the
  fixes from the PR and not actual changes - Andrew ]
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-14 16:48:16 +00:00
David CARLIER
dfdf948f89 Define nxt_cpu_pause for ARM64.
The isb instruction is a good fit for spin loops, as it allows the CPU
to save power.
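
A sketch of the sort of definition this adds (the exact guards and form
in the tree may differ):

    /* Illustrative only. */
    #if defined(__aarch64__)
    #define nxt_cpu_pause()   __asm__ volatile ("isb" ::: "memory")
    #endif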

Reviewed-by: Andrew Clayton <a.clayton@nginx.com>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-10 02:59:49 +00:00
Andrew Clayton
5cfad9cc0b Python: Fix header field values character encoding.
On GitHub, @RomainMou reported an issue whereby HTTP header field values
were being incorrectly reported as non-ASCII by the Python .isascii()
method.

For example, using the following test application

  def application(environ, start_response):
      t = environ['HTTP_ASCIITEST']

      t = "'" + t + "'" +  " (" + str(len(t)) + ")"

      if t.isascii():
          t = t + " [ascii]"
      else:
          t = t + " [non-ascii]"

      resp = t + "\n\n"

      start_response("200 OK", [("Content-Type", "text/plain")])
      return (bytes(resp, 'latin1'))

You would see the following

  $ curl -H "ASCIITEST: $" http://localhost:8080/
  '$' (1) [non-ascii]

'$' has an ASCII code of 0x24 (36).

The initial idea was to adjust the second parameter to the
PyUnicode_New() call from 255 to 127. This unfortunately had the
opposite effect.

  $ curl -H "ASCIITEST: $" http://localhost:8080/
  '$' (1) [ascii]

Good. However...

  $ curl -H "ASCIITEST: £" http://localhost:8080/
  '£' (2) [ascii]

Not good. Let's take a closer look at this.

'£' is not in basic ASCII, but is in extended ASCII with a value of 0xA3
(163). Its UTF-8 encoding is 0xC2 0xA3, hence the length of 2 bytes
above.

  $ strace -s 256 -e sendto,recvfrom curl -H "ASCIITEST: £" http://localhost:8080/
  sendto(5, "GET / HTTP/1.1\r\nHost: localhost:8080\r\nUser-Agent: curl/8.0.1\r\nAccept: */*\r\nASCIITEST: \302\243\r\n\r\n", 92, MSG_NOSIGNAL, NULL, 0) = 92
  recvfrom(5, "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nServer: Unit/1.30.0\r\nDate: Mon, 22 May 2023 12:44:11 GMT\r\nTransfer-Encoding: chunked\r\n\r\n12\r\n'\302\243' (2) [ascii]\n\n\r\n0\r\n\r\n", 102400, 0, NULL, NULL) = 160
  '£' (2) [ascii]

So we can see curl sent it UTF-8 encoded as '\302\243', which is C octal
escaped UTF-8 for 0xC2 0xA3, and we got the same back. But it should not
be marked as ASCII.

When doing PyUnicode_New(size, 127) it sets the buffer as ASCII. So we
need to use another function and that function would appear to be

  PyUnicode_DecodeCharmap()

Which creates a Unicode object with the correct ascii/non-ascii
properties based on the character encoding.

With this function we now get

  $ curl -H "ASCIITEST: $" http://localhost:8080/
  '$' (1) [ascii]

  $ curl -H "ASCIITEST: £" http://localhost:8080/
  '£' (2) [non-ascii]

and for good measure

  $ curl -H "ASCIITEST: $ £" http://localhost:8080/
  '$ £' (4) [non-ascii]

  $ curl -H "ASCIITEST: $" -H "ASCIITEST: £" http://localhost:8080/
  '$, £' (5) [non-ascii]

PyUnicode_DecodeCharmap() does require having the full string upfront, so
we need to build up the potentially comma-separated header field values
string before invoking this function.
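
As a rough sketch (illustrative, not the exact Unit code): once the full
field value has been assembled in a temporary buffer, a single call does
the decoding.  With a NULL mapping, PyUnicode_DecodeCharmap() applies
Latin-1 decoding, and the resulting object carries the correct
ascii/non-ascii properties:

    #include <Python.h>

    /* Decode an assembled header field value into a str object. */
    static PyObject *
    field_value_to_unicode(const char *buf, Py_ssize_t len)
    {
        return PyUnicode_DecodeCharmap(buf, len, NULL, NULL);
    }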

I did not want to touch the Python 2.7 code (which may or may not even
be affected by this) so kept these changes completely isolated from
that, hence a slight duplication with the for () loop.

Python 2.7 was sunset on January 1st 2020[0], so this code will
hopefully just disappear soon anyway.

I also purposefully didn't touch other code that may well have similar
issues (such as the HTTP header field names) if we ever get issue
reports about them, we'll deal with them then.

[0]: <https://www.python.org/doc/sunset-python-2/>

Link: <https://docs.python.org/3/c-api/unicode.html>
Closes: <https://github.com/nginx/unit/issues/868>
Reported-by: RomainMou <https://github.com/RomainMou>
Tested-by: RomainMou <https://github.com/RomainMou>
Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-09 17:53:09 +00:00
Andrew Clayton
dd0c53a77d Python: Do nxt_unit_sptr_get() earlier in nxt_python_field_value().
This is a preparatory patch for fixing an issue with the encoding of
http header field values.

This patch simply moves the nxt_unit_sptr_get() to the top of the
function where we will need it in the next commit.

Signed-off-by: Andrew Clayton <a.clayton@nginx.com>
2023-11-08 21:53:46 +00:00
Andrei Zeliankou
0b85fe29f7 Tests: 8XXX used as default port range.
After the launch of the project, the testing infrastructure was in some
cases shared with the nginx project.  To avoid port overlap, a decision
was made to shift the port range for Unit tests.  That problem was
resolved a long time ago and is no longer relevant, so it is now safe to
use the 8XXX port range as the default, as it is more appropriate for
testing purposes.
2023-11-08 18:37:02 +00:00
Andrei Zeliankou
78c133d0ca Var: simplified length calculation for $status variable. 2023-11-08 17:38:07 +00:00
Andrei Zeliankou
a88e857b5b Var: $request_id variable.
This variable contains a string that is formed using random data and
can be used as a unique request identifier.
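
As a general illustration (not necessarily how Unit generates it), such
an identifier can be produced by hex-encoding a handful of random bytes:

    #include <stdio.h>
    #include <sys/random.h>
    #include <sys/types.h>

    /* Illustrative only: 16 random bytes -> 32 hex characters. */
    static int
    make_request_id(char *out, size_t outlen)
    {
        unsigned char  rnd[16];

        if (outlen < 2 * sizeof(rnd) + 1
            || getrandom(rnd, sizeof(rnd), 0) != (ssize_t) sizeof(rnd))
        {
            return -1;
        }

        for (size_t i = 0; i < sizeof(rnd); i++) {
            sprintf(out + 2 * i, "%02x", rnd[i]);
        }

        return 0;
    }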

This closes #714 issue on GitHub.
2023-11-08 17:34:59 +00:00