When processing a restart request, the router sends a QUIT message to all
existing processes of the application. Then, a new shared application port is
created to ensure that new requests won't be handled by the old processes of
the application.
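A minimal standalone sketch of that sequence (illustrative names, assuming
SIGQUIT delivery and a socketpair-backed shared port; not Unit's actual code):

#include <signal.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Ask every old application process to quit, then open a fresh socket
 * pair as the new shared port.  The old processes never see the new
 * descriptors, so requests queued there reach only new processes. */
static int
app_restart(const pid_t *pids, size_t n, int new_port[2])
{
    size_t  i;

    for (i = 0; i < n; i++) {
        (void) kill(pids[i], SIGQUIT);
    }

    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, new_port) != 0) {
        perror("socketpair");
        return -1;
    }

    return 0;
}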
There are no restrictions on configuration size, and using segmented shared
memory only doubles memory usage: to be parsed on the router side, the
configuration needs to be 'plain', i.e. located in a single contiguous memory
buffer.
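A rough sketch of that flattening step (hypothetical names; assumes the
segment pointers and lengths are already known):

#include <stdlib.h>
#include <string.h>

/* Glue the configuration segments into one contiguous buffer so the
 * parser can treat the configuration as a single flat memory region.
 * The copy is why memory usage doubles: the segments and the plain
 * buffer coexist while parsing. */
static char *
conf_plain_copy(char *const *segs, const size_t *lens, size_t n,
    size_t *size)
{
    size_t  i, total, off;
    char    *buf;

    total = 0;
    for (i = 0; i < n; i++) {
        total += lens[i];
    }

    buf = malloc(total);
    if (buf == NULL) {
        return NULL;
    }

    off = 0;
    for (i = 0; i < n; i++) {
        memcpy(buf + off, segs[i], lens[i]);
        off += lens[i];
    }

    *size = total;

    return buf;
}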
The process abstraction has changed to:
setup(task, process)
start(task, process_data)
prefork(task, process, mp)
The prefork() occurs in the main process right before fork.
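In code form, the per-process-type descriptor might look roughly like this
(type and field names are illustrative, not necessarily Unit's):

typedef struct nxt_task_s     nxt_task_t;
typedef struct nxt_process_s  nxt_process_t;
typedef struct nxt_mp_s       nxt_mp_t;

/* setup() runs in the new process, start() runs once the process is
 * ready, and prefork() runs in the main process right before fork(). */
typedef struct {
    int  (*setup)(nxt_task_t *task, nxt_process_t *process);
    int  (*start)(nxt_task_t *task, void *process_data);
    int  (*prefork)(nxt_task_t *task, nxt_process_t *process,
             nxt_mp_t *mp);
} process_init_t;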
The file src/nxt_main_process.c is now completely free of process-specific
logic.
The creation of a process now supports a PROCESS_CREATED state. The
setup() function of each process can set its state to either created or
ready. If created, a MSG_PROCESS_CREATED is sent to the main process,
where external setup can be done (required for rootfs under a container).
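An illustrative setup() for such a process (names are assumptions, not
Unit's actual code):

typedef enum { PROCESS_CREATED, PROCESS_READY } process_state_t;

typedef struct {
    process_state_t  state;
    /* ... */
} process_t;

/* A process whose setup cannot finish on its own (e.g. rootfs under a
 * container) reports "created"; the main process then performs the
 * external steps before the child proceeds to start(). */
static int
app_setup(void *task, process_t *process)
{
    (void) task;

    process->state = PROCESS_CREATED;  /* triggers MSG_PROCESS_CREATED */

    return 0;
}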
The core processes (discovery, controller, and router) don't need
external setup, so they all proceed to their start() functions straight
away.
In the case of applications, the module is loaded at process setup()
time, and the module's init() function has become the start() of the
process.
The module API has changed to:
setup(task, process, conf)
start(task, data)
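Expressed as a descriptor (illustrative types only), the former init() body
simply moves into start(), while module loading happens during process
setup():

typedef struct {
    int  (*setup)(void *task, void *process, void *conf);
    int  (*start)(void *task, void *data);  /* former init() */
} app_module_t;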
As a direct benefit of the PROCESS_CREATED message, the clone(2) of
processes using pid namespaces no longer needs to create a pipe to make
the child block until the parent sets up the uid/gid mappings, nor does
it need to receive the child pid.
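As an illustration, once MSG_PROCESS_CREATED arrives, the main process can
write the mappings directly, since the message already identifies the child;
a hedged sketch (not Unit's actual code):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Write "map" into /proc/<pid>/uid_map or /proc/<pid>/gid_map, e.g.
 * write_id_map(pid, "uid_map", "0 65534 1").  The child simply waits on
 * its port instead of on a dedicated pipe. */
static int
write_id_map(pid_t pid, const char *file, const char *map)
{
    int      fd;
    char     path[64];
    ssize_t  n;

    snprintf(path, sizeof(path), "/proc/%d/%s", (int) pid, file);

    fd = open(path, O_WRONLY);
    if (fd == -1) {
        return -1;
    }

    n = write(fd, map, strlen(map));
    close(fd);

    return (n == (ssize_t) strlen(map)) ? 0 : -1;
}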
This is required due to the lack of a graceful shutdown: there is a small gap
between the release of the runtime's memory pool and the router process's
exit. Thus, a worker thread may start processing a request between these two
operations, which may result in an HTTP fields hash access and a subsequent
crash.
To simplify reproducing the issue, it makes sense to add a 2-second sleep
before exit() in nxt_runtime_exit().
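The placement would be roughly as follows (surrounding code elided;
reproduction aid only, not part of any fix):

    /* in nxt_runtime_exit(), after the memory pool release: */
    sleep(2);
    exit(0);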
One of the ways to detect Unit's startup and subsequent readiness to accept
commands relies on waiting for the control socket file to be created.
Earlier, it was unreliable due to a race condition between the client's
connect() and the daemon's listen() calls after the socket's bind() call.
Now, unix domain listening sockets are created with a nxt_listen_socket_create()
call as follows:
s = socket();
unlink("path/to/socket.tmp");
bind(s, "path/to/socket.tmp");
listen(s);
rename("path/to/socket.tmp", "path/to/socket");
This eliminates the window when the socket file is already created but nobody
is listening on it yet, which therefore prevents the condition described above.
Also, it allows reliably detecting whether the socket is in use or simply
wasn't cleaned up after the daemon stopped abruptly: a successful connection to
the socket file means the daemon has been started; otherwise, the file can be
overwritten.
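A hedged sketch of that probe (illustrative, not Unit's actual code):

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Returns 1 if a daemon is listening on the socket file, 0 if the file
 * is stale and may be replaced using the rename() scheme above. */
static int
control_socket_alive(const char *path)
{
    int                 s, rc;
    struct sockaddr_un  sa;

    s = socket(AF_UNIX, SOCK_STREAM, 0);
    if (s == -1) {
        return -1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    rc = connect(s, (struct sockaddr *) &sa, sizeof(sa));
    close(s);

    return (rc == 0) ? 1 : 0;
}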
The implementation of 'remove pid' proxying to worker engines was originally
overcomplicated. The memory pool and two engine posts (there and back again)
are optimized out and replaced with a brand-new nxt_port_post() call.
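The exact nxt_port_post() signature isn't shown here; conceptually, one
direct post of a handler plus its data to the target engine replaces the
pool allocation and the two-way trip (all names below are hypothetical):

typedef struct engine_s  engine_t;

typedef void (*post_handler_t)(void *data);

/* Hypothetical queue primitive assumed to exist: enqueue the item and
 * wake the target engine's event loop. */
void engine_queue_push(engine_t *engine, post_handler_t handler,
    void *data);

/* One direct post instead of: allocate a pool, post to the engine, post
 * the result back. */
static void
port_post(engine_t *engine, post_handler_t handler, void *data)
{
    engine_queue_push(engine, handler, data);
}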
To allow using a port from different threads, the first step is to avoid using
the port's memory pool for the temporary allocations required to send data
through the port, including but not limited to:
- buffers for data;
- send message structures;
- new mmap fd notifications.
It is still safe to use the port's memory pool for incoming buffer allocations
because the receive operation is bound to a single thread.
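A minimal sketch of the send side under that rule (hypothetical names):
temporaries come from malloc() rather than the port's pool, so any thread may
send, while only the receiving thread touches the pool.

#include <stdlib.h>
#include <string.h>

typedef struct {
    void  *mem_pool;  /* owned by the single receiving thread */
} port_t;

typedef struct {
    size_t  size;
    char    data[];
} send_msg_t;

static send_msg_t *
send_msg_create(port_t *port, const void *data, size_t size)
{
    send_msg_t  *msg;

    (void) port;  /* port->mem_pool is intentionally not used here */

    msg = malloc(sizeof(send_msg_t) + size);

    if (msg != NULL) {
        msg->size = size;
        memcpy(msg->data, data, size);
    }

    return msg;
}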
Previously, only the application of an updated configuration was
serialized, while the changes themselves could be made in parallel on
the same configuration. That resulted in inconsistent behaviour.
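A minimal sketch of the fix, assuming a single lock guarding both steps (all
names hypothetical):

#include <pthread.h>

/* Hypothetical entry points for editing and applying a configuration. */
void  conf_apply_change(void *conf, const void *change);
void  conf_apply(void *conf);

static pthread_mutex_t  conf_lock = PTHREAD_MUTEX_INITIALIZER;

/* Serialize the whole update: the change and its application happen
 * under one lock, so concurrent edits of the same configuration can no
 * longer interleave. */
static void
conf_update(void *conf, const void *change)
{
    pthread_mutex_lock(&conf_lock);

    conf_apply_change(conf, change);
    conf_apply(conf);

    pthread_mutex_unlock(&conf_lock);
}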