Ep. 27 uWSGI Decorators
10 Apr 2019
Many of you will be familiar with uWSGI and typically use it as an application or web server for your Python apps, à la Flask or Django.
But did you know that uWSGI has WAY more in store?
After spending more time with uWSGI and digging through the documentation, I've come to understand why it's called the uWSGI project...
Task queues, cron jobs, file/directory monitoring, threads, spools, locks, mules, timers & more.. All with a simple Python decorator!
The uWSGI functionality is vast and ranges from extremely low to high level, however in this guide I'm going to give you an introduction to some of the awesome decorators available in this package using Flask.
Installing uWSGI
Before installing uWSGI, I highly recommend you create a virtual environment in a new directory and activate it, then install Flask and uWSGI with `pip`:
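Something like the following, assuming Python 3 and a Unix-like shell (the directory name is just an example):

```
mkdir uwsgi-decorators && cd uwsgi-decorators
python3 -m venv venv
source venv/bin/activate
pip install flask uwsgi
```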
Flask skeleton
With our dependencies installed, we can build our Flask application skeleton in `run.py`:
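A minimal skeleton might look like this:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello world"
```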
uWSGI config file
We can start our app on the command line with the `uwsgi` command and pass it some arguments, but for simplicity, we'll create a configuration file called `app.ini`:
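A starting point might look like the following; the module and process values here are my own examples:

```ini
[uwsgi]
module = run:app
master = true
http = localhost:8080
processes = 4
threads = 2
```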
We're going to be running our app on localhost, port `8080`.
Feel free to change the number of processes/threads to better match your machine specs. We're going to come back to this file shortly.
Running the app with uWSGI
To run the application with uWSGI, simply call the `uwsgi` command and pass it the name of the configuration file:
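```
uwsgi --ini app.ini
```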
You should be able to go to http://localhost:8080/ and see `Hello world` in the browser.
To stop uWSGI, simply hit `Ctrl + C` in the terminal.
uWSGI decorators
uWSGI comes with a range of useful decorators which become available ONLY when running with uWSGI.
This means you're unable to use these decorators when running your application with the Flask development server.
So that we don't have to keep on stopping and starting our app from the command line, we can add something to our `app.ini` file to reload our app as we change it.
Note - This should only be used in development
Open up `app.ini` and add the following:
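Under the `[uwsgi]` section:

```ini
py-autoreload = 2
```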
This will watch our application for changes every 2 seconds.
To access the uWSGI decorators, we need to import them. We'll import everything for now but feel free to only import the individual functions:
We'll also import `time`:
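At the top of `run.py`:

```python
from uwsgidecorators import *
import time
```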
@timer
The first decorator we're going to look at is `timer`.
This decorator allows us to execute a function at regular intervals:
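A sketch (the function name is my own; note that timer functions receive a signal number argument):

```python
@timer(3)
def three_second_timer(signum):
    # Runs every 3 seconds for as long as uWSGI is running
    print("3 seconds have passed!")
```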
We set the interval in seconds by passing the number of seconds to the `@timer` decorator.
After running the app and waiting for 10 seconds or so, you'll see the following output in the terminal:
This function will keep on running at regular 3 second intervals for as long as your application is running.
@filemon
The `filemon` decorator will execute a function every time a file or directory is modified.
We're going to create a directory named `log` containing a file called `test.log` in the same directory as `run.py` and `app.ini`:
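```shell
mkdir log
touch log/test.log
```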
We'll create 2 decorated functions using the `filemon` decorator. One to watch a file and one to watch a directory:
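Something like the following (the first function name is my own):

```python
@filemon("log/test.log")
def file_has_been_modified(signum):
    print("test.log has been modified!")

@filemon("log")
def directory_has_been_modified(signum):
    print("The log directory has been modified!")
```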
With the app running, go ahead and edit `test.log`. You'll see output in the terminal:
Adding or removing a file/directory in the `log` directory will trigger the `directory_has_been_modified` function:
@cron
The `cron` decorator allows us to easily register cron jobs.
We'll create a cron job to run every minute:
And another cron job to run at 5:30 every day:
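The `@cron` decorator takes 5 arguments — minute, hour, day, month, weekday — where `-1` acts as a wildcard. A sketch of both jobs (function names are my own):

```python
@cron(-1, -1, -1, -1, -1)
def every_minute(signum):
    # Wildcards everywhere: runs once a minute
    print("Another minute has passed!")

@cron(30, 17, -1, -1, -1)
def half_past_five(signum):
    # Runs at 17:30 every day
    print("It's 17:30!")
```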
Fortunately for us, it's just turned 17:30 and our cron job has just run:
@mulefunc
Mules can be considered a primitive task queue, allowing us to offload work to a mule to be executed in the background while our application returns a response.
Before we can use the `mulefunc` decorator, we need to declare a mule in `app.ini`:
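For example, to start a single mule process:

```ini
mules = 1
```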
There are lots of interesting things we can do with mules; however, in this example we're just going to create one. Read more about mules here.
To create a mule function, decorate a function with the `@mulefunc` decorator, passing any arguments into the function itself. We'll create a simple `mulefunc` that takes an integer as an argument:
We'll also create a new route in our app to trigger the mule:
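A sketch, with the function, route, and output message as my own examples:

```python
from flask import request

@mulefunc
def add_up_to(n):
    # Offloaded to the mule; runs in the background
    total = sum(range(n))
    print(f"The sum of 0 to {n} is {total}")

@app.route("/mule")
def mule():
    n = int(request.args.get("n", 10))
    add_up_to(n)
    return "Mule"
```

For example, http://localhost:8080/mule?n=100000.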
We can trigger the `mulefunc` by sending a query string in the URL with a value for `n`:
Sending a request to this URL will return the text `Mule` immediately, whilst the function is executed in the background.
In the terminal, you'll see:
@spool
The uWSGI spooler is a task queue/manager that works like many other popular task queue systems, allowing us to return a response whilst a task is offloaded to be processed in the background.
A spooler works by defining a directory that "spool files" are written to. Spool functions are then run when the spooler finds a file in the directory.
As with mules, there's lots of advanced things you can do with spoolers and we're only going to cover the basics. To learn more, read the uWSGI spooler docs.
Spooling has a few advantages over mules, including:

- Spooled tasks will be restarted/retried if uWSGI crashes or is stopped, as task information is stored in files
- Spooled tasks are not limited to a 64 KB parameter size
- Spoolers generally offer more flexibility and configuration
To work with the spooler, we first need to create the spool directory. We'll call it `tasks`:
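```shell
mkdir tasks
```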
We then need to tell uWSGI about our spool directory. We can do so in our `app.ini` file:
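Something like this (uWSGI resolves the path relative to where you launch it, so an absolute path may be safer):

```ini
spooler = tasks
```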
With the directory created and configuration file updated, we can use the `@spool` decorator.
We'll start by creating a basic spool that doesn't require any arguments when called:
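A sketch:

```python
@spool
def a_basic_spool_function(args):
    # args is a dict of whatever was passed to .spool()
    print("Spooling!")
    print(args)
```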
You'll notice we've passed `args` to the function; we'll cover that shortly.
We'll create a new route to trigger the spooler:
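The route name and return text here are my own:

```python
@app.route("/spool")
def spool():
    # Queues the task and returns immediately
    a_basic_spool_function.spool()
    return "Spooled a task"
```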
You'll notice we're calling `a_basic_spool_function.spool()` without passing in any arguments.
Go to http://localhost:8080/spool to trigger the spooler and keep an eye on the terminal.
The value for args
:
Information about the request:
The spool function output:
To pass arguments to a `@spool` function, we can add `pass_arguments=True` and pass in any values supported by the `pickle` module.
Let's create another function that takes an `int` as an argument. We'll use the `/spool` route to trigger it:
Trigger the function by heading to `/spool`. You'll see in the terminal:
The route returned an immediate response whilst our function was executed in the background.
We can in fact pass any kind of Python object to a spool function, providing they can be pickled:
We can trigger the spooled function with:
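For example, passing a dict and a list (hypothetical names again):

```python
@spool(pass_arguments=True)
def show_payload(payload, numbers):
    print(payload["name"], numbers)

@app.route("/spool")
def spool():
    show_payload.spool({"name": "uWSGI"}, [1, 2, 3])
    return "Spooled a task"
```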
Accessing the route will print the following:
When passing arguments to a spooled function, some arguments have a special meaning and must be bytes:

- `spooler`: specify the absolute path of the spooler that has to manage this task
- `at`: unix time at which the task must be executed (the task will not be run until the `at` time has passed)
- `priority`: this will be the subdirectory in the spooler directory in which the task will be placed; you can use this trick to give a good-enough prioritization to tasks (for a better approach, use multiple spoolers)
Spooler priority
One of the nice things about spoolers is the ability to set a simple priority queue, using numbers to indicate the priority.
Providing a `priority` argument will give order to the spooler parsing, creating numbered directories in your spool directory, each containing their respective tasks.
To set up a priority queue, we need to add a couple more options to our uWSGI `ini` config:
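In `app.ini`:

```ini
spooler-ordered = true
spooler-frequency = 3
```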
Priority queues only work when `spooler-ordered` is enabled, allowing the spooler to scan the directories in alphabetical order (the spooler will do its best to maintain the priority order).
`spooler-frequency` isn't required, but sets how often (in seconds) the spooler scans for tasks that haven't yet been executed.
For now, we'll just create a simple `spool` function and call it from the `/spool` route:
In our route, we'll call the `spool` function 4 times, setting a priority for each call:
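A sketch of both pieces (names are my own; note the bytes values for the special `priority` key):

```python
@spool
def prioritised_task(args):
    print(f"Running task: {args}")

@app.route("/spool")
def spool():
    # priority is a special key and must be bytes
    for p in (b"1", b"2", b"3", b"4"):
        prioritised_task.spool(priority=p)
    return "Spooled 4 tasks"
```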
You'll notice we've provided the special `priority` parameter, with a bytes version of the priority we wish to assign to each task, in descending priority.
When we request this route, the `spool` functions will be called and a directory will be created for each level of priority within the `tasks` directory (the spool directory we created earlier).
The spooler will do its best to run the spooled functions in order of priority, but it can't be guaranteed (from my initial testing)
Accessing this route, we see the following output:
Not quite in the order of priority, but I'm sure there's something I'm missing (this was just after some initial testing).
Another area where I'm having mixed results is the spool function return values.
Looking through the documentation, we have the option to return 3 values:

- `uwsgi.SPOOL_OK`: the task has been completed and the spool file will be removed
- `uwsgi.SPOOL_RETRY`: something went wrong and the task will be retried in the next spooler iteration
- `uwsgi.SPOOL_IGNORE`: ignore the task
My initial testing and thoughts:
My idea was to call each spool function, expecting the spool file for `spool_retry` to remain in the spool directory:
However, after missing something in the documentation, I found out that we can use the `@spoolraw` decorator to control the return values of a spool!
@spoolraw
To control the return value of a spool, we can use the `spoolraw` decorator, returning 3 possible values:

- `uwsgi.SPOOL_OK`: the task has been completed and the spool file will be removed
- `uwsgi.SPOOL_RETRY`: something went wrong and the task will be retried in the next spooler iteration
- `uwsgi.SPOOL_IGNORE`: ignore the task. If multiple languages are loaded in the instance, all of them will fight to manage the task; this return value allows you to skip the task in specific languages
Let's re-run the same tests as above using the `spoolraw` decorator:
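Something like this; note that `import uwsgi` (for the `SPOOL_*` constants) only works when running under uWSGI:

```python
import uwsgi

@spoolraw
def spool_ok(args):
    print("spool_ok called")
    return uwsgi.SPOOL_OK

@spoolraw
def spool_retry(args):
    print("spool_retry called")
    return uwsgi.SPOOL_RETRY

@spoolraw
def spool_ignore(args):
    print("spool_ignore called")
    return uwsgi.SPOOL_IGNORE
```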
Calling the functions:
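From a route, a sketch might be:

```python
@app.route("/spool")
def spool():
    spool_ok.spool()
    spool_retry.spool()
    spool_ignore.spool()
    return "Spooled 3 tasks"
```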
And now, as expected:
- `spool_ok`: ran successfully and the spool file was removed
- `spool_retry`: ran but returned the retry signal. The spool file was kept and the task retried every 3 seconds (the `spooler-frequency` we set in the `ini` file)
- `spool_ignore`: was ignored and the spool file remained, producing the following output every 3 seconds:
Which makes sense as we told uWSGI to ignore it.
These options make it easy for us to retry a task if a condition isn't met or there's an exception in the function, for example:
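A sketch; `risky_operation` is a hypothetical function standing in for the real work:

```python
@spoolraw
def careful_task(args):
    try:
        risky_operation()  # hypothetical: may raise an exception
        return uwsgi.SPOOL_OK
    except Exception:
        # Try again on the next spooler iteration
        return uwsgi.SPOOL_RETRY
```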
@spoolforever
Need a function to run forever? Use the `@spoolforever` decorator.
Calling it from our route:
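A sketch (the route is my own; `spoolforever` keeps re-spooling the task after each run):

```python
@spoolforever
def forever_and_ever(args):
    print("And again...")
    time.sleep(1)

@app.route("/forever")
def forever():
    forever_and_ever.spool()
    return "Spooled a never-ending task"
```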
The `forever_and_ever` function will now run forever, even after stopping and starting the application.
If you need to remove a `spoolforever` task, you'll have to delete the spool file found in the spool directory.
@thread
The `thread` decorator can be used to execute a function in a separate thread.
To enable threading, you must add it as an option in your `ini` file or pass it to `uwsgi` as an argument on the CLI:
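For example, in the `ini` file:

```ini
enable-threads = true
```

or on the command line:

```
uwsgi --ini app.ini --enable-threads
```

Setting a `threads` value in the config also enables threading.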
If you're following along, we already set a value for `threads` in `app.ini`.
Let's decorate 3 functions with the `@thread` decorator and call them from the `index` route:
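A sketch, with function names of my own; each runs in its own thread so the printed numbers interleave:

```python
@thread
def first():
    for _ in range(3):
        time.sleep(1)
        print("1")

@thread
def second():
    for _ in range(3):
        time.sleep(1)
        print("2")

@thread
def third():
    for _ in range(3):
        time.sleep(1)
        print("3")

@app.route("/")
def index():
    first()
    second()
    third()
    return "Hello world"
```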
Upon requesting the route, we see the following output:
@postfork
The `postfork` decorator allows us to decorate functions that will be executed when uWSGI forks the application.
From the uWSGI docs:
"uWSGI is a preforking (or “fork-abusing”) server, so you might need to execute a fixup task after each fork(). The postfork decorator is just the ticket. You can declare multiple postfork tasks. Each decorated function will be executed in sequence after each fork()."
For example, you may want to reconnect to a database after forking:
Any functions decorated with `@postfork` will be executed sequentially. Let's add another one:
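A sketch of two post-fork tasks (the bodies just print; in practice you'd open a fresh database connection per worker, for example):

```python
@postfork
def reconnect_to_database():
    # e.g. open a new connection in each worker process
    print("Reconnecting to the database...")

@postfork
def another_postfork_task():
    print("Running another task after forking")
```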
When we first start up our app, uWSGI will fork based on how many `processes` we set in the `ini` file:
@lock
The `lock` decorator will execute a function in a fully locked environment.
From the uWSGI docs:
"This decorator will execute a function in fully locked environment, making it impossible for other workers or threads (or the master, if you’re foolish or brave enough) to run it simultaneously."
To create a locked function, simply decorate it with `@lock`:
We'll call it from the `index` route:
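Putting the two together (the function name and message are my own):

```python
@lock
def locked_function():
    # Only one worker/thread may run this at a time
    print("Locked!")

@app.route("/")
def index():
    locked_function()
    return "Hello world"
```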
Requesting the `index` route, we see:
To better illustrate the `@lock` decorator, we can combine it with the `@timer` decorator:
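Something like:

```python
@lock
@timer(2)
def locked_function(signum):
    print("Concurrency is for fools!")
```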
`locked_function`, as expected, will run every 2 seconds and print `Concurrency is for fools!` to the terminal:
However, if we modify the function to include a delay:
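For example:

```python
@lock
@timer(2)
def locked_function(signum):
    print("Concurrency is for fools!")
    time.sleep(4)  # hold the lock for 4 seconds
```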
The `timer` will attempt to run `locked_function` every 2 seconds, but due to the `@lock` decorator and the added 4 second delay, the function cannot run again and instead has to wait for the delay to finish.
We can see this in the terminal output:
If you have a function that must not be called by any other process, `@lock` is your friend.
Other decorators
Some other interesting decorators, outside the scope of this guide, include:

- `@harakiri(n)`: kill a worker if the given call is taking too long
- `@rpc('x')`: used for remotely calling functions using the uWSGI RPC stack
- `@signal(n)`: registers signals for the uWSGI signal framework
Be sure to read the uWSGI decorator docs here
Wrapping up
This guide was just to introduce you to some of the useful decorators available in uWSGI and I highly recommend you check out the documentation, have a play around and do some testing for yourself.
Also, you may want to check out this awesome package/repo for working with many of the uWSGI tasks:
Last modified: 10 Apr 2019. Reference: https://pythonise.com/series/learning-flask/exploring-uwsgi-decorators