More ISAPI-WSGI and TurboGears

July 27, 2008 at 09:26 PM | categories: Programming | View Comments

Apparently my last post was not sufficiently detailed for some people. Well, okay, so far only one person, but I am sure I will eventually get a deluge of comments asking for clarification, so I decided to beat them to it :)

The best place to start is at the beginning. IIS exposes a programming interface called ISAPI through DLLs that can be loaded into IIS. There are two flavours of ISAPI: extensions and filters. Filters operate against every request, while extensions target particular file types. Filters are often used to implement things like URL rewriters and gzip encoders, while extensions are used to add new file type handlers like PHP and ASP. Extensions can also handle .* files, which is special IIS lingo for "send all requests to this extension".

The first problem we have is that we need to create a DLL that is loadable by IIS and exposes the expected interface. Luckily Python for Windows Extensions provides a method for doing just that. As part of the distribution, you get PyISAPI_loader.dll, which can be found in the isapi module under site packages. This DLL can be copied out into your work folder and renamed with a leading underscore, something like _tgload.dll. When added to your IIS web site and loaded, it will embed Python into IIS and load a Python file that is named like the DLL but without the underscore (in this example, _tgload.dll would load tgload.py).

You can program either ISAPI filters or extensions with this DLL; the interfaces that IIS expects do not conflict, and which one you get is dictated by how you load the DLL. In our case we are creating an extension, so we add the DLL via the application settings section of the tab that might be named one of "virtual directory", "home directory" or just plain "directory". If you haven't already done so, you will need to click the "Create" button. Click the "Configuration..." button to open the "Application Configuration" dialog. Under the mappings tab, remove all of the existing application mappings and add a new one. The executable should point to the DLL from above, which does not need to (and should not) exist in the directory that IIS would normally serve. Set the extension to ".*" in order to catch all URLs, select "All Verbs" and uncheck both "Script Engine" and "Check that file exists".

Programming an ISAPI extension in Python is essentially the same as in C because the isapi module that ships with Python for Windows Extensions does an excellent job of emulating the native interface. Microsoft's documentation applies equally well to Python as it does to C. There are a couple of minor differences which I will document here as well as a couple of tips that will hopefully keep you from pulling your hair out when something goes wrong. First, add the following lines to the top of your file:

import sys
if hasattr(sys, "isapidllhandle"):
    import win32traceutil

This will detect when the file is loaded as an ISAPI extension or filter DLL and redirect stdout and stderr to the win32traceutil output collector. You can run the trace utility by executing python -m win32traceutil in order to see any output from your DLL, including uncaught exception backtraces.

You also need to export the __ExtensionFactory__() function, which returns an object that exposes the GetExtensionVersion, HttpExtensionProc and TerminateExtension methods. These operate as described in the Microsoft documentation, with the exception that the first argument passed to each will be self.

Luckily you don't need to worry about the details of how all this works, because ISAPI-WSGI provides this object for you. It translates the ISAPI interface into the Python WSGI interface. If you haven't heard of it before, WSGI is THE standard for connecting Python applications to web servers in all their forms. At this point all of the major Python web frameworks talk WSGI, so it is a pretty good bet for being able to connect to a Python web app.
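If WSGI is new to you, the entire interface fits in a few lines: an application is just a callable that takes the request environment and a start_response callback and returns the response body. This hello_app is a hypothetical stand-alone example, not part of any framework:

```python
def hello_app(environ, start_response):
    # environ is a dict of CGI-style request variables supplied by the
    # server; start_response receives the status line and header list.
    body = "Hello from %s" % environ.get("PATH_INFO", "/")
    start_response("200 OK", [("Content-Type", "text/plain")])
    # The return value is an iterable of body chunks.
    return [body]
```

Anything that follows this convention, including the app object CherryPy exposes, can be handed to ISAPI-WSGI.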

ISAPI-WSGI provides two flavours of ISAPI interface objects: ISAPISimpleHandler and ISAPIThreadPoolHandler. ISAPISimpleHandler can only handle one request at a time, while ISAPIThreadPoolHandler does not block IIS; it offloads the handling of each URL onto a pool of threads that call back into IIS when there is data to transmit to the client. The exposed interface is identical, so go ahead and use whichever you are comfortable with.

Okay, so we are going to be returning an instance of ISAPISimpleHandler or ISAPIThreadPoolHandler from our module's __ExtensionFactory__() function. All we need to do is instantiate our choice of object and pass it our WSGI app. For TurboGears, all of our HTTP requests are handled by CherryPy, so we need to dig into how CherryPy exposes a WSGI app. That is what my previous article is supposed to explain.

All of this should work without a hitch on 32-bit Windows, but 64-bit opens up a whole big set of problems. You cannot load 32-bit DLLs into 64-bit applications, and there is no official 64-bit build of Python for Windows Extensions for every Python version: I was able to find an old build for Python 2.4 and a current (official) build for Python 2.6, but no version for Python 2.5.

Now this does not mean you are dead in the water. IIS 6 (Windows Server 2003 and Windows XP) will let you choose to run IIS in 32-bit mode, but then everything must run in 32-bit mode. If you want to do this, search for ASP.NET 1.x and Windows x64. If you are sharing the server with other apps that you want running in 64-bit, like ASP or ASP.NET 2+, you will need to find an alternative deployment method (I am going with TurboGears and IIS behind an Apache reverse proxy). On IIS 7 (Windows Server 2008 and Windows Vista) you can configure individual application pools to run in either 64-bit or 32-bit mode. I don't have access to an IIS 7 server to try it out; anyone who can should report back in the comments.

Hopefully that fills the gaps that I left in the last article. I'll leave it to someone else to distill this into newbie friendly documentation that can go on the TurboGears or ISAPI-WSGI Web sites.

Read and Post Comments

TurboGears + ISAPI-WSGI + IIS

July 10, 2008 at 12:18 PM | categories: Programming | View Comments

On June 19, 2008, Louis wrote to the isapi_wsgi-dev Google Group asking how to get CherryPy to work with ISAPI-WSGI. Since ISAPI-WSGI was how I was going to connect my TurboGears app up to IIS, I am recording what I did here for posterity.

The first caveat is that you will not get this to work with IIS in 64-bit mode unless you can get a build of PyWin32 for x64. If you are running a 64-bit Windows architecture you will need to set IIS to 32-bit mode, and it will then only run 32-bit ISAPI DLLs. On IIS 6 (Windows 2003/XP) this means the whole server can only run 32-bit DLLs. If you are using IIS 7 (Windows 2008), you will, apparently, be able to mix 64-bit and 32-bit process pools. This situation could get better with Python 2.6, since PyWin32 seems to have an x64 build for the 2.6 alphas.

Okay, on to the explanation. The first thing to do in your DLL Python file, is to include these lines:

import sys
if hasattr(sys, "isapidllhandle"):
    import win32traceutil

This checks that we are running as an ISAPI DLL and imports win32traceutil, which redirects stdout and stderr so that you can view them with the win32traceutil message collector (just run the module on its own: python -m win32traceutil). If things go wrong, at least you will have a way of knowing what is going wrong.

My __ExtensionFactory__() looks like this:

def __ExtensionFactory__():
    # Do some pre-import setup: put the app dir on the import path
    # and make it the current working directory
    import os
    import sys

    app_dir = os.path.join(os.path.dirname(__file__), '..', 'app')
    app_dir = os.path.normpath(app_dir)
    sys.path.insert(0, app_dir)
    os.chdir(app_dir)

    # import my app creator
    import wsgi_myapp

    from isapi_wsgi import ISAPIThreadPoolHandler
    return ISAPIThreadPoolHandler(wsgi_myapp.wsgiApp)

In the pre-import setup, we add the application dir to the import path and change directories so that the current working directory is where the TurboGears app is.

Then I have the module that sets up the TurboGears/CherryPy WSGI app. I started with the standard TurboGears start script and made the modifications needed to make it work properly with WSGI:

import os
from turbogears import config, update_config
import cherrypy
cherrypy.lowercase_api = True

# The stock start script would first look on the command line for a
# desired config file, but under IIS there is no command line, so load
# the config that sits next to this module ("dev.cfg" and "yourapp"
# are placeholders for your own names)
update_config(configfile=os.path.join(os.path.dirname(__file__), 'dev.cfg'),
              modulename='yourapp.config')

from yourapp.controllers import Root

cherrypy.root = Root()
cherrypy.server.start(initOnly=True, serverClass=None)

from cherrypy._cpwsgi import wsgiApp

The critical CherryPy components (for CherryPy 2.2) are:

# Grab your Root object
from yourapp.controllers import Root

# set the root object and initialize the server
cherrypy.root = Root()
cherrypy.server.start(initOnly=True, serverClass=None)

# expose the wsgiApp to be imported by the file above.
from cherrypy._cpwsgi import wsgiApp

Louis is using CherryPy 3 which changes things slightly from what I have done. CherryPy 3 is all WSGI all the time so we need to do less fiddling. Here is what he had for __ExtensionFactory__():

def __ExtensionFactory__():
        #cpwsgiapp = cherrypy.Application(HelloWorld(),'F:\python-work')
        app = cherrypy.tree.mount(HelloWorld())
        #wsgi_apps = [('/blog', blog), ('/forum', forum)]

        #server = wsgiserver.CherryPyWSGIServer(('localhost', 8080), HelloWorld(), server_name='localhost')

        return isapi_wsgi.ISAPISimpleHandler(app)
        # This ensures that any left-over threads are stopped as well.

I left in his extra comments because they show how he got to where he is. This looks like the basic start-a-server code shown in the tutorial. I think the key issues here are the cherrypy.engine calls. The CherryPy WSGI Wiki page does not mention the engine at all. I would guess that what he actually needs is:

def __ExtensionFactory__():
    app = cherrypy.tree.mount(HelloWorld())
    return isapi_wsgi.ISAPISimpleHandler(app)

That said, I have not used CherryPy 3, so I have no direct experience.

Update: Don't use autoreload with ISAPI-WSGI. It won't work and if you don't use the win32traceutil you won't know why.
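For CherryPy 2.2 / TurboGears 1.0 style configuration, turning autoreload off would typically mean a line like the following in your config file; the key name comes from CherryPy's config system, and the [global] section and file name (a dev.cfg-style file) are the usual convention:

```ini
[global]
autoreload.on = False
```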

Also, I have added more background detail on how to use ISAPI with Python and the ISAPI-WSGI package here.


The Cheesecake Service

February 09, 2007 at 05:28 PM | categories: Programming | View Comments

Part of Michał Kwiatkowski's Summer of Code project was to create the Cheesecake service. Michał explains how it all works in his announcement of the Cheesecake service.

There was a lot of complaining about the concept of Cheesecake when it first appeared on the scene, but I think the concept is really valuable. Check and see if your Python package is easy_install'able.

To all the people who continued on with Cheesecake in the face of criticism: Thanks!

I found this announcement via this post.


HOWTO Cross Compile Python 2.5

October 06, 2006 at 04:04 PM | categories: Computers, Programming | View Comments

Recently I needed to compile Python for an embedded target. I used version 2.5 though it looks like that choice may have made it harder for me. I used 2.5 because I didn't want to have to figure out how to cross compile the cElementTree extension. Unfortunately I still ended up having to figure out how to get PyXML to build for my target. Fortunately I did get everything to work with a bit of fiddling. For posterity, here are some notes about what I did and what problems I had.

I started with Klaus Reimer's ARM cross-compiling howto and made some updates required by changes between Python 2.2 and 2.5.

The changes I made are captured in an updated patch to apply against the 2.5 source tree. They disable rules that cause configure failures when cross compiling because they look for files on the target system or require a test program to be compiled and run. The other changes I added over the patch from Klaus Reimer are for a specific issue I had where the md5 and sha hash algorithms were not built, because setup.py, which builds the modules, is not cross-compile aware and detected the OpenSSL libraries on my build machine rather than the target.

You can apply the patch with the following command:

~/tmp/Python2.5$ patch -p1 < ../Python2.5_xcompile.patch

I next generated a "host" python as in Reimer's instructions:

~/tmp/Python2.5$ ./configure
~/tmp/Python2.5$ make python Parser/pgen
~/tmp/Python2.5$ mv python hostpython
~/tmp/Python2.5$ mv Parser/pgen Parser/hostpgen
~/tmp/Python2.5$ make distclean

I exported the necessary variables like CC, CXX, AR, RANLIB, LD, CFLAGS, CXXFLAGS and LDFLAGS for my target. The key is for these to point at your cross toolchain and the libraries you are going to use.
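For reference, the environment setup could look something like this; the ppc-linux tool names and the sysroot paths are assumptions for a hypothetical toolchain, not values from my build:

```shell
# Point the build at the cross toolchain (substitute your own names)
export CC=ppc-linux-gcc
export CXX=ppc-linux-g++
export AR=ppc-linux-ar
export RANLIB=ppc-linux-ranlib
export LD=ppc-linux-ld
# Point compiles and links at the target's headers and libraries
export CFLAGS="-I$HOME/target-rootfs/usr/include"
export CXXFLAGS="$CFLAGS"
export LDFLAGS="-L$HOME/target-rootfs/usr/lib"
```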

To build and "install" Python for my target I used the following commands:

~/tmp/Python2.5$ ./configure --host=ppc-linux --build=i686-pc-linux-gnu --prefix=/python
~/tmp/Python2.5$ make EXTRA_CFLAGS="$CFLAGS" HOSTPYTHON=./hostpython HOSTPGEN=./Parser/hostpgen BLDSHARED='ppc-linux-gcc -shared' CROSS_COMPILE=yes
~/tmp/Python2.5$ make install EXTRA_CFLAGS="$CFLAGS" HOSTPYTHON=./hostpython BLDSHARED='ppc-linux-gcc -shared' CROSS_COMPILE=yes prefix=/home/lambacck/tmp/dest/python

Note that I needed to use EXTRA_CFLAGS to add my target specific CFLAGS because for some reason the Python configure process does not honor the ones I provided while doing configure. The LDFLAGS variable, however, did work.

Also notice that I set the prefix in the configure step to /python and then set another prefix in the make install step. The prefix in the configure step is where Python will live on the target's file system. The prefix in the install step is where we stage all of the bits so they can be packaged up and sent to the target.

After all that I had a (mostly) functional Python environment on my target, but I still needed to get PyXML built. I downloaded the latest distribution, modified it so that expat was forced into little-endian mode, and ran the following commands:

~/tmp/PyXML-0.8.4$ LDSHARED='ppc-linux-gcc -shared' CC="${CC}" OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ~/tmp/Python-2.5/hostpython -E build
~/tmp/PyXML-0.8.4$ LDSHARED='ppc-linux-gcc -shared' CC="${CC}" OPT='-DNDEBUG -g -O3 -Wall -Wstrict-prototypes' ~/tmp/cross_python/Python-2.5/hostpython -E install --prefix=$HOME/tmp/dest/python/ --install-scripts=/home/lambacck/tmp/dest/python/bin

It looks like Distutils and the Python build process in general could use some cross compile support. I think this is currently far harder than it has to be.

Update Oct 10, 2006: I would like to point out that on the system I used, the executable that compiled hostpython was called gcc, and I then updated my path so that a gcc targeted at ppc, also called gcc, came first. In both cases, then, the compiler was invoked as gcc; it was not gcc for the hostpython and ppc-linux-gcc for the target python.

Update Dec 6, 2006: It looks like double dashes have been turned into single dashes by Wordpress. Specifically, the host, build, prefix and install-scripts arguments shown above should have double dashes.

Update Jun 30, 2010: Translation to Portuguese. Commands converted to use arm processor. Thanks Anuma.


Why I Switched to Ubuntu

October 05, 2006 at 11:59 PM | categories: Computers, Programming | View Comments

In a comment to my last post, Installing Ubuntu on a drive with an existing RAID and LVM, Jay Parlar asks, "Why the switch to Ubuntu?" Well Jay, I'm glad you asked.

To answer that, I have to go back to the beginning, back to my first experiences with Linux. If you just want the executive summary without all the detail and witty prose, just skip to the end.

In my first year of University I had heard about this cool new OS called Linux (it was already 5 years old at that point, but what did I know). I of course didn't have a clue what it was or what I was getting myself in for, but I decided the best way to understand it was to install it and take it for a spin. At the time I didn't quite get why people were giving it away for free, but after looking around at Red Hat, Slackware, Debian and a couple of others, I decided that the right one must be Debian.

Over the course of a week I proceeded to download a ridiculous amount of Debian 1.3 (bo) over dial-up. Then I went through a not very friendly install process which eventually dumped me off at a login prompt. After logging in I of course had no idea what to do, because I had never touched any kind of Unix before. I played around with it with a bit of help from a Linux tutorial for DOS users, but generally that attempt was not very successful.

My adventures with Linux picked up when I took the McMaster University IEEE Student Branch Unix crash course in my second year of University. There, in one easy session, I got bumped from my computer and forced to be the typist for the demo computer attached to the projector. The second day the course was offered (two days later) I assisted in teaching the course by being one of the people who helped the students who were lost. The next semester I co-wrote the manual and taught the course.

For a long time I distro hopped. I was taking in all the cool new graphical features and installers that were coming out at the time, and discovered that for the most part it was hard to get any new software onto the system unless it got done during the initial installation. The exception to that was Debian. It seems that Debian figured out a lot of things between when I first installed it and when I eventually allocated it permanent space on my hard drive. In particular, they invented APT so that you could easily install and remove software. Seemingly every program you could possibly want to use could be installed with the new APT system. They also seemed to have users that knew what they were doing.

Even though I mostly used Debian, I continued to use Windows and distro hop until after third year. When I went on internship, I gave up Windows and went entirely Debian. I was satisfied with Debian for a fairly long time because I was using the unstable version. I decided that I had to do that because otherwise Debian was just too far behind on all the cool toys. Eventually I found myself too often annoyed that unstable really meant unstable, or that it took a really long time for the really cool new stuff to get packaged up and included.

When I started my Masters, I decided to switch to Gentoo. The source based distribution attracted me because it promised to be more agile about accepting packages and gave the opportunity to optimize what got compiled in or didn't. I also believed that machine specific optimizations were going to make a speed difference, something I never found a lot of evidence for.

I was already starting to be a bit dissatisfied with Gentoo before I moved to Ottawa, but the new living arrangements were the nail in the coffin. Suddenly my box was headless, and I had limited time to maintain my box because only one of Kate or myself could be using a computer at a time. I was also more interested in doing things with my computer than maintaining it.

I had heard good things about Ubuntu, which is based on Debian. At work Pedro, a long time Debian user, said that he was using Ubuntu for desktop stuff and was really happy with it. Since I didn't have anything installed at work, I decided to take the Ubuntu plunge and was happy to find that it took little work to get it configured to my liking. It had the familiar Debian flavour with a new, polished presentation.

It took me a couple months to get around to it but since I already wanted to redo my box at home I decided to go with Ubuntu. Thus far I have not regretted the choice and am even excited about some of the new features in the upcoming Edgy Eft which include Gnome 2.16, Firefox 2 and not having to muck with /etc/apt/sources.list to get Vim 7.

So far I have not been disappointed with my choice to move to Ubuntu. I think it provides all the features I liked about Debian and Gentoo, but with guaranteed regular updates and a usable desktop out of the box. And yes, I am using Gnome. It doesn't get in my way, and at work, where I actually have a Linux desktop, I have enough processor and memory that I don't care that it's a bit of a hog. I mostly only use Gnome Term anyways. At home I mostly ssh into my box. I use x11vnc to export a GUI when I need it, but I have been doing most of my graphical work on Windows (Firefox does work there, ya know).

Ok, so lots of prose, maybe not so witty. Here is the promised executive summary:

  • I had a lot of experience with unmanageable distros
  • I found Debian and fell in love with its manageability
  • I wanted to be more cutting edge, especially in desktopy things so I switched to Gentoo
  • Gentoo was too much work and I wanted out
  • Ubuntu gives me the desktopy things I want while still giving me the Debian Manageability