SQLAlchemy, CherryPy and iSeries

Recently I have been working on an internal website that uses CherryPy and pyodbc to access an iSeries server. I went through the steps of creating a tool and an engine plugin in order to keep persistent connections rather than connecting on each request. Although it works just fine, I found myself wrapping the pyodbc cursor/connection to add things like returning a list of dictionaries (my own row-like structure). I knew there had to be a better way.

As I got further into the project, I began looking at using several other databases with this application, or set of applications (the website hosts services for various programs to access ERP data). Some of the programs that connect use SQLite to cache data locally in the event that the server is not available. SQLite has its own row type that allows accessing columns with a row['COL'] syntax. Since I may also be accessing a Postgres database, why not use the features of SQLAlchemy to unify the CherryPy side (pooling, row objects, etc.) so my iSeries access works like the other databases I'm using?

SQLAlchemy Dialect

The first thing I noted was that there was no dialect I could use to simply pass a SQL statement through to pyodbc (or at least I couldn't find one). I did find ibm-db, but that is Python 2 only. I currently don't need the expression language or ORM included with SQLAlchemy (although I may later), so I began writing a very simple dialect that I could expand on if needed, but that for now would just pass statements through to pyodbc. It allows the create_engine syntax, so a SQLAlchemy engine manages the connection pool and executing a SQL statement returns a SQLAlchemy result set object.

I ended up creating this very short package, using the provided dialects as examples of how to structure it. It took three simple bits of code, which I placed into a package called sqlalchemy_iseries ...

base.py

""" An iSeries dialect. """
from sqlalchemy.engine import default

class ISeriesDialect(default.DefaultDialect):
    """ Specific dialect for iSeries """
    name = 'iseries'
    max_identifier_length = 128
    schema_name = "qgpl"

pyodbc.py

""" An iSeries + pyodbc dialect. """
from .base import ISeriesDialect
from sqlalchemy.connectors.pyodbc import PyODBCConnector


class ISeriesDialect_pyodbc(PyODBCConnector, ISeriesDialect):
    pyodbc_driver_name = 'iSeries Access ODBC Driver'

    def _check_unicode_returns(self, connection):
        return False

dialect = ISeriesDialect_pyodbc

__init__.py

""" sqlalchemy-iseries dialect (using pyodbc) """
__version__ = '0.1'

from sqlalchemy.dialects import registry
from . import pyodbc

dialect = pyodbc.dialect

registry.register("iseries.pyodbc", "sqlalchemy_iseries", "dialect")
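With the registry entry in place, SQLAlchemy will resolve the "iseries+pyodbc" scheme in a connection URL. For reference, the URL shape the dialect expects looks like this (all the values below are placeholders, and actually connecting requires the iSeries Access ODBC driver, so this only builds the string):

# Build the connection URL the dialect expects; the System= query
# parameter is passed through to the pyodbc connection string.
# user/pwd/host/db are placeholder values.
url = ("iseries+pyodbc://{user}:{pwd}@{host}/{db}?System={host}"
       .format(user='myuser', pwd='mypass', host='myhost', db='mydb'))
# sqlalchemy.create_engine(url) would then use the registered dialect.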

CherryPy Engine Plugin and Tool

Having a package to use SQLAlchemy to connect to the iSeries was the first step. Now that I could use a SQLAlchemy engine like a pyodbc cursor, I needed to recreate my CherryPy plugin and tool.

database.py

import cherrypy
from cherrypy import request
from cherrypy.process.plugins import SimplePlugin
import sqlalchemy


class SA_iSeriesPlugin(SimplePlugin):
    """ SQLAlchemy iSeries plugin. """
    def start(self):
        """ Create engine on startup event. """
        self.sa_engine = sqlalchemy.create_engine(
                "iseries+pyodbc://{user}:{pwd}@{host}/{db}?System={host}"
                .format(
                    user='myuser', pwd='mypass',
                    host='myhost', db='mydb'),
                connect_args={'autocommit': True})

    def stop(self):
        """ Close engine on stop event. """
        if self.sa_engine:
            self.sa_engine.dispose()
            self.sa_engine = None

cherrypy.engine.iseries = SA_iSeriesPlugin(cherrypy.engine)
cherrypy.engine.iseries.subscribe()


class SA_iSeriesTool(cherrypy.Tool):
    """ Get a connection from sqlalchemy engine and attach to request. """
    def __init__(self):
        cherrypy.Tool.__init__(self, 'before_handler',
                self._get_connect,
                priority=10)

    def _setup(self):
        """ Extend the setup to close automatically. """
        cherrypy.Tool._setup(self)
        cherrypy.request.hooks.attach('before_finalize',
                self._close_connect,
                priority=80)

    def _get_connect(self):
        """ Attach a connection to the cherrypy.request.iseries point. """
        if not hasattr(request, "iseries"):
            request.iseries = cherrypy.engine.iseries.sa_engine.connect()

    def _close_connect(self):
        """ Make sure the connection is returned to the pool. """
        if hasattr(request, "iseries"):
            request.iseries.close()
            del request.iseries

cherrypy.tools.iseries = SA_iSeriesTool()
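One thing worth noting: defining the tool doesn't activate it. Like any CherryPy tool registered on cherrypy.tools, it has to be switched on per handler or in the application config using CherryPy's tools.<name>.on convention. A minimal sketch of the config form:

# Turn the tool on for every handler under the application root
config = {
    '/': {
        'tools.iseries.on': True,
    },
}
# Passing this config to cherrypy.quickstart() would run _get_connect
# before each handler and _close_connect before the response finalizes.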

Putting it all together

Now that all the pieces are in place, all that needs to be done is to import the modules when the server starts. From then on, a SQLAlchemy connection will be available on the CherryPy request object as "iseries".

import cherrypy
import sqlalchemy_iseries
import database

# This will now work in a request handler...
cherrypy.request.iseries.execute("SELECT 1 FROM sysibm.sysdummy1")

There were a couple things that tripped me up when converting from using pyodbc to sqlalchemy.

  • I was previously wrapping the cursor.fetchone() and cursor.fetchall() methods for pyodbc in order to get dictionary access to each row. This had the effect of leaving me with mutable objects. The row objects SQLAlchemy returns (RowProxy) are not mutable, but by adding a method onto my base model I had an option to get the same type of mutable structure:
def result_to_dict(self, resultset):
    """ Convert a resultset's rows to a list of dicts. """
    return [dict(row) for row in resultset] if resultset else []
  • The second was that my CherryPy tool used to return a cursor, so a statement could be executed and one of the fetchXXX() methods called on it. To address this, statements are now executed on the connection attached to the request like this:
# Use the connection attached by the tool
rs = self.iseries.execute(sqlstmt)
result = rs.fetchall()

# or the shorter:
result = self.iseries.execute(sqlstmt).fetchall()
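The dict conversion itself doesn't depend on anything SQLAlchemy-specific: dict() accepts any mapping-like row. A small self-contained sketch of the behavior, using plain dicts to stand in for the rows a fetchall() would return (and dropping self from the method above):

def result_to_dict(resultset):
    """ Convert a sequence of mapping-like rows to a list of dicts. """
    return [dict(row) for row in resultset] if resultset else []

# Plain dicts stand in for result rows here, for illustration only
rows = [{'ID': 1, 'NAME': 'WIDGET'}, {'ID': 2, 'NAME': 'GADGET'}]
records = result_to_dict(rows)
records[0]['NAME'] = 'SPROCKET'   # the converted copies are mutable

Because dict(row) copies each row, mutating a converted record leaves the original row untouched.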