AWS cuts data transfer rates: Pricing comparison update

February 2, 2010
in AWS, EC2

AWS just cut their outbound data transfer rate from $0.17 to $0.15 per GB for the first 10TB per month. I have updated my previous comparison between Go Daddy and AWS with the latest numbers.

Updated AWS/Go Daddy dedicated server cost comparison

August 28, 2009
in AWS, EC2

UPDATE 1: Corrected the bandwidth calculation in the formulas for AWS.

UPDATE 2: Added new data for the February 2010 AWS data transfer price reduction.

In a previous posting I did a cost comparison of a reserved Amazon Web Services EC2 instance and a comparable dedicated server from Go Daddy. Amazon recently announced a set of price cuts for reserved instances, so an updated comparison is in order.

The server configurations I’m comparing are the same as last time:

|  | Go Daddy | AWS | AWS (new) |
| --- | --- | --- | --- |
| Processor | Core 2 Duo 2.66 GHz | 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each) [1 unit equals a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor] | 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each) [1 unit equals a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor] |
| Hard Drive(s) | Dual 300GB drives | 850GB of instance storage | 850GB of instance storage |
| Memory | 3.2GB | 7.5GB | 7.5GB |
| 1-year plan, w/o bandwidth | $2,483.46 | $2,351.20 | $1,962.40 |
| 3-year plan, w/o bandwidth | $6,622.56 | $5,153.60 | $4,557.20 |



I have added an extra column for the new EC2 reserved instance pricing scheme. Notably, the prices for the Go Daddy options haven’t changed in the last six months. For the 1- and 3-year AWS plans, the total costs have dropped by $390 (17%) and $600 (12%), bandwidth excluded.

(For the full discussion, refer to the previous posting.)

When including bandwidth use, the updated table looks as follows:

|  | 1GB/mth | 20GB/mth | 100GB/mth | 400GB/mth | 800GB/mth |
| --- | --- | --- | --- | --- | --- |
| Go Daddy, 1-year plan | $2,483.46 | $2,483.46 | $2,483.46 | $2,483.46 | $2,723.34 |
| AWS, 1-year plan (pre-Aug 2009) | $2,353.24 | $2,392.00 | $2,555.20 | $3,167.20 | $3,983.20 |
| AWS, 1-year plan (Aug 2009) | $1,963.24 | $2,002.00 | $2,165.20 | $2,777.20 | $3,593.20 |
| AWS, 1-year plan (Feb 2010) | $1,963.00 | $1,997.20 | $2,141.20 | $2,681.20 | $3,401.20 |
| Go Daddy, 3-year plan | $6,622.56 | $6,622.56 | $6,622.56 | $6,622.56 | $7,342.20 |
| AWS, 3-year plan (pre-Aug 2009) | $5,159.72 | $5,276.00 | $5,765.60 | $7,601.60 | $10,049.60 |
| AWS, 3-year plan (Aug 2009) | $4,559.72 | $4,676.00 | $5,165.60 | $7,001.60 | $9,449.60 |
| AWS, 3-year plan (Feb 2010) | $4,559.00 | $4,661.60 | $5,093.60 | $6,713.60 | $8,873.60 |


Updated comparison between AWS and Go Daddy pricing plans

With the old pricing, the AWS option was preferable unless bandwidth exceeded roughly 100GB per month (for the 1-year plan) or 250GB per month (for the 3-year plan). After the August 2009 price cuts, AWS became even more competitive, although it still falls behind in high-bandwidth scenarios.

In February 2010, the price for outgoing data traffic dropped from $0.17 to $0.15 per GB. With the 3-year plan, AWS now stays cheaper than Go Daddy until almost 400GB per month.

Addendum: Some of the background data used in this posting:

  • Go Daddy quotes from August 28, 2009
  • Go Daddy 3-year plan cost: 2-year plan quote * 1.5
  • AWS 1-year plan cost (< Aug 2009): $1,300 + (24 * 365 * 1 * $0.12) + (GB/mth * $0.17 * 12)
  • AWS 1-year plan cost (Aug 2009): $910 + (24 * 365 * 1 * $0.12) + (GB/mth * $0.17 * 12)
  • AWS 1-year plan cost (Feb 2010): $910 + (24 * 365 * 1 * $0.12) + (GB/mth * $0.15 * 12)
  • AWS 3-year plan cost (< Aug 2009): $2,000 + (24 * 365 * 3 * $0.12) + (GB/mth * $0.17 * 36)
  • AWS 3-year plan cost (Aug 2009): $1,400 + (24 * 365 * 3 * $0.12) + (GB/mth * $0.17 * 36)
  • AWS 3-year plan cost (Feb 2010): $1,400 + (24 * 365 * 3 * $0.12) + (GB/mth * $0.15 * 36)
  • EC2 pricing information
  • Go Daddy dedicated server pricing information
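
The totals in the tables above can be reproduced directly from these formulas. Below is a minimal Python sketch that does so; the reservation fees, hourly rate, and per-GB transfer prices are the ones listed in the addendum, while the function and variable names are just my own choice.

# Reproduce the AWS totals from the formulas in the addendum above.
# (fee, years, $/GB outbound) per plan; names are illustrative.
PLANS = {
    'AWS 1-year (< Aug 2009)': (1300, 1, 0.17),
    'AWS 1-year (Aug 2009)':   (910,  1, 0.17),
    'AWS 1-year (Feb 2010)':   (910,  1, 0.15),
    'AWS 3-year (< Aug 2009)': (2000, 3, 0.17),
    'AWS 3-year (Aug 2009)':   (1400, 3, 0.17),
    'AWS 3-year (Feb 2010)':   (1400, 3, 0.15),
}

def aws_total(fee, years, gb_rate, gb_per_month, hourly=0.12):
    # One-time reservation fee + 24/7 instance hours + outbound transfer over the term.
    return fee + 24 * 365 * years * hourly + gb_per_month * gb_rate * 12 * years

for name, (fee, years, gb_rate) in sorted(PLANS.items()):
    row = ['%10.2f' % aws_total(fee, years, gb_rate, gb) for gb in (1, 20, 100, 400, 800)]
    print('%-26s %s' % (name, ' '.join(row)))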

How to compile SimpleParse 2.1.0a1 for Python 2.6 on Windows Vista

SimpleParse is a fast single-pass parser generator for Python that I use regularly. When I finally made the move to Python 2.6, it turned out that there was no pre-compiled package for 2.6 on Windows. So, here is my procedure for compiling the source package on Windows Vista.

1. Install Cygwin if you don’t already have it on your system, and make sure that the version of Python you are installing SimpleParse for is on either the system or the Cygwin path.

2. Download and install Microsoft Visual C++ 2008 Express Edition. Make sure that the latest Vista service packs are installed before attempting this; if the installer quits on you, just reboot the computer and try again. Without Visual C++ installed, you will get an ‘Unable to find vcvarsall.bat’ error.

3. Download and unpack the SimpleParse 2.1.0a1 source. Using the Cygwin shell, place yourself in the root source directory.

4. If we try to run python setup.py install at this point, the Visual C++ compiler will complain:

stt/TextTools/mxTextTools/mxTextTools.c(149) : error C2133: 'mxTextSearch_Methods' : unknown size
stt/TextTools/mxTextTools/mxTextTools.c(920) : error C2133: 'mxCharSet_Methods' : unknown size
stt/TextTools/mxTextTools/mxTextTools.c(2103) : error C2133: 'mxTagTable_Methods' : unknown size
error: command '"C:\Program Files\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2

We have to add the following lines to stt/TextTools/mxTextTools/mxTextTools.c, starting at line 148 (before staticforward is used for the first time):

#ifdef _MSC_VER
#define staticforward extern
#endif

5. with is a keyword in Python 2.6, which means it can no longer be used as a variable name, as it is in the SimpleParse source code. So, we have to replace it with something else:

$ sed -r 's/with/with_t/g' < stt/TextTools/TextTools.py > tmp.txt
$ cp tmp.txt stt/TextTools/TextTools.py

6. Finally, run python setup.py install as usual.
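
To confirm that the build actually works, it helps to import the freshly installed package and push a trivial grammar through it. This is just the sanity check I would run, assuming the usual SimpleParse API (simpleparse.parser.Parser); adapt the grammar to your own needs.

# Quick sanity check for the freshly built SimpleParse extension.
from simpleparse.parser import Parser

declaration = r'''
digits := [0-9]+
'''

parser = Parser(declaration, 'digits')
success, children, next_char = parser.parse('12345')
# Expect success == 1 and next_char == 5 if the C extension compiled correctly.
print(success, children, next_char)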

On the sadness of nouns

“Writing, Jen thought, seemed like a very sad pursuit. Like painting, but worse. At least paintings had color. Writing, though, was just black marks on paper, standing in for people and objects and events that could never be seen or felt. It seemed pathetic in a way. Nouns were the saddest words of all, trying so hard to summon real objects to life.”

Jon Raymond, “Words and Things” (Livability)

When EndNote X2 fails

The connection between EndNote X2 and Microsoft Word 2007 seems to get corrupted on a regular basis on my Vista setup. Based on hours of web searching and trial and error, here is a short summary of ways of getting it working again. Use these when you get error messages such as ‘server threw an exception’, ‘server execution failed’, and ‘invalid class string’.

In prioritized order:

  • Run EndNote as an administrator (for Windows Vista).
  • Reset EndNote defaults (“Edit -> Preferences -> EndNote defaults”). This seems to work most of the time. Make sure to close Word first. After having reset EndNote, close it, and then try launching it from Word.
  • The library may be corrupted. Try running “Tools -> Recover Library”.
  • If all else fails, reinstall EndNote.

There are other possible problems, especially when upgrading from older versions, but these actions usually work for me.

(For Norwegian readers: If you are using the Norwegian version of EndNote, the error messages will be ‘ugyldig klassestreng’ or ‘serverutføringen mislyktes’.)

Why Amazon Web Services just became a competitive web hosting provider

March 12, 2009
in AWS, EC2

UPDATE 1: There is now an updated version of this posting. The new version incorporates the August 2009 AWS reserved instance pricing changes.

UPDATE 2: Corrected the bandwidth calculation in the formulas for AWS.

Amazon Web Services just announced a new reserved instance pricing plan. In short, this plan allows you to reserve EC2 instances for a 1- to 3-year period by paying a one-time reservation fee. The hourly rate for reserved instances is considerably lower than for regular on-demand instances. For comparison's sake, a large standard on-demand instance will set you back $0.40 per hour, while a large standard reserved instance is only $0.12 per hour.

With the old pricing scheme, hosting a web service on AWS instead of on a dedicated server was not a very cost-competitive option, at least not for resource-intensive applications. My web site, Eventseer, requires at least a large standard instance, and at $0.40 per hour for 24/7 operation (bandwidth costs not included) this turned out to be far too expensive compared with offerings from traditional dedicated server providers.

To see if the new pricing scheme fares any better, I have compared the cost of an AWS EC2 reserved large instance with a similar dedicated server from Go Daddy:

|  | Go Daddy | AWS |
| --- | --- | --- |
| Processor | Core 2 Duo 2.66 GHz | 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each) [1 unit equals a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor] |
| Hard Drive(s) | Dual 300GB drives | 850GB of instance storage |
| Memory | 3.2GB | 7.5GB |
| 1-year plan, w/o bandwidth | $2,483.46 | $2,351.20 |
| 3-year plan, w/o bandwidth | $6,622.56 | $5,153.60 |

I’m not sure how the 4 EC2 Compute Units compare with a dedicated Core 2 Duo 2.66 GHz; this probably depends on the nature of your application. Note that the AWS solution has twice the amount of memory. From what I can see, you cannot get a Go Daddy dedicated server with more than 3.2GB of memory, while AWS offers up to 15GB on the extra large instances.

When disregarding bandwidth costs, AWS suddenly makes a lot of sense. As bandwidth use is highly application-dependent, let’s consider a few different bandwidth use scenarios:

|  | 1GB/mth | 20GB/mth | 100GB/mth | 400GB/mth | 800GB/mth |
| --- | --- | --- | --- | --- | --- |
| Go Daddy, 1-year plan | $2,483.46 | $2,483.46 | $2,483.46 | $2,483.46 | $2,723.34 |
| AWS, 1-year plan | $2,353.24 | $2,392.00 | $2,555.20 | $3,167.20 | $3,983.20 |
| Go Daddy, 3-year plan | $6,622.56 | $6,622.56 | $6,622.56 | $6,622.56 | $7,342.20 |
| AWS, 3-year plan | $5,159.72 | $5,276.00 | $5,765.60 | $7,601.60 | $10,049.60 |


Comparison between AWS and Go Daddy pricing plans

Conclusion: With a 1-year plan, AWS is the cheapest option until you reach about 100GB of external bandwidth per month. With the 3-year plan, the AWS bandwidth cost isn’t a problem until about 250GB per month. (Bandwidth is “free” with Go Daddy dedicated servers up until 500GB per month; after that it’s an extra $19.99 per month until you reach 1,000GB).

Considering that the AWS solution gets you twice the amount of RAM, AWS suddenly seems a very viable option even for web service hosting, as long as you're not expecting extreme amounts of traffic. However, once you get popular, the outgoing data transfer pricing will take its toll.

Addendum: Some of the background data used in this posting:

  • Go Daddy quotes from March 12, 2009
  • Formula for Go Daddy 3-year plan cost: 2-year plan quote * 1.5
  • Formula for AWS 1-year plan cost: $1,300 + (24 * 365 * 1 * $0.12) + (GB/mth * $0.17 * 12)
  • Formula for AWS 3-year plan cost: $2,000 + (24 * 365 * 3 * $0.12) + (GB/mth * $0.17 * 36)
  • EC2 pricing information
  • Go Daddy dedicated server pricing information

Running pytst 1.15 on a 64-bit platform

January 25, 2009
in Python, pytst

Update: The latest version, 1.17, compiles on 64-bit platforms out of the box, so the patch below is no longer necessary.

Nicolas Lehuen’s pytst is a C++ ternary search tree implementation with a Python interface. It’s an excellent tool—and it is also really, really fast.

Unfortunately version 1.15 doesn’t compile on 64-bit platforms, giving the following error messages:

pythonTST.h:178: error: cannot convert 'int*' to 'Py_ssize_t*' for argument '3' to 'int PyString_AsStringAndSize(PyObject*, char**, Py_ssize_t*)'
tst_wrap.cxx: In function 'PyObject* _wrap__TST_walk__SWIG_1(PyObject*, int, PyObject**)':
tst_wrap.cxx:3175: error: cannot convert 'int*' to 'Py_ssize_t*' for argument '3' to 'int PyString_AsStringAndSize(PyObject*, char**, Py_ssize_t*)'
tst_wrap.cxx: In function 'PyObject* _wrap__TST_close_match(PyObject*, PyObject*)':
tst_wrap.cxx:3250: error: cannot convert 'int*' to 'Py_ssize_t*' for argument '3' to 'int PyString_AsStringAndSize(PyObject*, char**, Py_ssize_t*)'
tst_wrap.cxx: In function 'PyObject* _wrap__TST_prefix_match(PyObject*, PyObject*)':
[...and so on...]

Until Nicolas releases an updated version, here is the quick fix:

cp pythonTST.h pythonTST.h.orig
cp tst_wrap.cxx tst_wrap.cxx.orig
sed -r 's/int size/Py_ssize_t size/' < tst_wrap.cxx.orig > tst_wrap.cxx
sed -r 's/int length/Py_ssize_t length/' < pythonTST.h.orig > tmpfile
sed -r 's/sizeof\(int\)/sizeof(long)/' < tmpfile > pythonTST.h

Run these commands from the pytst source directory and you should be all set. I’m not sure if this is a fully satisfactory solution, but at least it will get the test suite running again.

Dostoevsky on the dangers of science

“He was devoured by the deepest and most insatiable passion, which absorbs a man’s whole life and does not, for beings like Ordynov, provide any niche in the domain of practical daily activity. This passion was science. Meanwhile it was consuming his youth, marring his rest at night with its slow, intoxicating poison, robbing him of wholesome food and of fresh air which never penetrated to his stifling corner. Yet, intoxicated by his passion, Ordynov refused to notice it. He was young and, so far, asked for nothing more. His passion made him a babe as regards external existence and totally incapable of forcing other people to stand aside when needful to make some sort of place for himself among them. Some clever people’s science is a capital in their hands; for Ordynov it was a weapon turned against himself.”

From “The Landlady” (1848)

3 lessons the individual investor can learn from JPMorgan Chase

September 29, 2008
in Finance, Investment

JPMorgan Chase recently picked up the remains of troubled Washington Mutual for a mere $1.9 billion—a deal described by some as buying the company “for nothing.”

Although the takeover carries a fair amount of risk for JPMorgan, the bank looks increasingly likely to emerge as a credit crisis survivor. They still have a “reasonably strong” balance sheet and have by and large managed to steer clear of a debacle of historical proportions.

How did this come about? Also, are there elements to this story that you as an individual investor can learn from?

Fortune recently published a comprehensive account of how JPMorgan mostly avoided the subprime debacle (“Jamie Dimon’s Swat Team”), which is well worth a read. Based on the article, I have tried to summarize the main principles that have so far kept JPMorgan as a company out of trouble.

These principles are, in my opinion, just as valid on a personal level when you, as an individual investor, are to decide whether or not a company stock is a good buy. So, I have also tried to relate each principle to general best practices of investment.

1. It’s all in the numbers

In early 2006, JPMorgan were, like everyone else, dealing in subprime CDOs. By the end of that year, the bank had dumped more or less all of their subprime mortgage holdings. What happened?

First of all, the numbers were no longer looking good.

JPMorgan has a strong tradition of data-mining every aspect of their business and continuously trying to figure out the story behind the numbers. What Jamie Dimon, the CEO, and his team saw was that the subprime market was way too risky for the profits it was generating. Data from their retail banking division showed that subprime loan payments were increasingly late. Moreover, their own data analysis indicated that the supposedly safe AAA ratings lavished upon CDO bonds were bogus.

The numbers were increasingly and consistently negative and in sharp contrast to the conventional wisdom on the subprime market. Trusting the data and its interpretation rather than the general opinion, JPMorgan left the market altogether.

Learning points:

When evaluating a company as an investment opportunity, you can rely on a barrage of opinion from a sea of sources with a multitude of motivations.

Or, you can go straight to the facts.

The numbers in quarterly reports or annual accounts don’t lie unless deliberately tampered with. If the balance sheet tells you that a company is heading for trouble, then that company is heading for trouble, no matter what anyone else might be saying.

  • Learn how to read and understand the balance sheet, the income statement and the cash flow statement. Once understood, they tell you more about the company than any financial advisor or industry analyst ever will.
  • Be diligent in your pursuit of data, both on the company, the sector, and the general state of the economy.
  • Read the numbers first and then make up your own interpretation. Other people’s interpretations are not gospel, but rather a challenge to your own interpretation.
  • The opinionated parts of a company’s annual report are mostly fluff and should be read as such. Read the annual report from the back. It’s not a crime to talk about a company in optimistic terms; manipulating the numbers is.

2. Investment is not about short-term profit

JPMorgan exited the subprime market while it was still a booming business. This took a lot of guts when other Wall Street firms were making a killing from subprime.

In the short term, they lost ground to competitors by not jumping on the latest Street bandwagon. Their conservative stance and the effect it had on quarterly earnings must have generated immense pressure, both internally and externally. From 2005 to 2007, JPMorgan fell from third to sixth place in fixed-income underwriting. This is the sort of development that causes a ruckus in board meetings.

Nonetheless: In the long term they prevailed.

Their decision to trust their analysis of subprime being too risky turned out to be a sensible one, even if this meant a very negative short-term impact on their balance sheets.

By focusing on core company values rather than pursuing immediate profit, JPMorgan emerged on top.

Learning points:

You can certainly make money from overhyped stocks whose valuation belies the true worth of their business. This, however, requires you to play the game of getting out before the bubble of irrational exuberance pops.

Timing the market is ultimately about luck. Luck is a property that is best reserved for the lottery rather than your savings.

Fashion does not imply quality. Just because everyone else is ecstatic about something—be it dot-com companies or the mullet—does not mean you should be as well. Only get with the crowd if there is a fundamentally sane reason for doing so. Dare to be different.

  • Be prepared to stick it out as long as the underlying fundamentals of your analysis do not change. Quality always prevails in the long term.
  • Even fundamentally good stocks go down if they are unpopular. This is the way the market rolls; don’t lose any sleep over it.
  • You will not be able to consistently time the ups and downs of the market, so don’t even try to. Learn to live with the fact that good stocks will sometimes go down for no good reason.
  • Don’t check your portfolio every five minutes. Apart from keeping you from getting any other work done, it will only lead you to perceive the market as more volatile than it really is. Think the market is too volatile? Just reduce your sampling frequency. Remember that you are in it for the long run.
  • Listen to other people but don’t let them make your decisions. Even if they have a compelling chain of arguments there are likely more conclusions that can be drawn from the same set of underlying facts. Your explanation of why to invest should always be your own.

3. Question and diversify

JPMorgan operating-committee meetings are described as “loud and unsubtle”. According to Bill Daley, head of corporate responsibility and former Secretary of Commerce, “[p]eople were challenging Jamie, debating him, telling him he was wrong. It was like nothing I’d seen in a Bill Clinton cabinet meeting, or anything I’d ever seen in business.”

This culture of allowing, encouraging and listening to dissent ultimately made it easier for JPMorgan to make the right decisions. Getting all facts and viewpoints on the table while continuously questioning what they were doing was a major success factor.

Still, JPMorgan made their own share of mistakes.

In 2007, a short-term secured-loans unit bought a $2 billion subprime CDO; upper management claims they never knew. Other billion-dollar write-offs had to be endured as well. Their principle of only taking risks when you are paid well for doing so is anything but perfect. It also remains to be seen how well timed their shotgun purchase of Washington Mutual turns out; among its assets are an estimated $30 billion worth of loans that will have to be written down. However, on the whole they look to be emerging from the credit crisis as a much healthier company than their surviving competitors.

Learning points:

There is no such thing as a risk-free investment, so be prepared to accept losses. JPMorgan’s competitors put themselves in a position where some of them could not weather a downturn in one of their business segments. JPMorgan, on the other hand, were doing what they could to make sure their good moves outweighed their bad moves.

Making bold investment choices always carries the probability of failure. Use diversification as a cushion for when failure strikes.

In bicycle racing, one does not talk in terms of if a rider will take a tumble but rather about when. The same should apply to your investments.

Moreover, be prepared to continuously question your own judgment. The premises behind earlier decisions will change, so be prepared to reverse those decisions. If faith becomes a stronger motivation than reason for holding on to a stock, it’s probably time to let go.

  • Hedge your investments. No matter the soundness of your strategy or how diligently you stick to your principles, things will still go wrong from time to time. Don’t allow mishaps to take you down.
  • Be prepared to change your position. Things change, the world keeps turning, and so should you.
  • Don’t get emotional about a stock. Your favorite company might turn from making mostly good decisions to making mostly bad decisions. These things happen, so be prepared to get out even if this means taking a loss.
  • If your sole reason for hanging on to a stock is the belief of future recovery then you have already lost. Get rid of it, count your losses, and learn from the experience.

Hacking comments in Django 1.0

September 5, 2008
in Django, Python

The recent release of Django 1.0 included a full rewrite of the comments framework. Comments have been available in Django for a while but were never properly documented until now.

This article will show you how to adapt and extend the comments framework so that it fits the needs of your application. Why extend it? Well, mainly because the framework does what it says on the box—and nothing more. It allows you to attach comments to any Django object instance but for the rest of the business logic—e.g. regulating who can modify and delete comments—you are on your own.

Also, the current documentation does not cover all features so what I am writing here should hopefully fill a few gaps.

Prerequisites

You need to be familiar with Django. If you’re not, then have a look at the tutorial in the official documentation or alternatively at my previous article on how to get started with Django on Google App Engine.

How comments work

It’s really simple—just skim through the well-written documentation and you should pretty much be able to figure it out.

For example, to show a comment form for an instance of a model called my_model_instance, you just need two lines of template code:

{% load comments %}
{% render_comment_form for my_model_instance %}

The magic behind the comments framework lies in its use of generic model relations. This is a very powerful (and well-hidden) Django feature that allows your models to have generic foreign keys, meaning they can link to any other model. The comments framework uses this technique to ensure that comments can be attached to an arbitrary model in your application.
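
If you haven’t come across generic relations before, the sketch below illustrates the idea: a model stores a ContentType reference plus an object id, and a GenericForeignKey resolves that pair into a concrete instance. This is roughly the mechanism the Comment model relies on; the model and field names here are purely illustrative, not the actual Comment fields.

# Illustrative model using a generic foreign key, roughly the mechanism
# the comments framework builds on. Not part of the comments framework itself.
from django.db import models
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes import generic

class Note(models.Model):
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    target = generic.GenericForeignKey('content_type', 'object_id')
    text = models.TextField()

# A Note can now be attached to an instance of any model:
#   note = Note(target=some_event, text='Looks interesting')
#   note.save()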

The scenario

I will be describing a real-life case from my company web site, Eventseer.net. Eventseer is an event tracker that helps researchers stay informed on upcoming conferences and workshops. It uses the comments framework for two different purposes.

Firstly, registered users can add comments to each event in our database. Secondly, all users can claim a personal profile page where they get what we call a whiteboard—which is simply a blogging application. Each entry on a whiteboard can be commented on by other registered users.

The problem

There are some limitations when it comes to adding comments on Eventseer. For example, only registered users are allowed to add comments. After a comment has been added, only the user who added it or an administrator are allowed to delete it.

These are fairly typical requirements—which are not supported out of the box in the comments framework. There is some support for using the built-in permissions system, but this will still not let you exercise fine-grained per user access control.

Moreover, the default comment templates are ugly as sin and will have to be adapted to fit your application.

Step 1: Enabling comments

This is described well enough in the standard documentation. However, if we want to add extra functionality there are a couple of extra things to be done.

First, we add the comments framework to INSTALLED_APPS in settings.py:

# eventseer/settings.py

INSTALLED_APPS = (
    ...
    'django.contrib.comments',
    'eventseer.mod_comments',
    ...
)

Note that I also added an app called eventseer.mod_comments. This is where our comments wrapper code will reside. (I will be using the eventseer project name for the rest of this tutorial).

Now synchronize the database:

$ python manage.py syncdb

This creates the tables necessary for storing the comments.

Finally, add an entry in your base urls.py:

# eventseer/urls.py

from django.conf.urls.defaults import *

urlpatterns = patterns('',
    ...
    (r'^comments/', include('eventseer.mod_comments.urls')),
)

This is where we deviate from the standard documentation: Instead of routing all comment URLs to the bundled comments application we instead route them to our own custom application. This allows us to intercept comment URLs as required.

Step 2: Add the modified comments application

This is done the usual way:

$ python manage.py startapp mod_comments

In the previous step we added a reference to urls.py in the mod_comments application, so this file must be added:

# eventseer/mod_comments/urls.py

from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^delete/(?P<comment_id>\d+)/$', 'eventseer.mod_comments.views.delete'),
    (r'', include('django.contrib.comments.urls')),
)

The first line routes requests to /comments/delete/ to a custom delete view which we will create in the next step. For this example this is the only behavior we wish to modify. The last line ensures that all other requests are passed through to django.contrib.comments.urls.

Step 3: Create the wrapper view

We want to make sure that only the user who wrote a comment or administrators are allowed to delete it. This can be taken care of in mod_comments/views.py:

# eventseer/mod_comments/views.py

from django.contrib.auth.decorators import login_required
from django.contrib.comments.models import Comment
from django.http import Http404
from django.shortcuts import get_object_or_404
import django.contrib.comments.views.moderation as moderation

@login_required
def delete(request, comment_id):
    comment = get_object_or_404(Comment, pk=comment_id)
    if request.user == comment.user or \
       request.user.is_staff:
        return moderation.delete(request, comment_id)
    else:
        raise Http404

First we wrap the delete function with the login_required decorator so as to keep out non-authenticated users. We then check if the user who made the delete request actually owns the comment or if the user has administrator permissions. If either case holds true we pass the request on to the original delete method. Otherwise a 404 (page not found) error is raised.

We can of course modify the view method signature as required. In fact, the original delete method can be completely bypassed if that is what we want.

Step 4: Modifying delete behavior

By default the delete view shows a confirmation page (comments/delete.html) on GET requests and does the actual deletion on POST requests. After the deletion is done you will be shown the standard deleted.html template. Alternatively, adding a next parameter to the POST request will send the user to the given URL.

Say we wish to make some changes to the confirmation page, comments/delete.html. Instead of modifying the original in the Django distribution we create our own version. Create the directory eventseer/mod_comments/templates/comments and copy delete.html into it.

You will typically find this file in /usr/lib/python2.5/site-packages/django/contrib/comments/templates/comments on Linux systems or C:/Python2.5/Lib/site-packages/django/contrib/comments/templates/comments on Windows systems—your mileage may vary.

Typically you will wish to change this template to fit in with your site design, for instance by inheriting from your base templates.

To make the modified template take precedence, just add the new directory to settings.py:

# eventseer/settings.py

TEMPLATE_DIRS = (
    ...
    '/home/eventseer/src/eventseer/mod_comments/templates',
    ...
)

This will make sure that the Django template loader checks the eventseer/mod_comments/templates directory, where it will find our alternative version of comments/delete.html. Requests to other comment views that use the default templates will still be resolved from the default location.

Conclusion

The Django comments framework is the easiest and quickest way to add commenting functionality to your application. The flip side of this simplicity is that you will often have to extend the framework to make it behave according to your requirements. As this tutorial has shown, this can be done without making changes to the comments framework itself. One of the core strengths of Django is how it provides a set of reusable building blocks upon which you can add your own advanced functionality as required.

At the time of writing, the comments framework documentation is somewhat sparse. If you want to learn more about the inner workings of Django comments you will have to consult the source code—there are quite a few undocumented features that are really useful.

UPDATE

Tim Hoelscher noticed that I hadn’t said anything about how to work around the Django permission system, which was an unintentional omission.

The original delete method in django.contrib.comments.views.moderation requires that the user who wants to delete a comment has the comments.can_moderate permission. Regular users do not have this permission by default, so we have to set it for all users who are allowed to delete comments. (Remember, the wrapper delete makes sure that they can only delete their own comments.)

An easy way to solve this is to create a ‘user’ group, assign the comments.can_moderate permission to this group, and finally assign all users to this group. This can be done through the admin interface, with a few lines of SQL, or within your Django application. Refer to the Django permissions documentation for more information on how permissions work.
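
For completeness, the group setup can also be done from code instead of through the admin interface. Below is a rough sketch, run from a Django shell, assuming the group is simply called ‘user’.

# Give all existing users the comments.can_moderate permission via a shared group.
# The group name 'user' is arbitrary; new users must be added to the group as well.
from django.contrib.auth.models import User, Group, Permission

can_moderate = Permission.objects.get(codename='can_moderate',
                                      content_type__app_label='comments')
group, _ = Group.objects.get_or_create(name='user')
group.permissions.add(can_moderate)

for user in User.objects.all():
    user.groups.add(group)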
