Daniel Drezner has made a good start at considering how blogs may fit into academic writing. The key issue is the need for peer review. Blogs have comment sections, but there's no way to verify that any of the commenters know what they're talking about.
A really ambitious goal would be to implement a mathematical reputation system that could centrally store blogger/commenter reputation ratings by subject. It would work as follows:
1. A blogger writes an entry and selects a subject matter category (and possibly one or more sub-categories). Example: Law -- Intellectual Property
2. A commenter gives the entry a score (and leaves some commentary), and the reputation database stores that scoring instance. Example: Author: John Smith; Rater: Bill Williams; Subject: Law; Sub: Intellectual Property; Score: +1
3. Other commenters may then comment on the original article, or on the comments written about the article. All these ratings are stored in a central database.
4. Here's the fun part. The central reputation server will perform clustering analysis to recognize groups of like-minded scholars (within each subject area), based on their exchange of comments and ratings. By analyzing the patterns of clusters, approvals, and disapprovals, it should be possible to properly weight the comments and ratings of each person involved in the system. That's why it's important to store an entire rating transaction, and not merely the score that's given: the weight of the score will be dependent on the reputation of the one doing the rating (within the subject matter in question).
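To make the idea of storing whole rating transactions concrete, here is a minimal sketch of what the central store might look like. The class and field names (`Rating`, `ReputationStore`, `author`, `rater`, `score`) are my own hypothetical choices, following the example in step 2; a real system would use a database rather than an in-memory list.

```python
from dataclasses import dataclass

# Hypothetical sketch: each rating is stored as a full transaction,
# not just a score, so scores can later be re-weighted by the
# rater's own reputation in that subject.

@dataclass(frozen=True)
class Rating:
    author: str       # who wrote the entry or comment being rated
    rater: str        # who assigned the score
    subject: str      # top-level category, e.g. "Law"
    subcategory: str  # e.g. "Intellectual Property"
    score: int        # e.g. +1 or -1

class ReputationStore:
    """Central store of rating transactions, queryable by author and subject."""

    def __init__(self):
        self.transactions = []

    def record(self, rating: Rating):
        self.transactions.append(rating)

    def ratings_of(self, author: str, subject: str):
        return [r for r in self.transactions
                if r.author == author and r.subject == subject]

store = ReputationStore()
store.record(Rating("John Smith", "Bill Williams",
                    "Law", "Intellectual Property", +1))
print(len(store.ratings_of("John Smith", "Law")))  # prints 1
```

Because the rater's identity travels with every score, the server can recompute all weighted reputations whenever the rater's own standing changes.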
The reputation system should have several qualities:
1. It should be difficult for a single person to manipulate someone else's reputation significantly in a short period of time.
2. Reputations should change slowly over time.
3. Scores from raters with high reputations should be weighted much more strongly than those from raters with low reputations, perhaps exponentially so.
4. High scores given between people in the same cluster should count for less than high scores given from outside the cluster.
5. Low scores given between people from different clusters should count for less than low scores given from inside their clusters.
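The five qualities above can be folded into a single update rule. The following is one hypothetical way to do it, not a definitive design: a small learning rate keeps reputations moving slowly (qualities 1 and 2), an exponential term weights high-reputation raters more heavily (quality 3), and a discount factor suppresses in-cluster praise and cross-cluster attacks (qualities 4 and 5). All constants here are illustrative placeholders.

```python
import math

def weighted_update(current_rep: float, score: int, rater_rep: float,
                    same_cluster: bool, rate: float = 0.05) -> float:
    """Return the author's new reputation after one rating transaction.

    rate         -- small step size, so no single rater can move a
                    reputation far in a short time (qualities 1 and 2)
    rater_rep    -- the rater's reputation in this subject; weighted
                    exponentially (quality 3)
    same_cluster -- whether rater and author fall in the same cluster
    """
    weight = math.exp(rater_rep)
    if score > 0 and same_cluster:
        weight *= 0.5   # praise from inside the cluster counts for less
    if score < 0 and not same_cluster:
        weight *= 0.5   # attacks from a rival cluster count for less
    return current_rep + rate * weight * score
```

With this rule, a +1 from a rater with reputation 2.0 moves the author far more than a +1 from a rater at 0.0, and the same +1 moves the author less if both sit in the same cluster.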
If everyone started at zero, it wouldn't take long to build up a database of scores that accurately reflected the knowledge and experience of the participants. The essential components would be performing proper clustering within each subject area and getting enough people involved to collect a representative sample.
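As a rough illustration of that clustering step, the sketch below groups raters whose scoring patterns agree, using union-find over pairwise agreement. This is a deliberately naive stand-in: a production system would likely use a proper clustering algorithm (spectral clustering, k-means on rating vectors, etc.), and the `threshold` value is an arbitrary assumption.

```python
from itertools import combinations

def cluster_raters(ratings: dict, threshold: float = 0.5) -> list:
    """Group like-minded raters by agreement on shared targets.

    ratings maps rater -> {target: score}. Two raters are linked when
    they agree on at least `threshold` of the targets they both rated;
    clusters are the connected components of those links.
    """
    parent = {r: r for r in ratings}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in combinations(ratings, 2):
        shared = set(ratings[a]) & set(ratings[b])
        if not shared:
            continue
        agree = sum(1 for t in shared if ratings[a][t] == ratings[b][t])
        if agree / len(shared) >= threshold:
            union(a, b)

    clusters = {}
    for r in ratings:
        clusters.setdefault(find(r), []).append(r)
    return list(clusters.values())
```

Run per subject area, this yields the groups of like-minded scholars whose mutual praise gets discounted and whose cross-group criticism gets dampened.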