Perceptron Classification and Model Training

I'm having trouble understanding how the Perceptron algorithm works and how to implement it.

    for m in range(miters):
        # cLabel is the class-label index: it corresponds directly with
        # featureVectors and tweets, so it has to advance with each point
        for cLabel, point in enumerate(featureVectors):
            margin = answers[cLabel] * self.dot_product(point, w)
            if margin <= 0:  # misclassified (or on the boundary): update w
                modifier = float(lrate) * float(answers[cLabel])
                # build a scaled copy of the point; rebinding the loop
                # variable (x *= modifier) would leave the list unchanged
                modifiedPoint = [x * modifier for x in point]
                w = [modifiedPoint[i] + w[i] for i in range(len(w))]

    self._learnedWeight = w

This is what I've implemented so far: I have a list of class labels in answers, a learning rate lrate, and a list of feature vectors in featureVectors.

I run it for the number of iterations in miters and get the final weight at the end. However, I'm not sure what to do with this weight. I've trained the perceptron, and now I have to classify a set of tweets, but I don't know how to do that.
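For reference, the dot_product helper the training loop relies on is just an element-wise product summed over the two lists; the exact code isn't shown above, so this is only a sketch of what it does (with the bias term stored at index 0 of both lists):

    def dot_product(self, point, w):
        # element-wise product summed over the two lists; both include
        # the bias term at index 0, so they have the same length
        return sum(p_i * w_i for p_i, w_i in zip(point, w))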

EDIT:

Specifically, in my classify method I build a feature vector for the data I'm given (that part isn't a problem), take the self._learnedWeight from the training code above, and compute the dot product of the vector and the weight. Both the weight and the feature vectors include a bias in the 0th term of the list, so the bias is accounted for. I then check whether the dot product is less than or equal to 0: if so, I classify the example as -1; otherwise, as 1.
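In code, the classify step is roughly the following (make_feature_vector is only a stand-in name here for the feature-extraction step, which isn't the problem):

    def classify(self, tweet):
        # make_feature_vector is a placeholder for the feature-extraction
        # step described above; it puts the bias term at index 0
        point = self.make_feature_vector(tweet)
        score = self.dot_product(point, self._learnedWeight)
        # threshold at zero: non-positive scores go to class -1, else +1
        return -1 if score <= 0 else 1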

However, this doesn't seem to be working correctly.
