Scrape zip code table for different URLs based on county

Posted by Dr.Venkman on Stack Overflow, 2013-11-11

I was using lxml and ran into a wall: my new computer won't install lxml, so the code doesn't work. I know this is simple - maybe someone can help with a Beautiful Soup script.

This is my code:

from selenium import webdriver

results  = []
counties = ['amador']
states   = ['CA']

for state in states:
    for county in counties:
        browser = webdriver.Firefox()
        # Build the getzips.com lookup URL for this county/state pair
        link2 = ('http://www.getzips.com/cgi-bin/ziplook.exe?What=3&County='
                 + county + '&State=' + state + '&Submit=Look+It+Up')
        browser.get(link2)
        bcontent = browser.page_source
        # Slice out everything from the first zip-code table cell up to the trailing <p>
        zipcode = bcontent[bcontent.find('<td width="15%"'):bcontent.find('<p>')]

        if len(zipcode) > 0:
            print zipcode
        else:
            print 'none'
        browser.quit()

Thanks for the help
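
For reference, here is a minimal sketch of the same lookup done with requests and Beautiful Soup instead of Selenium and lxml. It assumes the getzips.com result page lists the zip codes in <td width="15%"> cells (the same marker the slice above keys on); if the markup differs, the selector would need adjusting.

import requests
from bs4 import BeautifulSoup

counties = ['amador']
states   = ['CA']

for state in states:
    for county in counties:
        # Same lookup URL as the Selenium version above
        url = ('http://www.getzips.com/cgi-bin/ziplook.exe?What=3&County='
               + county + '&State=' + state + '&Submit=Look+It+Up')
        page = requests.get(url)
        soup = BeautifulSoup(page.text)
        # Collect the text of every narrow table cell, assumed here to be the zip column
        zips = [td.get_text(strip=True) for td in soup.find_all('td', width='15%')]
        if zips:
            print zips
        else:
            print 'none'

This drops the browser dependency entirely, so it should run even where Selenium or lxml will not install.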
