Search Results

Search found 17149 results on 686 pages for 'python twitter'.

Page 388/686 | < Previous Page | 384 385 386 387 388 389 390 391 392 393 394 395  | Next Page >

  • Shaders with pygtkglext

    - by qba
    Does anyone know how to get GLSL shaders working in a GTK OpenGL window? With GLUT, glCreateProgram and the other shader functions all work, but when I put the same GL code into a pygtkglext window it complains about a null function:

        OpenGL.error.NullFunctionError: Attempt to call an undefined function glCreateProgram,
        check for bool(glCreateProgram) before calling

    So I then tried from OpenGL.GL.ARB.shader_objects import *, but the result is similar:

        OpenGL.error.NullFunctionError: Attempt to call an undefined function glCreateProgramObjectARB,
        check for bool(glCreateProgramObjectARB) before calling

    Any ideas would be useful.
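
    A likely culprit, sketched below on the assumption that the shader calls run before the widget's GL context is current: PyOpenGL only resolves GL 2.0 entry points once a context is active, so glCreateProgram has to be called inside a gl_begin()/gl_end() pair on the realized drawable. The pygtkglext widget API and shader sources here are assumptions, not the poster's code.

        import gtk
        import gtk.gtkgl
        from OpenGL.GL import *
        from OpenGL.GL import shaders   # PyOpenGL convenience wrappers

        VERT_SRC = 'void main() { gl_Position = ftransform(); }'
        FRAG_SRC = 'void main() { gl_FragColor = vec4(1.0); }'

        def on_realize(widget):
            # The context must be current before glCreateProgram can be resolved.
            context = widget.get_gl_context()
            drawable = widget.get_gl_drawable()
            if not drawable.gl_begin(context):
                return
            try:
                if bool(glCreateProgram):   # now defined
                    program = shaders.compileProgram(
                        shaders.compileShader(VERT_SRC, GL_VERTEX_SHADER),
                        shaders.compileShader(FRAG_SRC, GL_FRAGMENT_SHADER))
            finally:
                drawable.gl_end()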

    Read the article

  • How to set up Aptana Studio 3 themes in PyDev

    - by willy1234x1
    I've installed the Aptana Studio 3 preview and noticed it has support for themes (such as a Bespin style or Ruby Envy), and I'd love to use the Bespin one in PyDev, but so far I've had no luck getting it to work. Does anyone have a clue as to how to do it? There is a video showing the themes in action.

    Read the article

  • How to get the original variable name of a variable passed to a function

    - by Acorn
    Is it possible to get the original variable name of a variable passed to a function? E.g.:

        foobar = "foo"

        def func(var):
            print var.origname

    So that:

        func(foobar)

    returns:

        >> foobar

    EDIT: All I was trying to do was make a function like:

        def log(soup):
            f = open(varname + '.html', 'w')
            print >>f, soup.prettify()
            f.close()

    ... and have the function generate the filename from the name of the variable passed to it. I suppose if it's not possible I'll just have to pass the variable and the variable's name as a string each time.
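
    There is no reliable way to recover the caller's name for an object, but here is a hedged sketch of the usual workarounds: pass the name explicitly, or (fragile, shown only for illustration) scan the caller's local namespace with the inspect module for a name bound to the same object.

        import inspect

        def log(soup, name=None):
            # Fallback is a heuristic: if several names point at the same
            # object, the first match wins.
            if name is None:
                caller_locals = inspect.currentframe().f_back.f_locals
                matches = [k for k, v in caller_locals.items() if v is soup]
                name = matches[0] if matches else 'soup'
            with open(name + '.html', 'w') as f:
                f.write(soup.prettify())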

    Read the article

  • Euclidean Distances between points

    - by R S
    I have an array of points in numpy:

        points = rand(dim, n_points)

    and I want to (1) calculate all the L2 norms (Euclidean distances) between a certain point and all the other points, and (2) calculate all pairwise distances, preferably all in numpy and with no for loops.
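
    A hedged sketch of both computations using broadcasting (dim and n_points are chosen arbitrarily; scipy.spatial.distance.cdist/pdist would also do the job):

        import numpy as np

        dim, n_points = 3, 10
        points = np.random.rand(dim, n_points)

        # Distances from point j to every point (a length-n_points vector).
        j = 0
        d_single = np.sqrt(((points - points[:, [j]]) ** 2).sum(axis=0))

        # All pairwise distances at once (an n_points x n_points matrix).
        diff = points[:, :, None] - points[:, None, :]
        d_pair = np.sqrt((diff ** 2).sum(axis=0))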

    Read the article

  • Reducing size of a character array in Numpy

    - by Morgoth
    Given a character array:

        In [21]: x = np.array(['a ','bb ','cccc '])

    one can remove the whitespace using:

        In [22]: np.char.strip(x)
        Out[22]: array(['a', 'bb', 'cccc'], dtype='|S8')

    but is there a way to also shrink the width of the column to the minimum required size, in the above case |S4?
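
    One hedged way (an assumption, not necessarily the idiomatic one): compute the longest stripped string and recast the array with astype to exactly that width.

        import numpy as np

        x = np.array(['a ', 'bb ', 'cccc '])
        stripped = np.char.strip(x)

        # Recast to the smallest width actually needed ('|S4' here;
        # use 'U%d' instead of 'S%d' for unicode string arrays).
        width = max(len(s) for s in stripped)
        shrunk = stripped.astype('S%d' % width)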

    Read the article

  • In Elixir or SQLAlchemy, is there a way to also store a comment for each field in my entities?

    - by kchau
    Our project is basically a web interface to several systems of record. We have many tables mapped, and the names of the columns aren't as well named or as intuitive as we'd like... The users would like to know what data fields are available (i.e. what's been mapped from the database), but it's pointless to just give them column names like USER_REF1, USER_REF2, etc. So I was wondering, is there a way to provide a comment in the declaration of my field? E.g.:

        class SegregationCode(Entity):
            using_options(tablename="SEGREGATION_CODES")
            segCode = Field(String(20), colname="CODE", ...
                            primary_key=True)  # Have a comment attr too?

    If not, any suggestions?
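
    A hedged pointer rather than a definitive answer: plain SQLAlchemy's Column accepts a doc= keyword whose text stays available on the column object afterwards, and Elixir's Field generally forwards extra keyword arguments to the underlying Column, so something like the sketch below may be enough.

        from sqlalchemy import Column, MetaData, String, Table

        metadata = MetaData()

        segregation_codes = Table(
            'SEGREGATION_CODES', metadata,
            Column('CODE', String(20), primary_key=True,
                   doc='Segregation code from the system of record'),
        )

        # The comment travels with the column and can be shown to users.
        print(segregation_codes.c.CODE.doc)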

    Read the article

  • SQLAlchemy: select over multiple tables

    - by ahojnnes
    Hi, I wanted to optimize my database query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                not_(link_table.c.id.in_(
                    select(
                        columns=[request_table.c.recipient],
                        whereclause=request_table.c.donator==donator.id
                    ).as_scalar()
                )),
                link_table.c.id!=donator.id,
            ),
            limit=20,
        ).execute().fetchall()

    and tried to merge those two selects into one query:

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active==True,
                link_table.c.id!=donator.id,
                request_table.c.donator==donator.id,
                link_table.c.id!=request_table.c.recipient,
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()]
        ).execute().fetchall()

    The database schema looks like:

        link_table = Table('links', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('url', Unicode(250), index=True, unique=True),
            Column('registration_date', DateTime),
            Column('donations_in', Integer),
            Column('active', Boolean),
        )

        request_table = Table('requests', metadata,
            Column('id', Integer, primary_key=True, autoincrement=True),
            Column('recipient', Integer, ForeignKey('links.id')),
            Column('donator', Integer, ForeignKey('links.id')),
            Column('date', DateTime),
        )

    There are several links (donators) in request_table pointing to one link in the link_table. I want to get links from link_table which have not yet been "requested", but the merged query does not work. Is what I'm trying to do actually possible? If so, how would you do it? Thank you very much in advance!
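
    A hedged sketch of one way to express "links not yet requested by this donator" in a single query: replace the row-pairing conditions with a correlated NOT EXISTS, which is what the original nested IN was emulating. Names follow the question's schema; the exact select()/exists() spelling depends on the SQLAlchemy version.

        from sqlalchemy import and_, exists, not_, select

        already_requested = exists().where(and_(
            request_table.c.donator == donator.id,
            request_table.c.recipient == link_table.c.id,
        ))

        link_list = select(
            columns=[link_table.c.rating, link_table.c.url, link_table.c.donations_in],
            whereclause=and_(
                link_table.c.active == True,
                link_table.c.id != donator.id,
                not_(already_requested),
            ),
            limit=20,
            order_by=[link_table.c.rating.desc()],
        ).execute().fetchall()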

    Read the article

  • Best way to get back to using the power of lxml after having to use a regex to find something in an HTML document

    - by PyNEwbie
    I am trying to rip some text out of a large number of HTML documents (hundreds of thousands of them). The documents are really forms, but they are prepared by a very large group of different organizations, so there is significant variation in how they create the documents. For example, the documents are divided into chapters. I might want to extract the contents of Chapter 5 from every document so I can analyze the content of that chapter. Initially I thought this would be easy, but it turns out that the authors might use a set of non-nested tables throughout the document to hold the content, so that Chapter n could be displayed using td tags inside a table. Or they might use other elements such as p tags, h tags, div tags or any other block-level element.

    After trying repeatedly to use lxml to identify the beginning and end of each chapter, I have determined that it is a lot cleaner to use a regular expression, because in every case, no matter what the enclosing HTML element is, the chapter label is always of the form >Chapter #. It is a little more complicated in that there might be some whitespace or a non-breaking space represented in different ways (&nbsp; or &#160; or just spaces). Nonetheless, it was trivial to write a regular expression to identify the beginning of each section. (The beginning of one section is the end of the previous section.)

    But now I want to use lxml to get the text out. My thought is that I have really no choice but to walk along my string to find the close tag for the element that encloses the text I am using to find the relevant section. Here is one example where the element holding the chapter name is a div:

        <div style="DISPLAY: block; MARGIN-LEFT: 0pt; TEXT-INDENT: 0pt; MARGIN-RIGHT: 0pt" align="left"><font style="DISPLAY: inline; FONT-WEIGHT: bold; FONT-SIZE: 10pt; FONT-FAMILY: Times New Roman">Chapter 1.&#160;&#160;&#160;Our Beginnings.</font></div>

    So I am imagining that I would begin at the location where I found the match for Chapter 1 and set up a regular expression to find the next </div|</td|</p|</h1 ... At this point I have identified the type of element holding my chapter heading, and I can use the same logic to find all of the text that is within that element, that is, set up a regular expression to help me mark the span from >Chapter 1.&#160;&#160;&#160;Our Beginnings.<. So I have identified where my Chapter 1 begins, and I can do the same for Chapter 2 (which is where Chapter 1 ends).

    Now I am imagining that I will snip the document, beginning at the opening of the element that indicates where Chapter 1 begins and ending just before the opening of the element that indicates where Chapter 2 begins. The string that I have identified will then be fed to lxml to use its power to get the content.

    I am going to all of this trouble because I have read over and over: never use a regular expression to extract content from HTML documents, and I have not hit on a way to be as accurate with lxml at identifying the starting and ending locations of the text I want to extract. For example, I can never be certain that the subtitle of Chapter 1 is "Our Beginnings"; it could be "Our Red Canary". Let me say that I spent two solid days trying with lxml to be confident that I had the beginning and ending elements, and I could only be accurate less than 60% of the time, but a very short regular expression has given me better than 95% success.

    I have a tendency to make things more complicated than necessary, so I am wondering if anyone has seen or solved similar problems, and whether they have an approach (not the details, mind you) that they would like to offer.
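
    For what it's worth, a hedged sketch of the hybrid approach described above (a regex to locate the headings in the raw HTML, lxml to pull the text out of each slice); the pattern and names are illustrative only.

        import re
        import lxml.html

        chapter_re = re.compile(r'>\s*Chapter\s+\d+', re.IGNORECASE)

        def chapter_texts(raw_html):
            starts = [m.start() for m in chapter_re.finditer(raw_html)]
            bounds = zip(starts, starts[1:] + [len(raw_html)])
            for begin, end in bounds:
                fragment = raw_html[begin:end]
                # lxml.html.fromstring is tolerant of unbalanced fragments,
                # so text_content() can do the actual extraction.
                yield lxml.html.fromstring(fragment).text_content()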

    Read the article

  • Help converting code using httplib2 to use urllib2

    - by ThinkCode
    What am I trying to do? Visit a site, retrieve a cookie, and visit the next page by sending in the cookie info. It all works, but httplib2 is giving me one too many problems with a SOCKS proxy on one site.

        http = httplib2.Http()
        main_url = 'http://mywebsite.com/get.aspx?id='+ id +'&rows=25'
        response, content = http.request(main_url, 'GET', headers=headers)
        main_cookie = response['set-cookie']
        referer = 'http://google.com'
        headers = {'Content-type': 'application/x-www-form-urlencoded',
                   'Cookie': main_cookie,
                   'User-Agent' : USER_AGENT,
                   'Referer' : referer}

    How do I do the exact same thing using urllib2 (cookie retrieval, passing it to the next page on the same site)? Thank you.
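
    A hedged sketch of the urllib2 equivalent: a cookielib.CookieJar wired into the opener stores whatever Set-Cookie the first response sends and replays it on later requests to the same site. The second URL is hypothetical; id and USER_AGENT are the question's own variables.

        import cookielib
        import urllib2

        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        main_url = 'http://mywebsite.com/get.aspx?id=' + id + '&rows=25'
        content = opener.open(main_url).read()      # cookies land in `jar`

        next_request = urllib2.Request(
            'http://mywebsite.com/nextpage.aspx',   # hypothetical next page
            headers={'User-Agent': USER_AGENT, 'Referer': 'http://google.com'})
        next_content = opener.open(next_request).read()  # cookie re-sent automatically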

    Read the article

  • pylab.savefig() and pylab.show() image difference

    - by Jack1990
    I'm making a script to automatically create plots from .xvg files, but there's a problem when I try to use pylab's savefig() method. Using pylab.show() and saving from the plot window, everything's fine.

    (Figure: output when using pylab.show())
    (Figure: output when using pylab.savefig())

        def producePlot(timestep, energy_values, type_line='r', jump=1, finish=100):
            fc = sp.interp1d(timestep[::jump], energy_values[::jump], kind='cubic')
            xnew = numpy.linspace(0, finish, finish*2)
            pylab.plot(xnew, fc(xnew), type_line)
            pylab.xlabel('Time in ps ')
            pylab.ylabel('kJ/mol')
            pylab.xlim(xmin=0, xmax=finish)

        def produceSimplePlot(timestep, energy_values, type_line='r', jump=1, finish=100):
            pylab.plot(timestep, energy_values, type_line)
            pylab.xlabel('Time in ps ')
            pylab.ylabel('kJ/mol')
            pylab.xlim(xmin=0, xmax=finish)

        def linearRegression(timestep, energy_values, type_line='g'):  # , jump=1, finish=100):
            from scipy import stats
            import numpy
            timestep = numpy.asarray(timestep)
            slope, intercept, r_value, p_value, std_err = stats.linregress(timestep, energy_values)
            line = slope*timestep + intercept
            pylab.plot(timestep, line, type_line)

        def plottingTime(Title, file_name, timestep, energy_values, loc, jump, finish):
            pylab.title(Title)
            producePlot(timestep, energy_values, 'b', jump, finish)
            linearRegression(timestep, energy_values)
            import numpy
            Average = numpy.average(energy_values)
            #print Average
            pylab.legend(("Average = %.2f" %(Average), 'Linear Reg'), loc)
            #pylab.show()
            pylab.savefig('%s.jpg' %file_name[:-4], bbox_inches=None, pad_inches=0)

        #if __name__ == '__main__':
        #    plottingTime(Title, timestep1, energy_values, jump=10, finish=4800)

        def specialCase(Title, file_name, timestep, energy_values, loc, jump, finish):
            #print 'Working here ...?'
            pylab.title(Title)
            producePlot(timestep, energy_values, 'b', jump, finish)
            import numpy
            from pylab import *
            Average = numpy.average(energy_values)
            #print Average
            pylab.legend(("Average = %.2g" %(Average), Title), loc)
            locs, labels = yticks()
            yticks(locs, map(lambda x: "%.3g" % x, locs))
            #pylab.show()
            pylab.savefig('%s.jpg' %file_name[:-4], bbox_inches=None, pad_inches=0)

    Thanks in advance, John
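
    Not a definitive diagnosis, but a minimal sketch of the usual remedy when savefig() output differs from the interactive window: fix the figure size and DPI explicitly and save before calling show(), rather than relying on whatever the GUI backend ends up with (the data here is dummy data).

        import numpy
        import pylab

        x = numpy.linspace(0, 100, 200)
        pylab.figure(figsize=(8, 6), dpi=100)      # explicit size and DPI
        pylab.plot(x, numpy.sin(x), 'b')
        pylab.xlabel('Time in ps')
        pylab.ylabel('kJ/mol')
        pylab.savefig('energy_plot.png', bbox_inches='tight')
        pylab.show()                               # only after saving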

    Read the article

  • The dictionary needs to add every word in SpellingMistakes and the line number, but it only adds the l…

    - by Will Boomsight
        # modules
        import sys
        import string

        # Importing and reading the files from the command prompt
        Document = open(sys.argv[1], "r")
        Document = open('Wc.txt', 'r')
        Document = Document.read().lower()

        Dictionary = open(sys.argv[2], "r")
        Dictionary = open('Dict.txt', 'r')
        Dictionary = Dictionary.read()

        def Format(Infile):
            for ch in string.punctuation:
                Infile = Infile.replace(ch, "")
            for no in string.digits:
                Infile = Infile.replace(no, " ")
            Infile = Infile.lower()
            return(Infile)

        def Corrections(Infile, DictWords):
            Misspelled = set([])
            Infile = Infile.split()
            DictWords = DictWords.splitlines()
            for word in Infile:
                if word not in DictWords:
                    Misspelled.add(word)
            Misspelled = sorted(Misspelled)
            return (Misspelled)

        def Linecheck(Infile, ErrorWords):
            Infile = Infile.split()
            lineno = 0
            Noset = list()
            for line in Infile:
                lineno += 1
                line = line.split()
                for word in line:
                    if word == ErrorWords:
                        Noset.append(lineno)
            sorted(Noset)
            return(Noset)

        def addkey(error, linenum):
            Nodict = {}
            for line in linenum:
                Nodict.setdefault(error, []).append(linenum)
            return Nodict

        FormatDoc = Format(Document)
        SpellingMistakes = Corrections(FormatDoc, Dictionary)
        alp = str(SpellingMistakes)

        for word in SpellingMistakes:
            nSet = str(Linecheck(FormatDoc, word))
            nSet = nSet.split()
            linelist = addkey(word, nSet)
            print(linelist)
            #
            # for word in Nodict.keys():
            #     Nodict[word].append(line)

    This prints each incorrect word on a new line.
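
    A hedged sketch of one way to get a single {word: [line numbers]} mapping: build one dictionary up front and append to it while scanning the document line by line, instead of creating a fresh dict inside the helper for every word. The helper name is made up; FormatDoc and SpellingMistakes are the question's own variables.

        def line_numbers(text, misspelled):
            numbers = {word: [] for word in misspelled}
            for lineno, line in enumerate(text.splitlines(), start=1):
                for word in line.split():
                    if word in numbers:
                        numbers[word].append(lineno)
            return numbers

        linelist = line_numbers(FormatDoc, SpellingMistakes)
        print(linelist)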

    Read the article

  • What's an appropriate HTTP status code for a REST API service to return for a validation failure?

    - by michaeljoseph
    I'm currently returning 401 Unauthorized whenever I encounter a validation failure in my Django/Piston based REST API application. Having had a look at the HTTP Status Code Registry, I'm not convinced that this is an appropriate code for a validation failure. What do y'all recommend?

        400 Bad Request
        401 Unauthorized
        403 Forbidden
        405 Method Not Allowed
        406 Not Acceptable
        412 Precondition Failed
        417 Expectation Failed
        422 Unprocessable Entity
        424 Failed Dependency

    Update: "Validation failure" above means an application-level data validation failure, i.e. an incorrectly specified datetime, a bogus email address, etc.
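
    Purely as a hedged illustration of the most common convention (400, with 422 as the main alternative), here the validation errors are returned in the response body from a plain Django view rather than anything Piston-specific:

        import json
        from django.http import HttpResponse

        def validation_failure(errors):
            # `errors` is assumed to be JSON-serializable,
            # e.g. {'email': 'not a valid address'}
            return HttpResponse(json.dumps({'errors': errors}),
                                status=400,
                                content_type='application/json')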

    Read the article

  • Sending data from one Protocol to another Protocol in Twisted?

    - by veb
    Hi! One of my protocols is connected to a server, and I'd like to send its output to the other protocol. I need to access the 'msg' method in ClassA from ClassB, but I keep getting:

        exceptions.AttributeError: 'NoneType' object has no attribute 'write'

    Actual code: http://pastebin.com/MQPhduSY

    Any ideas please? :-)
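
    Without the pastebin to hand, here is a hedged sketch of the usual pattern: let each factory keep a reference to its connected protocol instance and guard the hand-off, since a 'NoneType' object has no attribute 'write' error typically means data is forwarded before the other side's transport exists. All class and attribute names below are assumptions, not the poster's code.

        from twisted.internet import protocol

        class ClassA(protocol.Protocol):
            def connectionMade(self):
                self.factory.instance = self        # remember the live instance

            def msg(self, data):
                self.transport.write(data)

        class ClassB(protocol.Protocol):
            def dataReceived(self, data):
                peer = self.factory.peer_factory.instance
                if peer is not None:                # only forward once A is connected
                    peer.msg(data)

        class FactoryA(protocol.ClientFactory):
            protocol = ClassA
            instance = None

        class FactoryB(protocol.ServerFactory):
            protocol = ClassB
            def __init__(self, peer_factory):
                self.peer_factory = peer_factory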

    Read the article

  • GAE error "Error: Server Error", how to debug it?

    - by zjm1126
    When I upload my project to Google App Engine, it shows this:

        Error: Server Error
        The server encountered an error and could not complete your request.
        If the problem persists, please report your problem and mention this
        error message and the query that caused it.

    Why? How can I debug this error? Thanks.
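
    Two hedged starting points: the traceback behind a generic "Server Error" page shows up under Logs in the App Engine admin console, and you can make sure it gets there by logging exceptions from your handlers. The handler below is a sketch using the old webapp framework, not the poster's code.

        import logging

        from google.appengine.ext import webapp

        class MainPage(webapp.RequestHandler):
            def get(self):
                try:
                    self.response.out.write('Hello')
                except Exception:
                    logging.exception('request failed')   # full traceback in the logs
                    raise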

    Read the article

  • Convert a sequence of sequences to a dictionary and vice-versa

    - by louis
    One way to manually persist a dictionary to a database is to flatten it into a sequence of sequences and pass the sequence as an argument to cursor.executemany(). The opposite is also useful, i.e. reading rows from a database and turning them into dictionaries for later use. What's the best way to go from myseq to mydict and from mydict to myseq?

        >>> myseq = ((0,1,2,3), (4,5,6,7), (8,9,10,11))
        >>> mydict = {0: (1, 2, 3), 8: (9, 10, 11), 4: (5, 6, 7)}
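
    A hedged sketch of both directions, keyed on the first element of each row (the dict's iteration order differing from the original sequence is expected):

        myseq = ((0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11))

        # sequence of sequences -> dictionary
        mydict = dict((row[0], tuple(row[1:])) for row in myseq)

        # dictionary -> sequence of sequences (ready for cursor.executemany())
        myseq_again = tuple((key,) + values for key, values in mydict.items())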

    Read the article

  • Regex for Matching First Alphanumeric Character skipping (The |An? )

    - by TheLizardKing
    I have a list of artists, albums and tracks that I want to sort using the first letter of their respective names. The issue arises when I want to ignore "The ", "A ", "An " and various other non-alphanumeric characters (talking to you, "Weird Al" Yankovic and [dialog]). Django has a nice start, '^(An?|The) +', but I want to ignore those and a few others of my choice. I am doing this in Django, using a MySQL db with utf8_bin collation.
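
    A hedged sketch of a Python-side sort key that skips leading articles and non-alphanumerics; the article list and the sample names are only examples.

        import re

        SKIP = re.compile(r'^(?:(?:the|an?)\s+|[^a-z0-9]+)+', re.IGNORECASE)

        def sort_key(name):
            return SKIP.sub('', name).lower()

        names = ['The Beatles', '"Weird Al" Yankovic', '[dialog]', 'A Tribe Called Quest']
        print(sorted(names, key=sort_key))
        # ['The Beatles', '[dialog]', 'A Tribe Called Quest', '"Weird Al" Yankovic']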

    Read the article

  • Inexpensive ways to add seek to a file-like object

    - by becomingGuru
    PdfFileReader reads the content of a PDF file to create an object. I am fetching the PDF from a CDN via urllib.urlopen(); this provides me with a file-like object which has no seek. PdfFileReader, however, uses seek. What is the simple way to create a PdfFileReader object from a PDF downloaded via a URL? What can I do to avoid writing it to disk and reading it again via file()? Thanks in advance.
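
    A hedged sketch of the usual trick: read the response into an in-memory buffer, which does support seek(), and hand that to PdfFileReader (pyPdf-era names assumed; the URL is hypothetical).

        import urllib
        from StringIO import StringIO

        from pyPdf import PdfFileReader

        data = urllib.urlopen('http://cdn.example.com/some.pdf').read()
        reader = PdfFileReader(StringIO(data))   # StringIO provides seek()
        print(reader.getNumPages())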

    Read the article

  • How to format the date when I download data from Google App Engine

    - by zjm1126
    I use the remote_api to download data from Google App Engine:

        appcfg.py download_data --config_file=helloworld/GreetingLoad.py --filename=a.csv --kind=Greeting helloworld

    The setting is:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date', str, None),
                                             ])

        exporters = [AlbumExporter]

    In the a.csv I download, the date is not readable (screenshot omitted), while the date shown in the appspot.com admin console is the full value (screenshot omitted). So how do I get the full date? Thanks.

    I changed it to this:

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date', lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date(), None),
                                             ])

        exporters = [AlbumExporter]

    but it fails with an error (screenshot omitted).
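
    A hedged guess at the fix: on export, the third element of each tuple converts the datastore value to a string, and the Greeting 'date' property arrives as a datetime.datetime object, so it should be formatted with strftime rather than parsed with strptime. A sketch, with an assumed output format:

        import datetime

        from google.appengine.tools import bulkloader

        class AlbumExporter(bulkloader.Exporter):
            def __init__(self):
                bulkloader.Exporter.__init__(self, 'Greeting',
                                             [('author', str, None),
                                              ('content', str, None),
                                              ('date',
                                               lambda d: d.strftime('%Y-%m-%d %H:%M:%S'),
                                               None),
                                             ])

        exporters = [AlbumExporter]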

    Read the article

  • Geocoding non-addresses: Geopy

    - by Phil Donovan
    I am using geopy to geocode alcohol outlets in NZ. The problem I have is that some places do not have street addresses but are places in Google Maps. For example, plugging

        Furneaux Lodge, Endeavour Inlet, Queen Charlotte Sound, Marlborough 7250

    into Google Maps via the browser GUI finds the location (screenshot omitted). However, using that in geopy I get a GQueryError saying this geographic location does not exist. Here is the code for geocoding:

        def GeoCode(address):
            g = geocoders.Google(domain="maps.google.co.nz")
            geoloc = g.geocode(address, exactly_one=False)
            place, (lat, lng) = geoloc[0]
            GeoOut = []
            GeoOut.extend([place, lat, lng])
            return GeoOut

        GeoCode("Furneaux Lodge, Endeavour Inlet, Queen Charlotte Sound, Marlboroguh 7250")

    Meanwhile, I notice that "Eiffel Tower" works fine. Is there a way to solve this, and can someone explain the difference between the Eiffel Tower and Furneaux Lodge within Google 'locations'?
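
    Not an authoritative answer, but a hedged sketch of a workaround: catch the GQueryError and retry with a simpler query (note that the call above also spells "Marlborough" as "Marlboroguh", which by itself can be enough to make the geocoder fail where the browser's fuzzier search succeeds). The import path follows old geopy's module layout and is an assumption.

        from geopy import geocoders
        from geopy.geocoders.google import GQueryError

        def GeoCode(address):
            g = geocoders.Google(domain="maps.google.co.nz")
            try:
                geoloc = g.geocode(address, exactly_one=False)
            except GQueryError:
                # Fall back to the most recognisable part, e.g. "Furneaux Lodge".
                geoloc = g.geocode(address.split(',')[0], exactly_one=False)
            place, (lat, lng) = list(geoloc)[0]
            return [place, lat, lng]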

    Read the article
