Search Results

Search found 13816 results on 553 pages for 'python larry'.


  • Programmatically sync the db in Django

    - by Attila Oláh
    I'm trying to sync my db from a view, something like this:

        from django import http
        from django.core import management

        def syncdb(request):
            management.call_command('syncdb')
            return http.HttpResponse('Database synced.')

    The issue is that it blocks the dev server by asking for user input from the terminal. How can I pass it the '--noinput' option to prevent it asking me anything? I have other ways of marking users as super-users, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging on to the server via ssh. Any help is appreciated.
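    Command-line options to call_command are forwarded as keyword arguments, and '--noinput' corresponds to interactive=False; a minimal sketch (untested against any particular Django version):

        from django import http
        from django.core import management

        def syncdb(request):
            # interactive=False is the keyword form of --noinput
            management.call_command('syncdb', interactive=False)
            return http.HttpResponse('Database synced.')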


  • Tkinter GUI to read in a csv file and generate buttons based on the entries in the first row

    - by Thomas Jensen
    I need to write a GUI in Tkinter that can choose a csv file, read it in, and generate a sequence of buttons based on the names in the first row of the csv file (later, the data in the csv file should be used to run a number of simulations). So far I have managed to write a Tkinter GUI that will read the csv file, but I am stumped as to how I should proceed:

        from Tkinter import *
        import tkFileDialog
        import csv

        class Application(Frame):
            def __init__(self, master=None):
                Frame.__init__(self, master)
                self.grid()
                self.createWidgets()

            def createWidgets(self):
                top = self.winfo_toplevel()
                self.menuBar = Menu(top)
                top["menu"] = self.menuBar
                self.subMenu = Menu(self.menuBar)
                self.menuBar.add_cascade(label="File", menu=self.subMenu)
                self.subMenu.add_command(label="Read Data", command=self.readCSV)

            def readCSV(self):
                self.filename = tkFileDialog.askopenfilename()
                f = open(self.filename, "rb")
                read = csv.reader(f, delimiter=",")

        app = Application()
        app.master.title("test")
        app.mainloop()

    Any help is greatly appreciated!
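    A possible next step (an untested sketch): read only the header row, then create one Button per column name, laid out with grid:

        def readCSV(self):
            self.filename = tkFileDialog.askopenfilename()
            f = open(self.filename, "rb")
            header = next(csv.reader(f, delimiter=","))  # first row only
            f.close()
            # one button per column name in the first row
            for column, name in enumerate(header):
                Button(self, text=name).grid(row=1, column=column)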


  • Rearranging a sequence

    - by sarah
    I'm having trouble rearranging sequences so that the letter counts of the given original sequence are preserved in the randomly generated sequences. For example: if I have the string 'AAAC', I need that string rearranged randomly so that the counts of A's and C's stay the same.
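    random.shuffle permutes a sequence in place without adding or removing elements, so the letter counts are preserved automatically (a minimal sketch):

        import random

        def rearrange(seq):
            chars = list(seq)        # 'AAAC' -> ['A', 'A', 'A', 'C']
            random.shuffle(chars)    # permutes in place; counts unchanged
            return ''.join(chars)    # e.g. 'ACAA'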


  • SQLAlchemy declarative syntax with autoload in Pylons

    - by Juliusz Gonera
    I would like to use autoload to use an existing database. I know how to do it without declarative syntax (model/__init__.py):

        def init_model(engine):
            """Call me before using any of the tables or classes in the model"""
            t_events = Table('events', Base.metadata, schema='events',
                             autoload=True, autoload_with=engine)
            orm.mapper(Event, t_events)
            Session.configure(bind=engine)

        class Event(object):
            pass

    This works fine, but I would like to use declarative syntax:

        class Event(Base):
            __tablename__ = 'events'
            __table_args__ = {'schema': 'events', 'autoload': True}

    Unfortunately, this way I get:

        sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's MetaData.
        Pass an engine to the Table via autoload_with=<someengine>, or associate the
        MetaData with an engine via metadata.bind=<someengine>

    The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's available in init_model()). I tried adding meta.Base.metadata.bind(engine) to environment.py but it doesn't work. Has anyone found an elegant solution?
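    One thing worth noting: metadata.bind is an attribute, not a callable, so the environment.py attempt likely fails for that reason alone. A hedged sketch of the usual workaround: bind the metadata inside init_model(), once the engine exists, and only declare (or import) the reflected classes after that point:

        def init_model(engine):
            """Call me before using any of the tables or classes in the model."""
            Session.configure(bind=engine)
            Base.metadata.bind = engine   # assignment, not a call

            # classes that use autoload must be defined after the bind;
            # defining one here and exporting it is one (inelegant) option
            global Event
            class Event(Base):
                __tablename__ = 'events'
                __table_args__ = {'schema': 'events', 'autoload': True}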


  • PyParsing: Not all tokens passed to setParseAction()

    - by Rosarch
    I'm parsing sentences like "CS 2110 or INFO 3300". I would like to output a format like:

        [[("CS", 2110)], [("INFO", 3300)]]

    To do this, I thought I could use setParseAction(). However, the print statements in statementParse() suggest that only the last tokens are actually passed:

        >>> statement.parseString("CS 2110 or INFO 3300")
        Match [{Suppress:("or") Re:('[A-Z]{2,}') Re:('[0-9]{4}')}] at loc 7(1,8)
        string CS 2110 or INFO 3300
        loc: 7
        tokens: ['INFO', 3300]
        Matched [{Suppress:("or") Re:('[A-Z]{2,}') Re:('[0-9]{4}')}] -> ['INFO', 3300]
        (['CS', 2110, 'INFO', 3300], {'Course': [(2110, 1), (3300, 3)], 'DeptCode': [('CS', 0), ('INFO', 2)]})

    I expected all the tokens to be passed, but it's only ['INFO', 3300]. Am I doing something wrong? Or is there another way that I can produce the desired output? Here is the pyparsing code:

        from pyparsing import *

        def statementParse(str, location, tokens):
            print "string %s" % str
            print "loc: %s " % location
            print "tokens: %s" % tokens

        DEPT_CODE = Regex(r'[A-Z]{2,}').setResultsName("DeptCode")
        COURSE_NUMBER = Regex(r'[0-9]{4}').setResultsName("CourseNumber")
        OR_CONJ = Suppress("or")
        COURSE_NUMBER.setParseAction(lambda s, l, toks: int(toks[0]))
        course = DEPT_CODE + COURSE_NUMBER.setResultsName("Course")
        statement = course + Optional(OR_CONJ + course).setParseAction(statementParse).setDebug()
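    A parse action only receives the tokens matched by the expression it is attached to; here it hangs off the Optional(...) clause, so it only ever sees the second course. A hedged sketch: Group each course and attach the action to the whole statement instead:

        course = Group(DEPT_CODE + COURSE_NUMBER)
        statement = (course + ZeroOrMore(OR_CONJ + course)).setParseAction(statementParse)

    With Group, the action should receive something like [['CS', 2110], ['INFO', 3300]]; turning the inner lists into tuples would be a final conversion inside the parse action.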


  • matplotlib.pyplot, preserve aspect ratio of the plot

    - by Headcrab
    Assuming we have polygon coordinates polygon = [(x1, y1), (x2, y2), ...], the following code displays the polygon:

        import matplotlib.pyplot as plt
        plt.fill(*zip(*polygon))
        plt.show()

    By default it tries to adjust the aspect ratio so that the polygon (or whatever other diagram) fits inside the window, and it automatically changes the ratio so the figure still fits after resizing. Which is great in many cases, except when you are trying to estimate visually whether the image is distorted. How do I fix the aspect ratio to be strictly 1:1? (Not sure if "aspect ratio" is the right term here, so in case it is not: I need both the X and Y axes to have a 1:1 scale, so that (0, 1) on both X and Y takes exactly the same amount of screen space. And I need to keep it 1:1 no matter how I resize the window.)
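    Axes.set_aspect should cover this; with adjustable='box' the axes box itself is reshaped so that one data unit measures the same on both axes, and the ratio survives window resizing (a minimal sketch):

        import matplotlib.pyplot as plt

        plt.fill(*zip(*polygon))
        plt.gca().set_aspect('equal', adjustable='box')  # 1:1 data units on x and y
        plt.show()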


  • Convert object to DateRange

    - by user655832
    I'm querying an underlying PostgreSQL database using Pandas 0.8. Pandas is returning the DataFrame properly, but the underlying timestamp column in my database is being returned as a generic "object" type in Pandas. As I would eventually like to apply seasonal normalization to my data, I am curious how to convert this generic "object" column to something that is appropriate for analysis. Here is my current code to retrieve the data:

        # get records from db example
        import pandas.io.sql as psql
        import psycopg2

        # define query to get all subs created this year
        QRY = """
        select
            i i,
            i * random() f,
            case when random() > 0.5 then true else false end t,
            (current_date - (i*random())::int)::timestamp with time zone tsz
        from generate_series(1,1000) as s(i)
        order by 4
        ;
        """

        CONN_STRING = "host='localhost' port=5432 dbname='postgres' user='postgres'"

        # connect to db
        conn = psycopg2.connect(CONN_STRING)

        # get some data set index on relid column
        df = psql.frame_query(QRY, con=conn)
        print "Row count retrieved: %i" % (len(df),)

    Thanks for any help you can render. M
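    One approach that likely works here (hedged: to_datetime and datetime-index behavior varies somewhat across early pandas versions) is converting the column explicitly and making it the index:

        import pandas as pd

        df['tsz'] = pd.to_datetime(df['tsz'])   # object dtype -> datetime64
        df = df.set_index('tsz')                # time-based index for resampling etc.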


  • Sqlalchemy complex in_ clause

    - by lostlogic
    I'm trying to find a way to cause sqlalchemy to generate SQL of the following form:

        select * from t where (a, b) in ((a1, b1), (a2, b2));

    Is this possible? If not, any suggestions on a way to emulate it? Thanks kindly!
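    SQLAlchemy has a tuple_() construct for exactly this composite-IN comparison; note the underlying database must support row-value syntax (PostgreSQL and MySQL do, SQLite does not). A minimal sketch, assuming a mapped class T with columns a and b:

        from sqlalchemy import tuple_

        query = session.query(T).filter(
            tuple_(T.a, T.b).in_([(a1, b1), (a2, b2)])
        )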


  • how to fill a part of a circle using PIL?

    - by valya
    Hello. I'm trying to use PIL for a task, but the result is very dirty. What I'm doing is trying to fill a part of a piece of a circle, as you can see in the image. Here is my code:

        def gen_image(values):
            side = 568
            margin = 47
            image = Image.open(settings.MEDIA_ROOT + "/i/promo_circle.jpg")
            draw = ImageDraw.Draw(image)
            draw.ellipse((margin, margin, side-margin, side-margin), outline="white")
            center = side/2
            r = side/2 - margin
            cnt = len(values)
            for n in xrange(cnt):
                angle = n*(360.0/cnt) - 90
                next_angle = (n+1)*(360.0/cnt) - 90
                nr = (r * values[n] / 5)
                max_r = r
                min_r = nr
                for cr in xrange(min_r*10, max_r*10):
                    cr = cr/10.0
                    draw.arc((side/2-cr, side/2-cr, side/2+cr, side/2+cr),
                             angle, next_angle, fill="white")
            return image
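    Drawing many thin concentric arcs leaves gaps, which is probably the "dirty" result. ImageDraw's pieslice() fills a whole wedge in one call; a hedged sketch of filling the ring segment between min_r and max_r by painting the outer wedge and then painting the inner part back over (the inner fill colour here is an assumption; match it to your image's background):

        # fill the whole wedge out to radius max_r ...
        draw.pieslice((center-max_r, center-max_r, center+max_r, center+max_r),
                      angle, next_angle, fill="white")
        # ... then paint the inner wedge back over, leaving a clean ring segment
        draw.pieslice((center-min_r, center-min_r, center+min_r, center+min_r),
                      angle, next_angle, fill="black")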


  • How to make Universal Feed Parser only parse feeds?

    - by piquadrat
    I'm trying to get content from external feeds on my Django web site with Universal Feed Parser. I want to have some user error handling, e.g. if the user supplies a URL that is not a feed. When I tried out how feedparser responds to faulty input, I was surprised to see that feedparser does not throw any exceptions at all. E.g. on HTML content, it tries to parse some information from the HTML code, and on non-existent domains, it returns a mostly empty dictionary:

        {'bozo': 1,
         'bozo_exception': URLError(gaierror(-2, 'Name or service not known'),),
         'encoding': 'utf-8',
         'entries': [],
         'feed': {},
         'version': None}

    Other kinds of faulty input manifest themselves in the status_code or the namespaces values in the returned dictionary. So, what's the best approach to sane error checking without resorting to an endless cascade of if .. elif .. elif ...?
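    One hedged heuristic, based only on the fields feedparser itself documents: treat the result as "not a feed" when version is empty and no entries were recovered, and surface bozo_exception for transport-level failures:

        import feedparser

        def fetch_feed(url):
            d = feedparser.parse(url)
            # hard failure (network error, unparseable input) with nothing recovered
            if d.bozo and not d.entries:
                raise ValueError("Could not fetch/parse feed: %s" % d.get('bozo_exception'))
            # parsed, but not recognized as any feed format
            if not d.get('version') and not d.entries:
                raise ValueError("URL does not appear to be a feed: %s" % url)
            return d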


  • Auto filling polymorphic table on save or on delete in django

    - by Mo J. Mughrabi
    Hi, I'm working on a project in which I made an app "core" that will contain some of the models reused across my projects. Most of those are polymorphic models (generic content types) and will be linked to different models. In the example below I am trying to create an audit model that will be linked to the several models which may require auditing. This is polls/models.py:

        from django.db import models
        from django.contrib.auth.models import User
        from core.models import *
        from django.contrib.contenttypes import generic

        class Poll(models.Model):
            ## TODO: Document
            question = models.CharField(max_length=300)
            question_slug = models.SlugField(editable=False)
            start_poll_at = models.DateTimeField(null=True)
            end_poll_at = models.DateTimeField(null=True)
            is_active = models.BooleanField(default=True)
            audit_obj = generic.GenericRelation(Audit)

            def __unicode__(self):
                return self.question

        class Choice(models.Model):
            ## TODO: Document
            choice = models.CharField(max_length=200)
            poll = models.ForeignKey(Poll)
            audit_obj = generic.GenericRelation(Audit)

        class Vote(models.Model):
            ## TODO: Document
            choice = models.ForeignKey(Choice)
            Ip_Address = models.IPAddressField(editable=False)
            vote_at = models.DateTimeField("Vote at", editable=False)

    Here is core/models.py:

        from django.db import models
        from django.contrib.auth.models import User
        from django.contrib.contenttypes.models import ContentType
        from django.contrib.contenttypes import generic

        class Audit(models.Model):
            ## TODO: Document
            # Polymorphic model using generic relation through Django content types
            created_at = models.DateTimeField("Created at", auto_now_add=True)
            created_by = models.ForeignKey(User, db_column="created_by",
                                           related_name="%(app_label)s_%(class)s_y+")
            updated_at = models.DateTimeField("Updated at", auto_now=True)
            updated_by = models.ForeignKey(User, db_column="updated_by", null=True, blank=True,
                                           related_name="%(app_label)s_%(class)s_y+")
            content_type = models.ForeignKey(ContentType)
            object_id = models.PositiveIntegerField(unique=True)
            content_object = generic.GenericForeignKey('content_type', 'object_id')

    And here is polls/admin.py:

        from django.core.context_processors import request
        from polls.models import Poll, Choice
        from core.models import *
        from django.contrib import admin

        class ChoiceInline(admin.StackedInline):
            model = Choice
            extra = 3

        class PollAdmin(admin.ModelAdmin):
            inlines = [ChoiceInline]

        admin.site.register(Poll, PollAdmin)

    I'm quite new to django. What I'm trying to do here: insert a record into Audit when a record is inserted into polls, and then update that same record when the record in polls is updated.
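    Django's model signals are the usual hook for this kind of bookkeeping. A hedged sketch (note the current request user is not available inside a signal handler, so created_by/updated_by would have to be supplied from the view or via middleware; some_user below is a hypothetical stand-in):

        from django.db.models.signals import post_save
        from django.dispatch import receiver
        from polls.models import Poll
        from core.models import Audit

        @receiver(post_save, sender=Poll)
        def audit_poll(sender, instance, created, **kwargs):
            if created:
                # user attribution must come from elsewhere (view/middleware)
                Audit.objects.create(content_object=instance, created_by=some_user)
            else:
                # auto_now on updated_at refreshes the timestamp on save
                instance.audit_obj.update(updated_by=some_user)

    The same handler can be connected for Choice, and post_delete can be used analogously if audit rows should be cleaned up on delete.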


  • Getting unpredictable data into a tabular format

    - by Acorn
    The situation: each page I scrape has <input> elements with a title= and a value=. I don't know what is going to be on the page. I want to have all my collected data in a single table at the end, with a column for each title. So basically, I need each row of data to line up with all the others, and if a row doesn't have a certain element, then it should be blank (but there must be something there to keep the alignment). E.g. the first page has:

        {animal: cat, colour: blue, fruit: lemon, day: monday}

    The second page has:

        {animal: fish, colour: green, day: saturday}

    The third page has:

        {animal: dog, number: 10, colour: yellow, fruit: mango, day: tuesday}

    Then my resulting table should be:

        animal | number | colour | fruit | day
        cat    | none   | blue   | lemon | monday
        fish   | none   | green  | none  | saturday
        dog    | 10     | yellow | mango | tuesday

    Although it would be good to keep the order of the title/value pairs, which I know dictionaries won't do. So basically, I need to generate columns from all the titles (kept in order but somehow merged together). What would be the best way of going about this without knowing all the possible titles and explicitly specifying an order for the values to be put in?
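    Since plain dicts (before Python 3.7) don't preserve insertion order, one hedged approach is to record each page's data as an ordered list of (title, value) pairs, merge the title lists in first-seen order, and pad missing cells:

        rows = [
            [('animal', 'cat'), ('colour', 'blue'), ('fruit', 'lemon'), ('day', 'monday')],
            [('animal', 'fish'), ('colour', 'green'), ('day', 'saturday')],
        ]

        # merge column names, keeping the order in which titles first appear
        columns = []
        for row in rows:
            for title, _ in row:
                if title not in columns:
                    columns.append(title)

        # build the table, filling gaps with 'none'
        table = [columns]
        for row in rows:
            d = dict(row)
            table.append([d.get(col, 'none') for col in columns])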


  • numpy.equal with string values

    - by Morgoth
    The numpy.equal function does not work if a list or array contains strings:

        >>> import numpy
        >>> index = numpy.equal([1, 2, 'a'], None)
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: function not supported for these types, and can't coerce safely to supported types

    What is the easiest way to work around this without looping through each element? In the end, I need index to contain a boolean array indicating which elements are None.
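    A hedged workaround: build the boolean array from an identity test per element; this still visits each element in Python, but numpy.fromiter keeps it a single pass with no intermediate list:

        import numpy

        data = [1, 2, 'a', None]
        index = numpy.fromiter((x is None for x in data), dtype=bool)
        # array([False, False, False,  True], dtype=bool)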


  • mod_wsgi daemon mode vs threaded fastcgi

    - by t0ster
    Can someone explain the difference between apache mod_wsgi in daemon mode and django fastcgi in threaded mode? They both use threads for concurrency, I think. Suppose that I'm using nginx as a front end to apache mod_wsgi. UPDATE: I'm comparing django's built-in fastcgi (./manage.py method=threaded maxchildren=15) and mod_wsgi in 'daemon' mode (WSGIDaemonProcess example threads=15). They both use threads and acquire the GIL, am I right?


  • socket.accept error 24: Too many open files

    - by Creotiv
    I have a problem with open files under Ubuntu 9.10 when running a server in Python 2.6, and the main problem is that I don't know why it is so. I have set

        ulimit -n = 999999
        net.core.somaxconn = 999999
        fs.file-max = 999999

    and lsof gives me about 12000 open files when the server is running. I'm also using epoll. But after some time it starts throwing an exception:

        File "/usr/lib/python2.6/socket.py", line 195, in accept
        error: [Errno 24] Too many open files

    And I don't know how it can hit the file limit when the limit isn't reached. Thanks for help)
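    Two hedged things worth checking: whether the raised limit actually applies to the server process (ulimit is per shell, so a process started elsewhere, e.g. by an init script, won't inherit it), and whether sockets are closed on every error path. The process can inspect its own limit directly:

        import resource

        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print "fd limit for this process: soft=%d hard=%d" % (soft, hard)

        # the usual culprit with epoll servers is descriptors leaking on error
        # paths: always unregister the fd from epoll and close() the socket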


  • how to speed up the code??

    - by kaushik
    I have a very huge piece of code, 600 lines plus; I can't post the whole thing here, but a particular code snippet is taking so much time that it's leading to problems. Here I post that part of the code. Please tell me how to speed up the processing, and suggest which part may be the reason and measures to improve it, if this small part of the code is understandable.

        using_data = {}

        def join_cost(a, b):
            global using_data
            print 1
            save_a = database_index[a]
            print 2
            save_b = database_index[b]
            using_data[save_a[0]] = save_a
            s = str(save_a[1]).replace('phone', 'text')
            s = str(s) + '.pm'
            p = os.path.join("c:/begpython/wavnk/", s)
            x = open(p, 'r')
            print 3
            for i in range(6):
                x.readline()
            k2 = 'a'
            j = 0
            o = []
            while k2 is not '':
                k2 = x.readline()
                k2 = k2.rstrip('\n')
                oj = k2.split(' ')
                o = o + [oj]
                j = j + 1
            temp = long(1232332)
            end_time = save_a[4]
            k = (j - 1)
            for i in range(k):
                diff = float(o[i][0]) - float(end_time)
                if diff < 0:
                    diff = diff * (-1)
                if temp > diff:
                    temp = diff
                    pm_row = i
            q = []
            print 4
            l = str(p).replace('.pm', '.mcep')
            z = open(l, 'r')
            for i in range(pm_row):
                z.readline()
            k3 = z.readline()
            k3 = k3.rstrip('\n')
            q = k3.split(' ')
            print 5
            s = str(save_b[1]).replace('phone', 'text')
            s = str(s) + '.pm'
            p = os.path.join("c:/begpython/wavnk/", s)
            x = open(p, 'r')
            for i in range(6):
                x.readline()
            k2 = 'a'
            j = 0
            o = []
            while k2 is not '':
                k2 = x.readline()
                k2 = k2.rstrip('\n')
                oj = k2.split(' ')
                o = o + [oj]
                j = j + 1
            temp = long(1232332)
            strt_time = save_b[3]
            k = (j - 1)
            for i in range(k):
                diff = float(o[i][0]) - float(strt_time)
                if diff < 0:
                    diff = diff * (-1)
                if temp > diff:
                    temp = diff
                    pm_row = i
            w = []
            l = str(p).replace('.pm', '.mcep')
            z = open(l, 'r')
            for i in range(pm_row):
                z.readline()
            k3 = z.readline()
            k3 = k3.rstrip('\n')
            w = k3.split(' ')
            cost = 0
            for i in range(12):
                h = float(q[i]) - float(w[i])
                cost = cost + math.pow(h, 2)
            j_cost = math.sqrt(cost)
            return j_cost

        def target_cost(a, b):
            a = (b + 1) * 3
            b = (a + 1) * 2
            t_cost = (a + b) * 5 / 2
            return t_cost

        r1 = 'shht:ra_77'
        r2 = 'grx_18'
        g = []
        nodes = []
        nodes = nodes + [[r1]]
        for i in range(len(y_in_db_format)):
            g = y_in_db_format[i]
            g.remove(str(g[0]))
            nodes = nodes + [g]
        nodes = nodes + [[r2]]
        print nodes
        print "length of nodes", len(nodes)
        lists = []
        for i in range(len(nodes)):
            for j in range(len(nodes[i])):
                lists = lists + [nodes[i][j]]
        print lists
        distance = {}
        for i in range(len(lists)):
            if i == 0:
                distance[str(lists[i])] = 0
            else:
                distance[str(lists[i])] = long(123231223)
        group_dist = []
        infinity = long(123232323)
        for i in range(len(nodes)):
            distances = []
            for j in range(len(nodes[i])):
                if i == 0:
                    distances = distances + [[nodes[i][j], 0]]
                else:
                    distances = distances + [[nodes[i][j], infinity]]
            group_dist = group_dist + [distances]
        print "group_distances", group_dist
        path = []
        for i in range(len(nodes)):
            mini = []
            if i != (len(nodes) - 1):
                # calculate the cost between the current node and each of its neighbours
                for k in range(len(nodes[(i + 1)])):
                    for j in range(len(nodes[i])):
                        current = nodes[i][j]
                        j_distance = join_cost(current, nodes[i + 1][k])
                        # t_distance = target_cost(current, nodes[i + 1][k])
                        t_distance = 34
                        total_distance = (.5 * (float(group_dist[i][j][1]) + float(j_distance))
                                          + .5 * (float(t_distance)))
                        if int(group_dist[i + 1][k][1]) > int(total_distance):
                            group_dist[i + 1][k][1] = total_distance
                            a = current
                            mini = mini + [[str(nodes[i + 1][k]), a]]
            print mini
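    The dominant cost is likely that join_cost() re-opens and re-parses the same .pm and .mcep files from disk for every pair of nodes in the O(n²) inner loop. A hedged sketch of the standard fix: parse each file once and cache the parsed rows in a module-level dict (load_rows and the header-skip count are illustrative; adjust to the actual file formats):

        _file_cache = {}

        def load_rows(path, header_lines=6):
            """Parse a .pm/.mcep file once; reuse the parsed rows afterwards."""
            if path not in _file_cache:
                f = open(path, 'r')
                for i in range(header_lines):   # skip the header, as in join_cost
                    f.readline()
                rows = [line.rstrip('\n').split(' ') for line in f if line.strip()]
                f.close()
                _file_cache[path] = rows
            return _file_cache[path]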


  • Search for a String and replace it with a variable

    - by chrissygormley
    Hello, I am trying to use a regular expression to search a document for a UUID and replace the end of it with a new number. The code I have so far is:

        read_file = open('test.txt', 'r+')
        write_file = open('test.txt', 'w')
        r = re.compile(r'(self.uid\s*=\s*5EFF837F-EFC2-4c32-A3D4\s*)(\S+)')
        for l in read_file:
            m1 = r.match(l)
            if m1:
                new = (str, m1.group(2))
                new??????

    This is where I get stuck. The file test.txt has the below UUID stored in it:

        self.uid = '5EFF837F-EFC2-4c32-A3D4-D15C7F9E1F22'

    I want to replace the part D15C7F9E1F22. I have also tried this:

        r = re.compile(r'(self.uid\s*=\s*)(\S+)')
        for l in fp:
            m1 = r.match(l)
            new = map(int, m1.group(2).split("-"))
            new[4] = 'RHUI5345JO'

    But I cannot seem to match the string. Thanks in advance for any help.
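    Two hedged observations: opening the same file for writing while reading it truncates the file immediately, and re.sub does the whole search-and-replace in one step. A sketch (the replacement suffix here is a made-up example):

        import re

        with open('test.txt', 'r') as f:
            text = f.read()

        # keep the stable prefix (group 1), swap only the final 12 hex digits
        new_text = re.sub(
            r"(self\.uid\s*=\s*'5EFF837F-EFC2-4c32-A3D4-)[0-9A-Fa-f]{12}(')",
            r"\g<1>D15C7F9E1F23\g<2>",
            text)

        with open('test.txt', 'w') as f:
            f.write(new_text)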


  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

        runner.py
        lib/
        tests/
            testsuite1/
                testsuite1.py
            testsuite2/
                testsuite2.py
            testsuite3/
                testsuite3.py
            testsuite4/
                testsuite4.py

    The format of the testsuite*.py modules is as follows:

        import pytest

        class testsomething:
            def setup_class(self):
                ''' do some setup '''
                # Do some setup stuff here
            def teardown_class(self):
                ''' do some teardown '''
                # Do some teardown stuff here
            def test1(self):
                # Do some test1 related stuff
            def test2(self):
                # Do some test2 related stuff
            ...
            def test40(self):
                # Do some test40 related stuff

        if __name__ == '__main__':
            pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but individual tests within each testsuite need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among 4 workers. This causes the 'setup_class' method to be executed for every test within a 'testsuitex.py' module (which defeats my purpose; I want setup_class to be executed only once per class and tests executed serially thereafter). Essentially what I want the execution to look like is:

        worker1: executes all tests in testsuite1.py serially
        worker2: executes all tests in testsuite2.py serially
        worker3: executes all tests in testsuite3.py serially
        worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3 and worker4 all run in parallel. Is there a way to achieve this in the 'pytest-xdist' framework? The only option I can think of is to kick off different processes to execute each test suite individually within runner.py:

        def test_execute_func(testsuite_path):
            subprocess.process('py.test %s' % testsuite_path)

        if __name__ == '__main__':
            # Gather all the testsuite names
            for each testsuite:
                multiprocessing.Process(test_execute_func, (testsuite_path,))
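    Later pytest-xdist releases grew distribution modes for exactly this: --dist loadfile sends all tests from one file to the same worker (and --dist loadscope groups by class/module). A hedged note, since availability depends on the xdist version installed:

        # all tests in a given file run serially on one worker;
        # different files run in parallel across the 4 workers
        py.test -n 4 --dist loadfile tests/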


  • Get particular row as series from pandas dataframe

    - by Pratyush
    How do we get a particular filtered row as a series? Example dataframe:

        >>> df = pd.DataFrame({'date': [20130101, 20130101, 20130102], 'location': ['a', 'a', 'c']})
        >>> df
               date location
        0  20130101        a
        1  20130101        a
        2  20130102        c

    I need to select the row where location is 'c', as a series. I tried:

        row = df[df["location"] == "c"].head(1)  # gives a dataframe
        row = df.ix[df["location"] == "c"]       # also gives a dataframe with a single row

    In either case I can't get the row as a series.
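    Positional indexing on the filtered frame should hand back a Series (a hedged sketch; .iloc requires a reasonably recent pandas, older versions spell it .irow(0)):

        row = df[df["location"] == "c"].iloc[0]
        # date        20130102
        # location           c
        # Name: 2, dtype: object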


  • Performing non-blocking requests? - Django

    - by RadiantHex
    Hi folks, I have been playing with other frameworks, such as NodeJS, lately. I love the possibility of returning a response and still being able to do further operations, e.g.:

        def view(request):
            do_something()
            return HttpResponse()
            do_more_stuff()  # not possible!!!

    Maybe Django already offers a way to perform operations after returning a response; if that is the case, that would be great. Help would be very much appreciated! =D
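    The usual answers are a task queue (Celery being the standard choice for Django) or, for lightweight fire-and-forget work, a plain thread started before the response goes back. A hedged sketch of the latter:

        import threading
        from django.http import HttpResponse

        def view(request):
            do_something()
            # runs concurrently with / after the response; its result
            # cannot reach this client, so it must be fire-and-forget
            threading.Thread(target=do_more_stuff).start()
            return HttpResponse()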

