Search Results

Search found 13542 results on 542 pages for 'python socketserver'.

  • Reverse mapping from a table to a model in SQLAlchemy

    - by Jace
    To provide an activity log in my SQLAlchemy-based app, I have a model like this:

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            activity_by_id = Column(Integer, ForeignKey('users.id'), nullable=False)
            activity_by = relation(User, primaryjoin=activity_by_id == User.id)
            activity_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            activity_type = Column(SmallInteger, nullable=False)
            target_table = Column(Unicode(20), nullable=False)
            target_id = Column(Integer, nullable=False)
            target_title = Column(Unicode(255), nullable=False)

    The log contains entries for multiple tables, so I can't use ForeignKey relations. Log entries are made like this:

        doc = Document(name=u'mydoc', title=u'My Test Document',
                       created_by=user, edited_by=user)
        session.add(doc)
        session.flush()  # See note below
        log = ActivityLog(activity_by=user, activity_type=ACTIVITY_ADD,
                          target_table=Document.__table__.name,
                          target_id=doc.id, target_title=doc.title)
        session.add(log)

    This leaves me with three problems:

    1. I have to flush the session before my doc object gets an id. If I had used a ForeignKey column and a relation mapper, I could have simply called ActivityLog(target=doc) and let SQLAlchemy do the work. Is there any way to work around needing to flush by hand?

    2. The target_table parameter is too verbose. I suppose I could solve this with a target property setter in ActivityLog that automatically retrieves the table name and id from a given instance.

    3. Biggest of all, I'm not sure how to retrieve a model instance from the database. Given an ActivityLog instance log, calling self.session.query(log.target_table).get(log.target_id) does not work, as query() expects a model as parameter.

    One workaround appears to be to use polymorphism and derive all my models from a base model which ActivityLog recognises. Something like this:

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            title = Column(Unicode(255), nullable=False)
            edited_at = Column(DateTime, onupdate=datetime.utcnow, nullable=False)
            entity_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': entity_type}

        class Document(Entity):
            __tablename__ = 'documents'
            __mapper_args__ = {'polymorphic_identity': 'document'}
            body = Column(UnicodeText, nullable=False)

        class ActivityLog(Base):
            __tablename__ = 'activitylog'
            id = Column(Integer, primary_key=True)
            ...
            target_id = Column(Integer, ForeignKey('entities.id'), nullable=False)
            target = relation(Entity)

    If I do this, ActivityLog(...).target will give me a Document instance when it refers to a Document, but I'm not sure it's worth the overhead of having two tables for everything. Should I go ahead and do it this way?
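
    A lookup from table name back to mapped class can also be emulated without the polymorphic base. Below is a minimal sketch, assuming declarative mappings; the model_registry dict, register decorator, and get_target helper are hypothetical names for illustration, not SQLAlchemy API:

        from sqlalchemy import Column, Integer, Unicode
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()
        model_registry = {}

        def register(cls):
            # Record each model under its table name so ActivityLog rows
            # can be resolved back to a class later.
            model_registry[cls.__table__.name] = cls
            return cls

        @register
        class Document(Base):
            __tablename__ = 'documents'
            id = Column(Integer, primary_key=True)
            title = Column(Unicode(255), nullable=False)

        def get_target(session, log):
            # Resolve an ActivityLog instance to the object it points at.
            cls = model_registry[log.target_table]
            return session.query(cls).get(log.target_id)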

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

        runner.py
        lib/
        tests/
            testsuite1/
                testsuite1.py
            testsuite2/
                testsuite2.py
            testsuite3/
                testsuite3.py
            testsuite4/
                testsuite4.py

    The format of the testsuite*.py modules is as follows:

        import os
        import pytest

        class testsomething:
            def setup_class(self):
                ''' do some setup '''
                # Do some setup stuff here
            def teardown_class(self):
                ''' do some teardown '''
                # Do some teardown stuff here
            def test1(self):
                pass  # Do some test1 related stuff
            def test2(self):
                pass  # Do some test2 related stuff
            # ... and so on ...
            def test40(self):
                pass  # Do some test40 related stuff

        if __name__ == '__main__':
            pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the test suites in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but individual tests within each suite need to be executed serially. When I use the xdist plugin from py.test and kick off the tests using py.test -n 4, py.test gathers all the tests and randomly load-balances them among 4 workers. This causes the setup_class method to be executed once for each test within a testsuiteX.py module, which defeats my purpose; I want setup_class to be executed only once per class, and the tests executed serially after that. Essentially what I want the execution to look like is:

        worker1: executes all tests in testsuite1.py serially
        worker2: executes all tests in testsuite2.py serially
        worker3: executes all tests in testsuite3.py serially
        worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3 and worker4 are all executed in parallel. Is there a way to achieve this in the pytest-xdist framework? The only option I can think of is to kick off a different process to execute each test suite individually from runner.py:

        def test_execute_func(testsuite_path):
            subprocess.call(['py.test', testsuite_path])

        if __name__ == '__main__':
            # Gather all the testsuite names, then for each testsuite:
            multiprocessing.Process(target=test_execute_func,
                                    args=(testsuite_path,)).start()
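
    Newer pytest-xdist releases address this grouping directly with distribution modes: --dist=loadfile keeps every test from one file on the same worker (and --dist=loadscope groups by module or class), so the suites run in parallel while each suite's tests run serially. A minimal sketch of invoking it from Python, assuming an installed pytest-xdist version that supports the flag:

        import pytest

        # Four workers, whole files assigned to a single worker each, so
        # setup_class runs once per class and tests within a file stay serial.
        pytest.main(['-n', '4', '--dist', 'loadfile', 'tests'])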

  • How to repeatedly show a Dialog with PyGTK / Gtkbuilder?

    - by Julian
    I have created a PyGTK application that shows a dialog when the user presses a button. The dialog is loaded in my __init__ method with:

        builder = gtk.Builder()
        builder.add_from_file("filename")
        builder.connect_signals(self)
        self.myDialog = builder.get_object("dialog_name")

    In the event handler, the dialog is shown with the command self.myDialog.run(), but this only works once, because after run() the dialog is automatically destroyed. If I click the button a second time, the application crashes.

    I read that there is a way to use show() instead of run() where the dialog is not destroyed, but I feel like this is not the right way for me because I would like the dialog to behave modally and to return control to the code only after the user has closed it.

    Is there a simple way to repeatedly show a dialog using the run() method using gtkbuilder? I tried reloading the whole dialog using the gtkbuilder, but that did not really seem to work: the dialog was missing all child elements (and I would prefer to have to use the builder only once, at the beginning of the program).

    [SOLUTION] As pointed out by the answer below, using hide() does the trick. But one has to take care that the dialog is in fact destroyed if one does not catch its "delete-event". A simple example that works is:

        import pygtk
        import gtk

        class DialogTest:
            def rundialog(self, widget, data=None):
                self.dia.show_all()
                result = self.dia.run()

            def destroy(self, widget, data=None):
                gtk.main_quit()

            def closedialog(self, widget, data=None):
                self.dia.hide()
                return True

            def __init__(self):
                self.window = gtk.Window(gtk.WINDOW_TOPLEVEL)
                self.window.connect("destroy", self.destroy)
                self.dia = gtk.Dialog('TEST DIALOG', self.window,
                                      gtk.DIALOG_MODAL | gtk.DIALOG_DESTROY_WITH_PARENT)
                self.dia.vbox.pack_start(gtk.Label('This is just a Test'))
                self.dia.connect("delete-event", self.closedialog)
                self.button = gtk.Button("Run Dialog")
                self.button.connect("clicked", self.rundialog, None)
                self.window.add(self.button)
                self.button.show()
                self.window.show()

        if __name__ == "__main__":
            testApp = DialogTest()
            gtk.main()

  • SQLAlchemy declarative syntax with autoload in Pylons

    - by Juliusz Gonera
    I would like to use autoload to work with an existing database. I know how to do it without declarative syntax (model/__init__.py):

        def init_model(engine):
            """Call me before using any of the tables or classes in the model"""
            t_events = Table('events', Base.metadata, schema='events',
                             autoload=True, autoload_with=engine)
            orm.mapper(Event, t_events)
            Session.configure(bind=engine)

        class Event(object):
            pass

    This works fine, but I would like to use declarative syntax:

        class Event(Base):
            __tablename__ = 'events'
            __table_args__ = {'schema': 'events', 'autoload': True}

    Unfortunately, this way I get:

        sqlalchemy.exc.UnboundExecutionError: No engine is bound to this Table's
        MetaData. Pass an engine to the Table via autoload_with=<someengine>, or
        associate the MetaData with an engine via metadata.bind=<someengine>

    The problem here is that I don't know where to get the engine from (to use it in autoload_with) at the stage of importing the model (it's available in init_model()). I tried adding meta.Base.metadata.bind(engine) to environment.py but it doesn't work. Has anyone found an elegant solution?
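
    Two notes on the attempted fix, offered as a sketch rather than a confirmed answer. First, metadata.bind is assigned, not called, so meta.Base.metadata.bind(engine) raises instead of binding. Second, the reflection can be deferred until the engine exists by declaring (or importing) the autoloading classes only after the metadata is bound; with the SQLAlchemy versions of this era that looks roughly like:

        from sqlalchemy import create_engine
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        def init_model(engine):
            Base.metadata.bind = engine   # assignment, not a call

            # Declared only now, so autoload has a bound engine to reflect with.
            class Event(Base):
                __tablename__ = 'events'
                __table_args__ = {'schema': 'events', 'autoload': True}
            return Event

        # engine = create_engine('postgresql://user:password@localhost/mydb')
        # Event = init_model(engine)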

  • Using Range Function

    - by Michael Alexander Riechmann
    My goal is to make a program that takes an input (Battery_Capacity) and ultimately spits out a list of the New_Battery_Capacity and the number of Cycles it takes to reach the maximum capacity of 80.

        Cycle = range(160)
        Charger_Rate = 0.5 * Cycle
        Battery_Capacity = float(raw_input("Enter Current Capacity:"))
        New_Battery_Capacity = Battery_Capacity + Charger_Rate

        if Battery_Capacity < 0:
            print 'Battery Reading Malfunction (Negative Reading)'
        elif Battery_Capacity > 80:
            print 'Battery Reading Malfunction (Overcharged)'
        elif float(Battery_Capacity) % 0.5 != 0:
            print 'Battery Malfunction (Charges Only 0.5 Interval)'

        while Battery_Capacity >= 0 and Battery_Capacity < 80:
            print New_Battery_Capacity

    I was wondering why my Cycle = range(160) isn't working in my program?
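
    The immediate failure: range(160) returns a list, and a list can only be multiplied by an integer (which repeats it), so 0.5 * Cycle raises a TypeError. A minimal sketch of the loop the program seems to be after, keeping the question's own rules (0.5 charge per cycle, capped at 80):

        # Python 2 sketch: charge 0.5 per cycle until capacity reaches 80.
        Battery_Capacity = float(raw_input("Enter Current Capacity:"))

        if Battery_Capacity < 0:
            print 'Battery Reading Malfunction (Negative Reading)'
        elif Battery_Capacity > 80:
            print 'Battery Reading Malfunction (Overcharged)'
        elif Battery_Capacity % 0.5 != 0:
            print 'Battery Malfunction (Charges Only 0.5 Interval)'
        else:
            cycle = 0
            while Battery_Capacity < 80:
                Battery_Capacity += 0.5   # one charge cycle
                cycle += 1
                print cycle, Battery_Capacity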

  • Matching strings

    - by Joy
    Write the function subStringMatchExact. This function takes two arguments: a target string and a key string. It should return a tuple of the starting points of matches of the key string in the target string, when indexing starts at 0. Complete the definition:

        def subStringMatchExact(target, key):

    For example, subStringMatchExact("atgacatgcacaagtatgcat", "atgc") would return the tuple (5, 15).
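
    A minimal sketch of one way to complete the definition, using str.find and restarting one position past each hit so overlapping matches are also found:

        def subStringMatchExact(target, key):
            # Collect every index where key occurs in target.
            matches = []
            start = target.find(key)
            while start != -1:
                matches.append(start)
                start = target.find(key, start + 1)
            return tuple(matches)

        print subStringMatchExact("atgacatgcacaagtatgcat", "atgc")  # (5, 15)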

  • Django on App Engine

    - by aks
    I am impressed with Django. I am currently a Java developer, and I want to make some cool websites for myself, but I want to host them in some third-party environment. The question is: can I host a Django application on App Engine? If yes, how? Are there any sites built with Django that are already hosted on App Engine?

  • Getting unpredictable data into a tabular format

    - by Acorn
    The situation: each page I scrape has <input> elements with a title= and a value=. I don't know what is going to be on the page. I want to have all my collected data in a single table at the end, with a column for each title. So basically, I need each row of data to line up with all the others, and if a row doesn't have a certain element, then it should be blank (but there must be something there to keep the alignment).

    For example, say the first page has:

        {animal: cat, colour: blue, fruit: lemon, day: monday}

    the second page has:

        {animal: fish, colour: green, day: saturday}

    and the third page has:

        {animal: dog, number: 10, colour: yellow, fruit: mango, day: tuesday}

    Then my resulting table should be:

        animal | number | colour | fruit | day
        cat    | none   | blue   | lemon | monday
        fish   | none   | green  | none  | saturday
        dog    | 10     | yellow | mango | tuesday

    It would be good to keep the order of the title/value pairs, which I know plain dictionaries won't do. So basically, I need to generate columns from all the titles, kept in order but somehow merged together. What would be the best way of going about this without knowing all the possible titles and explicitly specifying an order for the values to be put in?
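
    A minimal sketch of the merge (the sample rows mirror the example above; note that appending keys in first-seen order is an approximation of "kept in order", since a full solution would merge the per-page orderings topologically):

        from collections import OrderedDict

        rows = [
            OrderedDict([('animal', 'cat'), ('colour', 'blue'),
                         ('fruit', 'lemon'), ('day', 'monday')]),
            OrderedDict([('animal', 'fish'), ('colour', 'green'),
                         ('day', 'saturday')]),
            OrderedDict([('animal', 'dog'), ('number', 10), ('colour', 'yellow'),
                         ('fruit', 'mango'), ('day', 'tuesday')]),
        ]

        # Merge column names in the order they are first seen across rows.
        columns = []
        for row in rows:
            for key in row:
                if key not in columns:
                    columns.append(key)

        # Line every row up with the merged columns; gaps become 'none'.
        table = [[row.get(col, 'none') for col in columns] for row in rows]
        print columns
        for line in table:
            print line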

  • Pygame, sounds don't play

    - by terabytest
    I'm trying to play sound files (.wav) with pygame, but when I start it I never hear anything. This is the code:

        import pygame

        pygame.init()
        pygame.mixer.init()
        sounda = pygame.mixer.Sound("desert_rustle.wav")
        sounda.play()

    I also tried using channels, but the result is the same.
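
    One common cause worth checking (an assumption about this script, not a confirmed diagnosis): Sound.play() returns immediately, so a script that ends right after the call can exit before any audio is produced. A minimal sketch that keeps the process alive until playback finishes:

        import pygame

        pygame.init()
        pygame.mixer.init()
        sound = pygame.mixer.Sound("desert_rustle.wav")  # filename from the question
        channel = sound.play()

        # Block until the mixer channel reports that playback has finished.
        while channel.get_busy():
            pygame.time.delay(100)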

  • How to log in to a Google account through an App Engine web proxy

    - by user313446
    I have a web proxy on App Engine (oncyberspace.appspot.com) that saves cookies in the datastore. When I try to log in to Google with my account through it, it just redirects me back to google.com. How can I solve this problem?

    A second problem: when I use the same proxy to log in to Twitter, it works, but I cannot use it to update my tweets. I don't know why; maybe the OAuth step cannot pass through the proxy. How can I solve this?

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

        class Dataset:
            individuals = []  # Becomes a list of dictionaries, in which each dictionary
                              # is a row from the CSV with the headers as keys
            def field_set(self):
                # Returns a list of the fields in individuals[] that can be used to split
                # the data (i.e. have more than one value amongst the individuals)
            def classified(self, predicted_value):
                # Returns True if all the individuals have the same value for predicted_value
            def fields_exhausted(self, predicted_value):
                # Returns True if all the individuals are identical except for predicted_value
            def lowest_entropy_value(self, predicted_value):
                # Returns the field that will reduce entropy the most
                # (http://en.wikipedia.org/wiki/Entropy_%28information_theory%29)
            def __init__(self, individuals=[]):

    and

        class Node:
            ds = Dataset()     # The data that is associated with this Node
            links = []         # List of Nodes, the offspring Nodes of this node
            level = 0          # Tree depth of this Node
            split_value = ''   # Field used to split out this Node from the parent node
            node_value = ''    # Value used to split out this Node from the parent Node

            def split_dataset(self, split_value):
                fields = []    # List of options for split_value amongst the individuals
                datasets = {}  # Dictionary of Datasets, each with a value from fields[] as its key
                for field in self.ds.field_set()[split_value]:  # Populates the keys of fields[]
                    fields.append(field)
                    datasets[field] = Dataset()
                for i in self.ds.individuals:
                    # Adds individuals to the dataset that matches their result for split_value
                    datasets[i[split_value]].individuals.append(i)  # <--- Causes an infinite loop on the second hit
                for field in fields:  # Creates subnodes from each of the Dataset options
                    self.add_subnode(datasets[field], split_value, field)

            def add_subnode(self, dataset, split_value='', node_value=''):
            def __init__(self, level, dataset=Dataset()):

    My initialisation code is currently:

        if __name__ == '__main__':
            filename = (sys.argv[1])     # Takes in a CSV file
            predicted_value = "# class"  # Identifies the field from the CSV file that should be predicted
            base_dataset = parse_csv(filename)               # Turns the CSV file into a list of lists
            parsed_dataset = individual_list(base_dataset)   # Turns the list of lists into a list of dictionaries
            root = Node(0, Dataset(parsed_dataset))          # Creates a root node, passing it the full dataset
            root.split_dataset(root.ds.lowest_entropy_value(predicted_value))
            # Performs the first split, creating multiple subnodes
            n = root.links[0]
            n.split_dataset(n.ds.lowest_entropy_value(predicted_value))
            # Attempts to split the first subnode.
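
    The behaviour described is consistent with Python's shared class-attribute pitfall rather than a side effect of .append: individuals = [] at class level, together with the individuals=[] default argument, creates one list object shared by every Dataset, so each new datasets[field] appends to the very list split_dataset is iterating over, and the loop never reaches the end. A minimal sketch of the fix (only Dataset is shown; Node's class-level ds and links want the same treatment):

        class Dataset:
            def __init__(self, individuals=None):
                # A fresh list per instance; a mutable default argument or a
                # class-level list would be shared across all Datasets.
                self.individuals = list(individuals) if individuals is not None else []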

  • Django and mod_python intermittent error?

    - by Peter
    I have a Django site at http://sm.rutgers.edu/relive/af_api/index/. It is supposed to display "Home of the relive APIs". If you refresh this page many times, you can see different renderings:

    1) The expected page.
    2) The Django "It worked!" page.
    3) An "ImportError at /index/" page. If you scroll down to the ROOT_URLCONF part, you will see it says 'relive.urls', but apparently it should be 'af_api.urls', which is what's in my settings.py file.

    Since these results happen randomly, is it possible that either Django or mod_python is behaving unstably?

  • Django finding which field matched in a multiple OR query

    - by Greg Hinch
    I've got a couple of models which are set up something like this:

        class Bar(models.Model):
            baz = models.CharField()

        class Foo(models.Model):
            bar1 = models.ForeignKey(Bar)
            bar2 = models.ForeignKey(Bar)
            bar3 = models.ForeignKey(Bar)

    Elsewhere in the code, I end up with an instance of Bar and need to find the Foo it is attached to in some capacity. Right now I came up with a multiple OR query using Q, something like this:

        foo_inst = Foo.objects.get(Q(bar1=bar_inst) | Q(bar2=bar_inst) | Q(bar3=bar_inst))

    What I need to figure out is which of the 3 cases actually hit, or at least the name of the member (bar1, bar2, or bar3). Is there a good way to do this? Is there a better way to structure the query to glean that information?
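
    A minimal sketch of recovering the matching member after the query: compare the foreign-key id columns on the fetched Foo against the Bar instance (the field names are the question's own; the helper is illustrative):

        def matching_fields(foo_inst, bar_inst):
            # Names of the FK members on Foo that point at this Bar.
            # Comparing the *_id columns avoids an extra query per relation.
            return [name for name in ('bar1', 'bar2', 'bar3')
                    if getattr(foo_inst, name + '_id') == bar_inst.pk]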

  • Sqlalchemy complex in_ clause

    - by lostlogic
    I'm trying to find a way to cause SQLAlchemy to generate SQL of the following form:

        select * from t where (a, b) in ((a1, b1), (a2, b2));

    Is this possible? If not, any suggestions on a way to emulate it? Thanks kindly!
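
    Recent SQLAlchemy versions expose exactly this via tuple_(), whose result supports .in_(). A minimal sketch (note that tuple-valued IN is a backend feature: PostgreSQL and MySQL accept it, while some backends such as SQLite historically have not):

        from sqlalchemy import MetaData, Table, Column, Integer, select, tuple_

        metadata = MetaData()
        t = Table('t', metadata,
                  Column('a', Integer),
                  Column('b', Integer))

        # Renders roughly: SELECT ... FROM t WHERE (t.a, t.b) IN ((?, ?), (?, ?))
        query = select([t]).where(tuple_(t.c.a, t.c.b).in_([(1, 2), (3, 4)]))
        print query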

  • socket.accept error 24: Too many open files

    - by Creotiv
    I have a problem with open files on Ubuntu 9.10 when running a server in Python 2.6, and the main problem is that I don't know why it happens. I have set:

        ulimit -n = 999999
        net.core.somaxconn = 999999
        fs.file-max = 999999

    and lsof gives me about 12000 open files when the server is running. I'm also using epoll. But after some time it starts raising this exception:

        File "/usr/lib/python2.6/socket.py", line 195, in accept
        error: [Errno 24] Too many open files

    I don't understand how it can hit the file limit when the limit hasn't been reached. Thanks for any help.
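
    One diagnostic worth running inside the server itself (a sketch, not a diagnosis): the limit that matters is the one this process actually inherited, which can differ from what ulimit reports in another shell; accept() fails with EMFILE as soon as the soft limit is hit:

        import resource

        # The (soft, hard) RLIMIT_NOFILE pair as seen by *this* process.
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        print 'soft=%d hard=%d' % (soft, hard)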

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two-dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch.

    My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created for that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. (In my example plot, the grey region is the cells containing data, black where no data exists.)

    OK, problem solved; why am I still here? Well, I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach, e.g.:

    1. From xmin to xmax, at y = ymin, check whether the data boundary is crossed, in intervals of dx.
    2. Set y = ymin + dy and repeat step 1.
    3. Repeat steps 1-2, but now sampling in y instead.

    An alternative is defining a centre and sampling in r-theta space, i.e. radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary? A nearest-neighbour approach is not appropriate as, for example (to borrow from geography), an isthmus (think of Panama connecting North and South America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon.

    The solution perhaps comes from solving an area maximisation problem. For a set of points defining the data limits, what is the maximum contiguous area contained within those points? To form the enclosed area, what are the neighbouring points for the nth point? How will the holes be treated in this scheme? Is this erring into topology now?

    Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really!

    Cheers, David
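
    For the convex part of this problem there is a ready-made tool: newer SciPy versions ship scipy.spatial.ConvexHull, whose vertices come back already ordered for plt.Polygon. A minimal sketch on placeholder data (a convex hull will not capture concavities or holes; alpha shapes / concave-hull methods are the usual next step for those):

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy.spatial import ConvexHull

        # Placeholder points standing in for the scattered spatial data.
        points = np.random.rand(200, 2)

        hull = ConvexHull(points)
        boundary = points[hull.vertices]   # ordered (x, y) boundary vertices

        fig, ax = plt.subplots()
        ax.add_patch(plt.Polygon(boundary, closed=True, facecolor='0.8'))
        ax.plot(points[:, 0], points[:, 1], 'k.')
        plt.show()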

  • errors with gae-sessions and nose

    - by Kekito
    I'm running into a few problems with adding gae-sessions to a relatively mature GAE app. I followed the readme carefully and also looked at the demo.

    First, just adding the gaesessions directory to my app causes the following error when running tests with nose and nose-gae:

        Exception ImportError: 'No module named threading' in <bound method local.__del__ of <_threading_local.local object at 0x103e10628>> ignored

    All the tests run fine, so it's not a big problem, but it suggests that something isn't right.

    Next, if I add the following two lines of code:

        from gaesessions import get_current_session
        session = get_current_session()

    I get the following error:

        Traceback (most recent call last):
          File "/Users/.../unit_tests.py", line 1421, in testParseFBRequest
            data = tasks.parse_fb_request(sr)
          File "/Users/.../tasks.py", line 220, in parse_fb_request
            session = get_current_session()
          File "/Users/.../gaesessions/__init__.py", line 36, in get_current_session
            return _tls.current_session
          File "/Library/.../python2.7/_threading_local.py", line 193, in __getattribute__
            return object.__getattribute__(self, name)
        AttributeError: 'local' object has no attribute 'current_session'

    Any suggestions on fixing the above would be greatly appreciated.

  • Matplotlib autodatelocator custom date formatting?

    - by jawonlee
    I'm using Matplotlib to dynamically generate .png charts from a database. The user may set as the x-axis any given range of datetimes, and I need to account for all of it. While Matplotlib has dates.AutoDateLocator(), I want the datetime format printed on the chart to be context-specific; e.g. if the user is charting from 3 p.m. to 5 p.m., the year/month/day information doesn't need to be displayed. Right now, I'm manually creating Locator and Formatter objects thusly:

        def get_ticks(start, end):
            from datetime import timedelta as td
            delta = end - start
            if delta <= td(minutes=10):
                loc = mdates.MinuteLocator()
                fmt = mdates.DateFormatter('%I:%M %p')
            elif delta <= td(minutes=30):
                loc = mdates.MinuteLocator(byminute=range(0, 60, 5))
                fmt = mdates.DateFormatter('%I:%M %p')
            elif delta <= td(hours=1):
                loc = mdates.MinuteLocator(byminute=range(0, 60, 15))
                fmt = mdates.DateFormatter('%I:%M %p')
            elif delta <= td(hours=6):
                loc = mdates.HourLocator()
                fmt = mdates.DateFormatter('%I:%M %p')
            elif delta <= td(days=1):
                loc = mdates.HourLocator(byhour=range(0, 24, 3))
                fmt = mdates.DateFormatter('%I:%M %p')
            elif delta <= td(days=3):
                loc = mdates.HourLocator(byhour=range(0, 24, 6))
                fmt = mdates.DateFormatter('%I:%M %p')
            elif delta <= td(weeks=2):
                loc = mdates.DayLocator()
                fmt = mdates.DateFormatter('%b %d')
            elif delta <= td(weeks=12):
                loc = mdates.WeekdayLocator()
                fmt = mdates.DateFormatter('%b %d')
            elif delta <= td(weeks=52):
                loc = mdates.MonthLocator()
                fmt = mdates.DateFormatter('%b')
            else:
                loc = mdates.MonthLocator(interval=3)
                fmt = mdates.DateFormatter('%b %Y')
            return loc, fmt

    Is there a better way of doing this?
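
    One built-in alternative worth sketching: mdates.AutoDateFormatter pairs with AutoDateLocator and exposes a scaled dictionary that maps the tick interval (in days) to a strftime format, so the context-dependence is declared rather than hand-rolled. The thresholds below are illustrative, not a drop-in equivalent of get_ticks:

        import matplotlib.dates as mdates
        import matplotlib.pyplot as plt

        fig, ax = plt.subplots()

        loc = mdates.AutoDateLocator()
        fmt = mdates.AutoDateFormatter(loc)

        # Keys are tick spacings in days; values are strftime formats.
        fmt.scaled[1.0] = '%b %d'                  # daily ticks
        fmt.scaled[1.0 / 24] = '%I:%M %p'          # hourly ticks
        fmt.scaled[1.0 / (24 * 60)] = '%I:%M %p'   # minute ticks

        ax.xaxis.set_major_locator(loc)
        ax.xaxis.set_major_formatter(fmt)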

  • Problem with anchor tags in Django after using lighttpd + fastcgi

    - by Drew A
    I just started using lighttpd and fastcgi for my Django site, but I've noticed my anchor links are no longer working. I use the anchor links for sorting links on the page; for example, I use an anchor to sort links by the number of points (or votes) they have received. The code in the html template:

        ...
        {% load sorting_tags %}
        ...
        {% ifequal sort_order "points" %}
            {% trans "total points" %} {% trans "or" %} {% anchor "date" "date posted" %}
            {% order_by_votes links request.direction %}
        {% else %}
            {% anchor "points" "total points" %} {% trans "or" %} {% trans "date posted" %}
        ...

    The anchor link for total points on "www.mysite.com/my_app/" is directed to "my_app/?sort=points", but the correct URL should be "www.mysite.com/my_app/?sort=points". All my other links work; the problem is specific to anchor links. The {% anchor %} tag is taken from django-sorting; the code can be found at http://github.com/directeur/django-sorting, specifically in django-sorting/templatetags/sorting_tags.py. Thanks in advance.

  • Clean Method for a ModelForm in a ModelFormSet made by modelformset_factory

    - by Salyangoz
    I was wondering if my approach is right or not. Assume the Restaurant model has only a name.

    forms.py:

        class BaseRestaurantOpinionForm(forms.ModelForm):
            opinion = forms.ChoiceField(
                choices=(('yes', 'yes'), ('no', 'no'), ('meh', 'meh')),
                required=False,
            )

            class Meta:
                model = Restaurant
                fields = ['opinion']

    views.py:

        class RestaurantVoteListView(ListView):
            queryset = Restaurant.objects.all()
            template_name = "restaurants/list.html"

            def dispatch(self, request, *args, **kwargs):
                if request.POST:
                    queryset = self.request.POST.dict()
                    # clean here
                    return HttpResponse(json.dumps(queryset),
                                        content_type="application/json")

            def get_context_data(self, **kwargs):
                context = super(EligibleRestaurantsListView, self).get_context_data(**kwargs)
                RestaurantFormSet = modelformset_factory(
                    Restaurant, form=BaseRestaurantOpinionForm
                )
                extra_context = {
                    'eligible_restaurants': self.get_eligible_restaurants(),
                    'forms': RestaurantFormSet(),
                }
                context.update(extra_context)
                return context

    Basically I'll be getting 3 voting buttons for each restaurant, and then I want to read the votes. I was wondering which clean function I need to call, and from where, to get something like:

        { '3': 'yes', '2': 'no' }  # { 'restaurant_id': 'vote' }

    This is my second/third question, so tell me if I'm being unclear. Thanks.
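
    A minimal sketch of reading the votes through the same formset rather than raw request.POST: rebinding the modelformset to the POST data lets is_valid() run each form's clean methods, and form.instance.pk pairs each opinion with its restaurant. The helper name and surrounding wiring are assumptions, not part of the question's code:

        from django.forms.models import modelformset_factory

        def votes_from_post(request):
            RestaurantFormSet = modelformset_factory(Restaurant,
                                                     form=BaseRestaurantOpinionForm)
            formset = RestaurantFormSet(request.POST)
            votes = {}
            if formset.is_valid():   # triggers each form's clean()/cleaned_data
                for form in formset:
                    if form.cleaned_data.get('opinion'):
                        votes[str(form.instance.pk)] = form.cleaned_data['opinion']
            return votes   # e.g. {'3': 'yes', '2': 'no'}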
