Search Results

Search found 13542 results on 542 pages for 'python socketserver'.

Page 382/542 | < Previous Page | 378 379 380 381 382 383 384 385 386 387 388 389  | Next Page >

  • NZEC Run Time Error Occurred

    - by madan
        import math

        def gen_caller(a):
            for z in a:
                x, y = z
                if x == 1:
                    x = 2
                if y >= x and y - x <= 100000:
                    for i in range(x, y + 1):
                        flag = 0
                        for j in range(2, long(math.sqrt(i)) + 1):
                            if i % j == 0:
                                flag = 1
                                break
                        if flag == 0:
                            print i
                print ""

        n = int(raw_input())
        gen_caller([[long(raw_input()) for j in range(0, 2)] for i in range(0, n) if n <= 10])
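
    NZEC on online judges usually means an uncaught exception, and a frequent culprit is input parsing: if the two bounds arrive space-separated on one line, the two separate raw_input() calls above raise ValueError. A hedged sketch of reading that layout instead (the exact input format is an assumption, not stated in the question):

        import sys

        def read_cases():
            # read every whitespace-separated token at once, then pair the bounds up
            data = sys.stdin.read().split()
            n = int(data[0])
            nums = map(long, data[1:1 + 2 * n])
            return zip(nums[0::2], nums[1::2])

    gen_caller(read_cases()) would then replace the nested raw_input() comprehension.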

    Read the article

  • How to check whether a path represented by a QString with German umlauts exists?

    - by MB
    Hey, I get a QString representing a directory from a QLineEdit. Now I want to check whether a certain file exists in this directory, but when I try this with os.path.exists and os.path.join I get in trouble as soon as German umlauts occur in the directory path:

        # the directory coming from the user input in the QLineEdit;
        # I take this QString to the local 8-bit encoding and then make
        # a string from it
        target_dir = str(lineEdit.text().toLocal8Bit())

        # the file name that should be checked for
        file_name = 'some-name.txt'

        # this fails with a UnicodeDecodeError when an umlaut occurs in target_dir
        os.path.exists(os.path.join(target_dir, file_name))

    How would you check whether the file exists when you might encounter German umlauts?
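
    A minimal sketch of one way around this, assuming PyQt4's QString API: keep the path as a unicode object instead of encoding it to a local 8-bit byte string, so os.path.join never has to mix str and unicode (lineEdit is the widget from the question):

        import os

        target_dir = unicode(lineEdit.text())   # QString -> unicode, no byte encoding
        file_name = u'some-name.txt'

        exists = os.path.exists(os.path.join(target_dir, file_name))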

    Read the article

  • How do I upload files to a Google App Engine app when the field name is not known

    - by Michael Neale
    I have tried a few options, none of which seem to work (if I have a simple multipart form with a named field it works well, but when I don't know the name I can't just grab all files in the request...). I have looked at http://stackoverflow.com/questions/81451/upload-files-in-google-app-engine and it doesn't seem suitable (or to actually work; as someone mentioned there, the code snippet is untested).
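
    A hedged sketch of one approach, assuming the webapp2 framework on App Engine: self.request.POST is a MultiDict, and uploads show up in it as cgi.FieldStorage instances whatever their field names are, so they can be picked out by type rather than by name:

        import cgi
        import webapp2

        class UploadHandler(webapp2.RequestHandler):
            def post(self):
                for name, value in self.request.POST.items():
                    if isinstance(value, cgi.FieldStorage):
                        data = value.file.read()          # raw bytes of the upload
                        self.response.write('%s: %s (%d bytes)\n'
                                            % (name, value.filename, len(data)))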

    Read the article

  • Trying to output a list using a class

    - by captain morgan
    I am trying to compute the moving average of a price, but I keep getting an attribute error in my Moving_Average class ('Moving_Average' object has no attribute 'days'). Here is what I have:

        class Moving_Average:
            def calculation(self, alist: list, days: int):
                m = self.days
                prices = alist[1::2]
                average = [0] * len(prices)
                signal = [''] * len(prices)
                for m in range(0, len(prices) - days + 1):
                    average[m+2] = sum(prices[m:m+days]) / days
                    if prices[m+2] < average[m+2]:
                        signal[m+2] = 'SELL'
                    elif prices[m+2] > average[m+2] and prices[m+1] < average[m+1]:
                        signal[m+2] = 'BUY'
                    else:
                        signal[m+2] = ''
                return average, signal

        def print_report(symbol: str, strategy: str):
            print('SYMBOL: ', symbol)
            print('STRATEGY: ', strategy)
            print('Date     Closing    Strategy Signal')

        def user():
            strategy = '''
            Which of the following strategies would you like to use?
            * Simple Moving Average [S]
            * Directional Indicator [D]
            Please enter your choice: '''
            if signal_strategy in 'Ss':
                days = input('Please enter the number of days for the average')
                days = int(days)
                strategy = 'Simple Moving Average {}-days'.format(str(days))
                m = Moving_Average()
                ma = m.calculation(gg, days)
                print(ma)

    gg is a list that contains dates and prices, e.g. [2013-10-01, 60, 2013-10-02, 60]. The output is supposed to look like:

        Date          Price    Average    Signal
        2013-10-01    60.0
        2013-10-02    60.0     60.00      BUY
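
    The traceback points at m = self.days: days is only a parameter of calculation and is never stored on the instance, so the attribute genuinely does not exist. A minimal illustration of the difference, with hypothetical names rather than the poster's full strategy code:

        class Example:
            def __init__(self, days):
                self.days = days        # stored on the instance, so self.days exists

            def calculation(self, prices):
                days = self.days        # reads the stored attribute
                return [sum(prices[m:m + days]) / days
                        for m in range(len(prices) - days + 1)]

        print(Example(2).calculation([60, 60, 61, 63]))   # [60.0, 60.5, 62.0]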

    Read the article

  • Distance between numpy arrays, columnwise

    - by Jaapsneep
    I have 2 arrays in 2D, where the column vectors are feature vectors. One array is of size F x A, the other of F x B, where A << B. As an example, for A = 2 and F = 3 (B can be anything):

        arr1 = np.array([[1, 4],
                         [2, 5],
                         [3, 6]])
        arr2 = np.array([[1, 4, 7, 10, ..],
                         [2, 5, 8, 11, ..],
                         [3, 6, 9, 12, ..]])

    I want to calculate the distance between arr1 and a fragment of arr2 that is of equal size (in this case, 3x2), for each possible fragment of arr2. The column vectors are independent of each other, so I believe I should calculate the distance between each column vector in arr1 and a collection of column vectors ranging from i to i + A from arr2 and take the sum of these distances (not sure though). Does numpy offer an efficient way of doing this, or will I have to take slices from the second array and, using another loop, calculate the distance between each column vector in arr1 and the corresponding column vector in the slice?
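
    A hedged sketch of one vectorised approach: stack all B - A + 1 fragments into a single (F, n, A) array and let broadcasting compute every column-wise Euclidean distance at once (this assumes the extra F x n x A intermediate fits in memory):

        import numpy as np

        def window_distances(arr1, arr2):
            F, A = arr1.shape
            n = arr2.shape[1] - A + 1                     # number of possible fragments
            frags = np.stack([arr2[:, i:i + A] for i in range(n)], axis=1)  # (F, n, A)
            diffs = frags - arr1[:, None, :]              # broadcast arr1 against every fragment
            col_dist = np.sqrt((diffs ** 2).sum(axis=0))  # per-column Euclidean distance, (n, A)
            return col_dist.sum(axis=1)                   # one summed distance per fragment

    window_distances(arr1, arr2)[i] would then be the distance between arr1 and the fragment of arr2 starting at column i.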

    Read the article

  • Diminishing programmer wants to get back to programming

    - by Marcus TV
    I last programmed actively in 2002, so it has been almost 8 years now. I learned C and then moved to Visual Basic for our thesis project at university. I would like to ask for suggestions on which programming language I should learn and put to profitable use in areas such as desktop applications, web development, and database applications.

    Read the article

  • django admin site - filtering available objects for user

    - by JPG
    I have models that belong to some 'group' (a Company class). I want to add users who will also belong to one group and should be able to edit/manage/add only objects belonging to the associated group. Something like:

        class Company():
            ...

        class Something():
            company = ForeignKey(Company)

        # a user such as Microsoft_admin would likewise carry
        # company = ForeignKey(Company)

    This user should only see and edit objects belonging to the associated Company in the admin interface. How do I accomplish that?
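
    A hedged sketch of the usual admin-side hook, assuming each admin user can be mapped to a Company (here via a hypothetical profile attribute); in Django 1.6+ the method is get_queryset(), in older versions it is queryset():

        from django.contrib import admin

        class CompanyScopedAdmin(admin.ModelAdmin):
            def get_queryset(self, request):
                qs = super(CompanyScopedAdmin, self).get_queryset(request)
                if request.user.is_superuser:
                    return qs
                # hypothetical: the user's company hangs off a profile object
                return qs.filter(company=request.user.profile.company)

        admin.site.register(Something, CompanyScopedAdmin)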

    Read the article

  • Getting child elements that are related to a parent in the same table

    - by Madawar
    I have the following database schema:

        class posts(Base):
            __tablename__ = 'xposts'
            id = Column(Integer, primary_key=True)

        class Comments(Base):
            __tablename__ = 'comments'
            id = Column(Integer, primary_key=True)
            comment_parent_id = Column(Integer, unique=True)
            # comment_id fetches the comment of a comment, i.e. it refers to a comment_parent_id
            comment_id = Column(Integer, default=None)
            comment_text = Column(String(200))

    The values in the database are:

        id   comment_parent_id   comment_id   comment_text
        1    12                  NULL         Hello First comment
        2    NULL                12           First Sub comment

    I want to fetch all comments and sub-comments of a post using SQLAlchemy, and have this so far:

        qry = session.query(Comments).filter(Comments.comment_parent_id != None)
        print qry.count()

    Is there a way I can fetch all the sub-comments of a comment in the same query? I have tried an outerjoin on the same table (comments); it seemed clumsy and it failed.
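
    A hedged sketch of a single-query self-join using aliased(), assuming the intended relationship is that a sub-comment's comment_id refers to its parent's comment_parent_id:

        from sqlalchemy.orm import aliased

        Sub = aliased(Comments)
        pairs = (session.query(Comments, Sub)
                 .outerjoin(Sub, Sub.comment_id == Comments.comment_parent_id)
                 .filter(Comments.comment_parent_id != None)
                 .all())
        for parent, child in pairs:
            # child is None for comments that have no sub-comment
            print parent.comment_text, '->', child.comment_text if child else None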

    Read the article

  • Wordpress & Django -- One domain, two servers. Possible?

    - by DomoDomo
    My question is about hosting Django and Wordpress under one domain, but on two physical machines (actually, they are VMs, but same diff). Let's say I have a Django webapp at example.com. I'd like to start a Wordpress blog about my webapp, and so that any blog page rank mojo flows back to my webapp, I'd like the blog address to be example.com/blog. My understanding is that blog.example.com would not transfer said page rank mojo. Because I'm worried about Wordpress security flaws compromising my Django webapp, I want to host Django and Wordpress on two physically separate machines. Given all that, is it possible to do this using rewrite rules or a reverse proxy server? I know the easy way is to make my Wordpress blog a subdomain, but I really don't want to do that. Has anyone done this in the past, and is it stable? If I need a third server to be a dedicated reverse proxy, that's totally fine. Thanks!

    Read the article

  • Can this django query be improved?

    - by Hobhouse
    Given a model structure like this:

        class Book(models.Model):
            user = models.ForeignKey(User)

        class Readingdate(models.Model):
            book = models.ForeignKey(Book)
            date = models.DateField()

    One book may have several readingdates. How do I list books having at least one readingdate within a specific year? I can do this:

        from_date = datetime.date(2010, 1, 1)
        to_date = datetime.date(2010, 12, 31)
        book_ids = Readingdate.objects \
            .filter(date__range=(from_date, to_date)) \
            .values_list('book_id', flat=True)
        books_read_2010 = Book.objects.filter(id__in=book_ids)

    Is it possible to do this with one queryset, or is this the best way?
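
    A hedged sketch of the single-queryset version, assuming the default reverse accessor name for Readingdate; distinct() keeps a book from appearing once per matching reading date:

        books_read_2010 = (Book.objects
                           .filter(readingdate__date__range=(from_date, to_date))
                           .distinct())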

    Read the article

  • Reset selection of wx.lib.calendar.Calendar control?

    - by Joseph
    I have a wx.lib.calendar.Calendar control (not wx.lib.calendar.CalendarCtrl!). I am selecting a number of days using the following function call: self.cal.AddSelect([days], 'green', 'white') This works, and draws the days highlighted. However, I cannot work out how to reverse this (i.e., clear the selection so the days go back to their normal colouring). Any hints, please?

    Read the article

  • Where is the help.py for Android's monkeyrunner

    - by Keyboardsurfer
    Hi, I just can't find the help.py file needed to create the API reference for monkeyrunner. The command described in the Android reference, monkeyrunner <format> help.py <outfile>, does not work when I call monkeyrunner html help.py /path/to/place/the/doc.html. It's quite obvious that the help.py file is not found, and monkeyrunner also tells me "Can't open specified script file". But a locate on my system doesn't turn up any help.py file that has anything to do with monkeyrunner or Android. So my question is: where did they hide the help.py file for creating the API reference?

    Read the article

  • New wxPython controls not displaying until resize

    - by acrosman
    I have created a custom control (based on a panel) in wxPython that provides a list of custom controls on a panel within it. The user needs to be able to add rows at will and have those rows displayed. I'm having trouble getting the new controls to actually appear after they are added. I know they are present, because they appear after a resize of the frame, or if I add them before Show() is called on the frame. I've convinced myself it's something basic, but I can't find the mistake. The add function looks like this:

        def addRow(self, id, reference, page, title, note):
            newRow = NoteListRow(self.listPanel, id, reference, page, title, note)
            self.listSizer.Add(newRow, flag=wx.EXPAND | wx.LEFT)
            self.rows.append(newRow)
            if len(self.rows) == 1:
                self.highliteRow(newRow)
            self.Refresh()
            self.Update()
            return newRow

    I assume I'm missing something about how Refresh and Update are supposed to behave, so even a good extended reference on those would likely be helpful.
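
    A hedged guess at the cause: Refresh()/Update() only repaint, they never re-run the sizer, so the new child keeps a zero size until a resize forces a layout pass. A sketch of the same function with an explicit layout call (names taken from the question; the fix itself is an assumption):

        def addRow(self, id, reference, page, title, note):
            newRow = NoteListRow(self.listPanel, id, reference, page, title, note)
            self.listSizer.Add(newRow, flag=wx.EXPAND | wx.LEFT)
            self.rows.append(newRow)
            if len(self.rows) == 1:
                self.highliteRow(newRow)
            self.listPanel.Layout()   # ask the sizer to size and position the new row
            self.Refresh()
            return newRow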

    Read the article

  • Django Piston - how can I create custom methods?

    - by orokusaki
    I put my questions in the code comments for clarity:

        from piston.handler import AnonymousBaseHandler

        class AnonymousAPITest(AnonymousBaseHandler):
            fields = ('update_subscription',)

            def update_subscription(self, request, months):
                # Do some stuff here to update a subscription based on the
                # number of months provided.
                # How the heck can I call this method?
                return {'msg': 'Your subscription has been updated!'}

            def read(self, request):
                return {
                    'msg': 'Why would I need a read() method on a fully custom API?'
                }

    Read the article

  • Rewriting Live TCP/IP (Layer 4) (i.e. Socket Layer) Streams

    - by user213060
    I have a simple problem which I'm sure someone here has done before... I want to rewrite Layer 4 TCP/IP streams (not lower-layer individual packets or frames). Ettercap's etterfilter command lets you perform simple live replacements of Layer 4 TCP/IP streams based on fixed strings or regexes. Example ettercap scripting code:

        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "gzip")) {
                replace("gzip", " ");
                msg("whited out gzip\n");
            }
        }

        if (ip.proto == TCP && tcp.dst == 80) {
            if (search(DATA.data, "deflate")) {
                replace("deflate", " ");
                msg("whited out deflate\n");
            }
        }

    (http://ettercap.sourceforge.net/forum/viewtopic.php?t=2833)

    I would like to rewrite streams based on my own filter program instead of just simple string replacements. Anyone have an idea of how to do this? Is there anything other than Ettercap that can do live replacement like this, maybe as a plugin to VPN software or something? I would like a configuration similar to ettercap's silent bridged sniffing configuration between two Ethernet interfaces. This way I can silently filter traffic coming from either direction with no NATing problems. Note that my filter is an application that acts as a pipe filter, similar to the design of unix command-line filters:

        [eth0] <----------> [my filter] <----------> [eth1]

    What I am already aware of, but which is not suitable:

    - Tun/Tap: works at the lower packet layer; I need to work with the higher-layer streams.
    - Ettercap: I can't find any way to do replacements other than the restricted capabilities in the example above.
    - Hooking into some VPN software? I just can't figure out which one, or exactly how.
    - libnetfilter_queue: works with lower-layer packets, not TCP/IP streams.

    Again, the rewriting should occur at the transport layer (Layer 4), as it does in this example, instead of a lower-layer packet-based approach. Exact code will help immensely! Thanks!
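
    For reference, a hedged sketch of what a purely stream-level rewriting filter can look like with Python's socketserver module. It is an explicit TCP proxy, not the transparent bridged setup asked for here, so clients would have to be pointed at it, and the upstream address below is hypothetical:

        import socket
        import socketserver   # named SocketServer on Python 2
        import threading

        UPSTREAM = ('192.0.2.10', 80)   # hypothetical server to forward to

        def pipe(src, dst, rewrite=lambda data: data):
            # copy one direction of the stream, passing each chunk through the filter
            while True:
                chunk = src.recv(4096)
                if not chunk:
                    break
                dst.sendall(rewrite(chunk))
            try:
                dst.shutdown(socket.SHUT_WR)
            except socket.error:
                pass

        class RewritingProxy(socketserver.BaseRequestHandler):
            def handle(self):
                upstream = socket.create_connection(UPSTREAM)
                # client -> server data is rewritten; a same-length replacement keeps
                # the stream length unchanged, mirroring the ettercap filter above
                client_to_server = threading.Thread(
                    target=pipe,
                    args=(self.request, upstream,
                          lambda data: data.replace(b'gzip', b'    ')))
                client_to_server.start()
                pipe(upstream, self.request)   # server -> client, unmodified
                client_to_server.join()
                upstream.close()

        if __name__ == '__main__':
            socketserver.ThreadingTCPServer(('', 8080), RewritingProxy).serve_forever()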

    Read the article

  • Problem installing MySQLdb for Python 2.6

    - by apoorva
    Hi. My MySQL database is located on a remote machine, so I don't have a local copy of MySQL on my machine. When installing MySQLdb I get a registry key error (file not found):

        serverKey = _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, options['registry_key'])
        WindowsError: [Error 2] The system cannot find the file specified

    It seems to require a local copy of MySQL. How do I install MySQLdb for a database residing on another machine?

    Read the article

  • How to exclude results with get_object_or_404?

    - by googletorp
    In Django you can use exclude to create SQL similar to a not-equal filter. An example could be:

        Model.objects.exclude(status='deleted')

    Now this works great, and exclude is very flexible. Since I'm a bit lazy, I would like to get that functionality when using get_object_or_404, but I haven't found a way to do this, since you cannot use exclude on get_object_or_404. What I want to do is something like this:

        model = get_object_or_404(pk=id, status__exclude='deleted')

    But unfortunately this doesn't work, as there isn't an exclude query filter or similar. The best I've come up with so far is something like this:

        object = get_object_or_404(pk=id)
        if object.status == 'deleted':
            return HttpResponseNotFound('text')

    Doing something like that really defeats the point of using get_object_or_404, since it is no longer a handy one-liner. Alternatively I could do:

        object = get_object_or_404(pk=id, status__in=['list', 'of', 'items'])

    But that wouldn't be very maintainable, as I would need to keep the list up to date. Am I missing some trick or feature in Django that would let get_object_or_404 give the desired result?
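
    One detail worth noting (the model name and id are the question's own): get_object_or_404 accepts a Model, Manager, or QuerySet as its first argument, so the exclusion can be expressed directly and the lookup stays a one-liner:

        from django.shortcuts import get_object_or_404

        model = get_object_or_404(Model.objects.exclude(status='deleted'), pk=id)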

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts). b determines ts uniquely.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e., B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics of this is that B contains observations of occurrences of an event of the type indicated by the index. I would like to find from B the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')  4
        ('a', 'y')  7
        ('a', 'z')  5
        ('b', 'x')  7
        ('b', 'z')  7
        ('c', 'y')  8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1000 a's, assume a constant average number of b's per a (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000 times more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one I discussed above. How would I improve the performance?
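
    A hedged sketch of one middle ground: keep the binary search, but do a single vectorised searchsorted per group of A instead of one .ix lookup per row, relying on the stated guarantee that B's timestamps are sorted within each group and always contain a value at or above A's:

        import numpy as np

        def first_at_or_after(A, B):
            result = A['ts'].copy()
            for a, grp in A.groupby(level='a'):
                b_ts = np.atleast_1d(np.asarray(B.loc[a, 'ts']))   # sorted within the group
                pos = np.searchsorted(b_ts, grp['ts'].values)      # one vectorised search per group
                result.loc[grp.index] = b_ts[pos]
            return result.to_frame('ts')

        C = first_at_or_after(A, B)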

    Read the article

  • Can I create class properties during __new__ or __init__?

    - by 007brendan
    I want to do something like this. The _print_attr function is designed to be called lazily, so I don't want to evaluate it in the init and set the value to attr. I would like to make attr a property that computes _print_attr only when accessed:

        class Base(object):
            def __init__(self):
                for attr in self._edl_uniform_attrs:
                    setattr(self, attr, property(lambda self: self._print_attr(attr)))

            def _print_attr(self, attr):
                print attr

        class Child(Base):
            _edl_uniform_attrs = ['foo', 'bar']

        me = Child()
        me.foo
        me.bar

        # output:
        # "foo"
        # "bar"
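
    Two things get in the way as written: property objects are only honoured when they live on the class, not on an instance, and the lambda closes over the loop variable, so every property would end up seeing the last attr. A minimal sketch of one lazy alternative using __getattr__ (an illustration, not the only option):

        class Base(object):
            _edl_uniform_attrs = []

            def __getattr__(self, name):
                # only called when normal attribute lookup fails
                if name in self._edl_uniform_attrs:
                    return self._print_attr(name)
                raise AttributeError(name)

            def _print_attr(self, attr):
                print attr

        class Child(Base):
            _edl_uniform_attrs = ['foo', 'bar']

        me = Child()
        me.foo   # prints "foo"
        me.bar   # prints "bar"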

    Read the article

  • pandas read rotated csv files

    - by EricCoding
    Is there any function in pandas that can directly read a rotated CSV file? To be specific, the header information is in the first column instead of the first row. For example:

        A 1 2
        B 3 5
        C 6 7

    I would like the final DataFrame to be:

        A  B  C
        1  3  6
        2  5  7

    Of course we can get around this problem using some data-wrangling techniques like transpose and slicing. I am wondering whether there is a quick way in the API, but I could not find it.
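
    A minimal sketch (the file name is hypothetical, and it assumes a comma-separated file with no header row): read the labels from the first column as the index, then transpose:

        import pandas as pd

        df = pd.read_csv('rotated.csv', header=None, index_col=0).T
        # the columns of df are now A, B, C, taken from the first column of the file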

    Read the article
