Search Results

Search found 13534 results on 542 pages for 'python 3 3'.

Page 397/542 | < Previous Page | 393 394 395 396 397 398 399 400 401 402 403 404  | Next Page >

  • How to fetch more than 1000 entities, non-key-based?

    - by user291071
    If I should be approaching this problem through a different method, please suggest so. I am creating an item-based collaborative filter. I populate the db with the LinkRating2 class, and for each link there are more than 1000 users whose ratings I need to fetch to perform calculations, which I then use to create another table. So I need to fetch more than 1000 entities for a given link. For instance, say over 1000 users rated 'link1': there will be over 1000 instances of this class with that link property that I need to fetch. How would I complete this example?

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        query = LinkRating2.all()
        link1 = 'link string name'
        a = query.filter('link = ', link1)
        aa = a.fetch(1000)  # how would I get more than 1000 for a given link1 as shown?

    The key-based example from another post is below, but I need a method for a subset, not keys:

        class MyModel(db.Expando):
            @classmethod
            def count_all(cls):
                """Count *all* of the rows (without maxing out at 1000)."""
                count = 0
                query = cls.all().order('__key__')
                while count % 1000 == 0:
                    current_count = query.count()
                    if current_count == 0:
                        break
                    count += current_count
                    if current_count == 1000:
                        last_key = query.fetch(1, 999)[0].key()
                        query = query.filter('__key__ > ', last_key)
                return count
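
    One common workaround on the old datastore API combines the equality filter with key-based paging, since an equality filter may be mixed with a __key__ inequality and __key__ ordering. A sketch under that assumption, reusing the LinkRating2 model above:

        def fetch_all_for_link(link1):
            """Fetch every LinkRating2 for link1, 1000 at a time."""
            results = []
            query = (LinkRating2.all()
                     .filter('link = ', link1)
                     .order('__key__'))
            batch = query.fetch(1000)
            while batch:
                results.extend(batch)
                last_key = batch[-1].key()
                query = (LinkRating2.all()
                         .filter('link = ', link1)
                         .filter('__key__ > ', last_key)
                         .order('__key__'))
                batch = query.fetch(1000)
            return results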

    Read the article

  • Django filter by hour

    - by DJPy
    I found this link: http://code.djangoproject.com/attachment/ticket/8424/time_filters.diff and changed my Django 1.2 files by adding what you can see there. But now, when I try to write Entry.objects.filter(pub_date__hour=x), the result is the following error:

        Field has invalid lookup: hour

    What else should I do to make it work? (Sorry for my English.)
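
    If patching Django proves fragile, one workaround that needs no framework changes is to push the hour extraction into SQL with extra(). A sketch; the EXTRACT syntax assumes a PostgreSQL backend (MySQL would use HOUR(pub_date) instead):

        entries = Entry.objects.extra(
            where=["EXTRACT(hour FROM pub_date) = %s"],
            params=[x],
        )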

    Read the article

  • Clear sqlalchemy reflection cache

    - by OrganicPanda
    Hi all, I'm using sqlalchemy's reflection tools to get a Table object. I do this because these tables are dynamic and tables/columns can change. Here's the code I'm using:

        def getTableByReflection(self, tableName, metadata, engine):
            return Table(tableName, metadata, autoload=True, autoload_with=engine)

    The problem is that when the above code is run twice it seems to return the same results regardless of whether or not the columns have changed. I have tried refreshing using mysession.refresh(mytable), but that fails because the table is not attached to any metadata - which makes sense, but then why am I seeing cached results? Is there any way to tell the metadata/engine/session to forget about this table and let me load it cleanly?
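
    MetaData caches Table objects by name, so reflecting the same name twice returns the cached definition. A sketch of one way to force a fresh load, using MetaData.remove() from SQLAlchemy's public API (exact behaviour can vary across versions):

        def getTableByReflection(self, tableName, metadata, engine):
            existing = metadata.tables.get(tableName)
            if existing is not None:
                metadata.remove(existing)  # drop the cached definition
            return Table(tableName, metadata, autoload=True, autoload_with=engine)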

    Read the article

  • Can Django be used for web services?

    - by alex
    My friend said, "Pylons is so much better for web services." My other friend said, "You can modify Django in a way to do exactly whatever you like." In Django, what needs to be modified (urls.py? model classes? settings?) in order to do "web services" with APIs and REST and versioning, etc.?
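
    For what it's worth, nothing in Django itself needs modifying: a REST-style endpoint is just a URL pattern plus a view that returns a serialized response. A minimal hand-rolled sketch (the Entry model and URL layout are illustrative):

        import json
        from django.http import HttpResponse

        def api_entry_detail(request, pk):
            entry = Entry.objects.get(pk=pk)
            payload = {'id': entry.pk, 'title': entry.title}
            return HttpResponse(json.dumps(payload),
                                content_type='application/json')

        # urls.py -- versioning lives in the URL prefix:
        # url(r'^api/v1/entries/(?P<pk>\d+)/$', api_entry_detail)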

    Read the article

  • Transferring binary file from web server to client

    - by Yan Cheng CHEOK
    Usually, when I want to transfer a text file from the web server to the client, here is what I do:

        import cgi

        print "Content-Type: text/plain"
        print "Content-Disposition: attachment; filename=TEST.txt"
        print

        filename = "C:\\TEST.TXT"
        f = open(filename, 'r')
        for line in f:
            print line

    This works fine for an ANSI file. However, say I have a binary file a.exe (this file is in a secret path on the web server, and the user shall not have direct access to that directory). I wish to use a similar method to transfer it. How can I do so? What content type should I use? Using print seems to corrupt the content received on the client side. What is the correct method?
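
    A sketch of one way to stream binary safely from the same CGI setup: open the file in binary mode, send an octet-stream content type, and write raw bytes with sys.stdout.write (print appends newlines, which corrupts binaries). The path and filename are illustrative:

        import os
        import sys

        # On Windows, stdout must be put into binary mode or CRLF
        # translation will mangle the bytes.
        if sys.platform == 'win32':
            import msvcrt
            msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

        sys.stdout.write("Content-Type: application/octet-stream\r\n")
        sys.stdout.write("Content-Disposition: attachment; filename=a.exe\r\n\r\n")

        with open("C:\\secret\\a.exe", 'rb') as f:
            while True:
                chunk = f.read(8192)
                if not chunk:
                    break
                sys.stdout.write(chunk)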

    Read the article

  • A faster alternative to Pandas `isin` function

    - by user3576212
    I have a very large data frame df that looks like:

          ID  Value1  Value2
        1345     3.2     332
        1355     2.2      32
        2346     1.0      11
        3456     8.9     322

    And I have a list ID_list that contains a subset of IDs. I need the subset of df for the IDs contained in ID_list. Currently, I am using df_sub = df[df.ID.isin(ID_list)] to do it, but it takes a lot of time. The IDs contained in ID_list don't have any pattern, so they're not within a certain range. (And I need to apply the same operation to many similar dataframes.) I was wondering if there is any faster way to do this. Would it help a lot to make ID the index? Thanks!
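
    Making ID the index is the usual speed-up: label-based selection uses a hash-backed index lookup instead of scanning the column on every call, which pays off when the operation is repeated across many frames. A sketch:

        import pandas as pd

        df_indexed = df.set_index('ID')
        # .loc raises on missing labels, so intersect first in case
        # ID_list contains IDs absent from this frame.
        present = df_indexed.index.intersection(ID_list)
        df_sub = df_indexed.loc[present]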

    Read the article

  • GQL request BadArgument error: how to get around it in my case?

    - by awegawef
    My query is essentially the following:

        entries = (Entry.all()
                   .order("-votes")
                   .order("-date")
                   .filter("votes >", VOTE_FILTER)
                   .fetch(PAGE_SIZE + 1, page * PAGE_SIZE))

    I want to grab N of the latest entries that have a voting score above some benchmark (VOTE_FILTER). Google currently says that I cannot filter on 'votes' because I order by 'date'. I don't see a way to do this the way I want to, so I'd appreciate any advice.
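
    The datastore rule behind the error is that a property used in an inequality filter must also be the first sort order. One workaround is to sort by votes first and re-sort the fetched page by date in memory; a sketch:

        page_entries = (Entry.all()
                        .filter("votes >", VOTE_FILTER)
                        .order("-votes")
                        .order("-date")
                        .fetch(PAGE_SIZE + 1, page * PAGE_SIZE))
        # restore strict newest-first ordering within the fetched page
        page_entries = sorted(page_entries,
                              key=lambda e: e.date, reverse=True)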

    Read the article

  • Online Game programming in Google App Engine: AI

    - by Hortinstein
    I am currently in the planning stages of a game for Google App Engine, but cannot wrap my head around how I am going to handle AI. I intend to have persistent NPCs that will move about the map, but short of writing a program that generates the same XML requests I use to control player actions and running it on another server, I am stuck on how to do it. I have looked at the Task Queue feature, but since long-running processes are not an option on App Engine, I am a little stuck. I intend to run multiple server instances with 200+ persistent NPC entities that I will need to update. Most of the action is slowly roaming around based on player movements/concentrations, and attacking close-range players (you can probably guess the type of game I'm developing).
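
    One pattern that fits App Engine's short-request model is to chain small task-queue "ticks", each updating a batch of NPCs and re-enqueuing itself. A sketch; the NPC model and update_npc() helper are hypothetical:

        from google.appengine.api import taskqueue

        def ai_tick(request):
            npcs = NPC.all().fetch(50)   # small batch keeps the request short
            for npc in npcs:
                update_npc(npc)          # hypothetical per-NPC AI step
            # re-enqueue the next tick; countdown throttles the sim rate
            taskqueue.add(url='/tasks/ai_tick', countdown=5)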

    Read the article

  • Make a tree based on the key of each element in a list

    - by cocobear
    >>> s
    [{'000000': [['apple', 'pear']]}, {'100000': ['good', 'bad']}, {'200000': ['yeah', 'ogg']}, {'300000': [['foo', 'foo']]}, {'310000': [['#'], ['#']]}, {'320000': ['$', ['1']]}, {'321000': [['abc', 'abc']]}, {'322000': [['#'], ['#']]}, {'400000': [['yeah', 'baby']]}]
    >>> for i in s:
    ...     print i
    ...
    {'000000': [['apple', 'pear']]}
    {'100000': ['good', 'bad']}
    {'200000': ['yeah', 'ogg']}
    {'300000': [['foo', 'foo']]}
    {'310000': [['#'], ['#']]}
    {'320000': ['$', ['1']]}
    {'321000': [['abc', 'abc']]}
    {'322000': [['#'], ['#']]}
    {'400000': [['yeah', 'baby']]}

    I want to make a tree based on the key of each element in the list. The result, in logic, will be (indentation showing the nesting implied by the keys):

        {'000000': [['apple', 'pear']]}
        {'100000': ['good', 'bad']}
        {'200000': ['yeah', 'ogg']}
        {'300000': [['foo', 'foo']]}
            {'310000': [['#'], ['#']]}
            {'320000': ['$', ['1']]}
                {'321000': [['abc', 'abc']]}
                {'322000': [['#'], ['#']]}
        {'400000': [['yeah', 'baby']]}

    Perhaps a nested list can implement this, or do I need a tree type?
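
    One reading of the keys is that each non-zero leading digit adds a level ('321000' sits under '320000', which sits under '300000'). Under that assumption, plain nested dicts are enough; a sketch:

        def key_depth(key):
            """Depth = number of leading non-zero digits ('321000' -> 3)."""
            stripped = key.rstrip('0')
            return len(stripped) if stripped else 1

        def build_tree(items):
            root = {'children': []}
            stack = [(0, root)]               # (depth, node) path to the root
            for item in items:
                key, value = item.items()[0]
                depth = key_depth(key)
                while stack[-1][0] >= depth:  # climb back up to the parent
                    stack.pop()
                node = {'key': key, 'value': value, 'children': []}
                stack[-1][1]['children'].append(node)
                stack.append((depth, node))
            return root['children']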

    Read the article

  • Can I put my sqlite connection and cursor in a function?

    - by steini
    I was thinking I'd try to make my sqlite db connection a function instead of copy/pasting the ~6 lines needed to connect and execute a query all over the place. I'd like to make it versatile so I can use the same function for create/select/insert/etc. Below is what I have tried. The 'INSERT' and 'CREATE TABLE' queries are working, but if I do a 'SELECT' query, how can I work with the values it fetches outside of the function? Usually I'd like to print the values it fetches and also do other things with them. When I do it like below I get an error:

        Traceback (most recent call last):
          File "C:\Users\steini\Desktop\py\database\test3.py", line 15, in <module>
            for row in connection('testdb45.db', "select * from users"):
        ProgrammingError: Cannot operate on a closed database.

    So I guess the connection needs to be open so I can get the values from the cursor, but I need to close it so the file isn't always locked. Here's my testing code:

        import sqlite3

        def connection(db, arg):
            conn = sqlite3.connect(db)
            conn.execute('pragma foreign_keys = on')
            cur = conn.cursor()
            cur.execute(arg)
            conn.commit()
            conn.close()
            return cur

        connection('testdb.db', "create table users ('user', 'email')")
        connection('testdb.db', "insert into users ('user', 'email') values ('joey', 'foo@bar')")

        for row in connection('testdb45.db', "select * from users"):
            print row

    How can I make this work?
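
    A sketch of one fix: materialize the rows with fetchall() while the connection is still open, and return the list instead of the cursor, so the caller never touches a closed database:

        import sqlite3

        def connection(db, arg):
            conn = sqlite3.connect(db)
            conn.execute('pragma foreign_keys = on')
            cur = conn.cursor()
            cur.execute(arg)
            rows = cur.fetchall()   # grab results before closing
            conn.commit()
            conn.close()
            return rows

        for row in connection('testdb.db', "select * from users"):
            print row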

    Read the article

  • Why will this for loop not return one field from the list rather than the whole list?

    - by Dick Eshelman
    import csv

        """sample row = 10/6/2010,73.42,74.43,72.9,74.15,2993500"""

        filename_in = 'c:/python27/scripts/fiverows.csv'
        reader = csv.reader(open(filename_in, "rb"), dialect="excel",
                            delimiter="\t", quoting=csv.QUOTE_MINIMAL)

        for row in reader:
            for item in row:
                print 'row = ', row
                print 'item = ', item

    When you run this script and print the row, you get the sample row returned in [] as a list. When you print the item, you get the sample row as an unquoted string. Why do I not get each field, i.e. (10/6/2010), (73.42), etc., returned as an item? How do I return a single item?
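
    The sample row is comma-separated, but the reader is told to split on tabs, so each line comes back as a one-item list whose single item is the whole row. A sketch of the fix (the "excel" dialect already defaults to commas):

        import csv

        with open('c:/python27/scripts/fiverows.csv', 'rb') as f:
            reader = csv.reader(f, dialect="excel")
            for row in reader:
                for item in row:
                    print 'item = ', item   # one field at a time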

    Read the article

  • How to change header data

    - by Moayyad Yaghi
    Hello, I have the following class:

        class AssetTableModel(QtCore.QAbstractTableModel):
            def __init__(self, filename=''):
                super(AssetTableModel, self).__init__()
                self.fileName = filename
                self.dirty = False
                self.assets = []
                self.setHeaderData(0, QtCore.Qt.Horizontal,
                                   QtCore.QVariant('moayyad'),
                                   QtCore.Qt.EditRole)

    I need to change the headers of the columns or the rows. I used self.setHeaderData(), but it's not working. I have a table that consists of only 2 columns and 2 rows. Is there any other function that changes headers? Please help. Thanks in advance.
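
    In Qt's model/view framework, a QAbstractTableModel subclass usually supplies header text by reimplementing headerData() rather than by calling setHeaderData(). A minimal sketch with made-up column labels:

        from PyQt4 import QtCore

        class AssetTableModel(QtCore.QAbstractTableModel):
            HEADERS = ['Name', 'Value']   # illustrative labels

            def headerData(self, section, orientation,
                           role=QtCore.Qt.DisplayRole):
                if role != QtCore.Qt.DisplayRole:
                    return QtCore.QVariant()
                if orientation == QtCore.Qt.Horizontal:
                    return QtCore.QVariant(self.HEADERS[section])
                return QtCore.QVariant(section + 1)   # row numbers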

    Read the article

  • threading.Event wait function not signaled when subclassing Process class

    - by user1313404
    The following code never gets past the wait() call in run(). I'm certain I'm doing something ridiculously stupid, but since I'm not smart enough to figure out what, I'm asking. Any help is appreciated. Here is the code:

        import threading
        import multiprocessing
        from multiprocessing import Process

        class SomeClass(Process):
            def __init__(self):
                Process.__init__(self)
                self.event = threading.Event()
                self.event.clear()

            def continueExec(self):
                print multiprocessing.current_process().name
                print self
                print "Set:" + str(self.event.is_set())
                self.event.set()
                print "Set:" + str(self.event.is_set())

            def run(self):
                print "I'm running with it"
                print multiprocessing.current_process().name
                self.event.wait()
                print "I'm further than I was"
                print multiprocessing.current_process().name
                self.event.clear()

        def main():
            s_list = []
            for t in range(3):
                s = SomeClass()
                print "s:" + str(s)
                s_list.append(s)
                s.start()
            raw_input("Press enter to send signal")
            for t in range(3):
                print "s_list[" + str(t) + "]:" + str(s_list[t])
                s_list[t].continueExec()
            raw_input("Press enter to send signal")
            for t in range(3):
                s_list[t].join()
            print "All Done"

        if __name__ == "__main__":
            main()
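
    The likely culprit is that threading.Event lives in one process's memory: after the fork, the child has its own copy, so event.set() in the parent never reaches the wait() in the child. A sketch of the usual fix, swapping in multiprocessing.Event, which is backed by shared state that crosses the process boundary:

        import multiprocessing
        from multiprocessing import Process

        class SomeClass(Process):
            def __init__(self):
                Process.__init__(self)
                self.event = multiprocessing.Event()   # shared across processes

            def continueExec(self):
                self.event.set()

            def run(self):
                print "waiting in", multiprocessing.current_process().name
                self.event.wait()
                print "released in", multiprocessing.current_process().name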

    Read the article

  • How do I use multiple settings files in Django with multiple sites on one server?

    - by William Bing Hua
    I have an EC2 instance running Ubuntu 14.04 and I want to host two sites from it. For my first site I have two settings files, production_settings.py and settings.py (for local development). I import the local settings into the production settings and override anything with the production values. Since my production settings file does not use the default settings.py name, I have to create an environment variable:

        DJANGO_SETTINGS_MODULE='site1.production_settings'

    However, because of this, whenever I try to start my second site it says:

        No module named site1.production_settings

    I am assuming this is due to me setting the environment variable. Another problem is that I won't be able to use different settings files for different sites. How do I use two different settings files for two different websites?
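
    One common arrangement is to set DJANGO_SETTINGS_MODULE inside each site's own WSGI entry point instead of as one global shell variable, so the two processes never share it. A sketch (module names are illustrative); note the direct assignment rather than setdefault(), which would silently keep a stale global value:

        # site1/wsgi.py
        import os
        os.environ['DJANGO_SETTINGS_MODULE'] = 'site1.production_settings'

        from django.core.wsgi import get_wsgi_application
        application = get_wsgi_application()

        # site2/wsgi.py is identical except for:
        # os.environ['DJANGO_SETTINGS_MODULE'] = 'site2.production_settings'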

    Read the article

  • Doing a count over a filter query efficiently in django

    - by apple_pie
    Hello, Django newbie here. I need to do a count over a certain filter in a Django model. If I do it like so:

        my_model.objects.filter(...).count()

    I'm guessing it does the SQL query that retrieves all the rows and only afterwards does the count. To my knowledge it's much more efficient to do the count without retrieving those rows, like so: "SELECT COUNT(*) FROM ...". Is there a way to do so in Django?
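
    In fact QuerySet.count() already pushes the aggregation into SQL: querysets are lazy, and count() issues a single SELECT COUNT(*) without pulling rows into Python. A quick way to verify (the filter field is illustrative; connection.queries is only populated when DEBUG=True):

        from django.db import connection

        n = MyModel.objects.filter(active=True).count()
        print connection.queries[-1]['sql']   # SELECT COUNT(*) FROM ...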

    Read the article

  • Celery / Django Single Tasks being run multiple times

    - by felix001
    I'm facing an issue where I'm placing a task into the queue and it is being run multiple times. From the celery logs I can see that the same worker is running the task:

        [2014-06-06 15:12:20,731: INFO/MainProcess] Received task: input.tasks.add_queue
        [2014-06-06 15:12:20,750: INFO/Worker-2] starting runner..
        [2014-06-06 15:12:20,759: INFO/Worker-2] collection started
        [2014-06-06 15:13:32,828: INFO/Worker-2] collection complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] generation of steps complete
        [2014-06-06 15:13:32,836: INFO/Worker-2] update created
        [2014-06-06 15:13:33,655: INFO/Worker-2] email sent
        [2014-06-06 15:13:33,656: INFO/Worker-2] update created
        [2014-06-06 15:13:34,420: INFO/Worker-2] email sent
        [2014-06-06 15:13:34,421: INFO/Worker-2] FINISH - Success

    However, when I view the actual logs of the application, it shows 5-6 log lines for each step (??). I'm using Django 1.6 with RabbitMQ. The method for placing a task into the queue is calling delay() on a function. This function (with the task decorator added) then calls a class which is run. Does anyone have an idea of the best way to troubleshoot this?

    Edit: as requested, here's the code. In views.py I'm sending my data to the queue via:

        from input.tasks import add_queue_project

        add_queue_project.delay(data)

    tasks.py:

        from celery.decorators import task

        @task()
        def add_queue_project(data):
            """run project"""
            logger = logging_setup(app="project")
            logger.info("starting project runner..")
            f = project_runner(data)
            f.main()

        class project_runner():
            """main project runner"""

            def __init__(self, data):
                self.data = data
                self.logger = logging_setup(app="project")

            def main(self):
                ...

    settings.py:

        THIRD_PARTY_APPS = (
            'south',           # Database migration helpers
            'crispy_forms',    # Form layouts
            'rest_framework',
            'djcelery',
        )

        import djcelery
        djcelery.setup_loader()

        BROKER_HOST = "127.0.0.1"
        BROKER_PORT = 5672      # default RabbitMQ listening port
        BROKER_USER = "test"
        BROKER_PASSWORD = "test"
        BROKER_VHOST = "test"

        CELERY_BACKEND = "amqp" # telling Celery to report results back to RabbitMQ
        CELERY_RESULT_DBURI = ""
        CELERY_IMPORTS = ("input.tasks", )

    The line I'm running to start celery is:

        python2.7 manage.py celeryd -l info

    Thanks,
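
    One cause consistent with "5-6 log lines per step" is not the task running several times but the logging helper attaching a fresh handler on every call, so each message is emitted once per accumulated handler. A sketch of a guard, assuming logging_setup() is a helper along the lines implied by the question:

        import logging

        def logging_setup(app):
            logger = logging.getLogger(app)
            if not logger.handlers:      # attach handlers only once
                handler = logging.StreamHandler()
                handler.setFormatter(
                    logging.Formatter('%(asctime)s %(message)s'))
                logger.addHandler(handler)
                logger.setLevel(logging.INFO)
            return logger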

    Read the article

  • [Errno 10061] No connection could be made because the target machine actively refused it

    - by user551717
    I get this error every time I try to run my program and connect to my local machine. I am a noob, so it's probably a simple mistake somewhere. Here is the code causing the error:

        def connect(self):
            self.conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.conn.connect((self.host, self.port))

    The host and port are defined. Why is it giving me this error?

        [Errno 10061] No connection could be made because the target machine actively refused it
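
    Errno 10061 generally means nothing is listening on that (host, port) pair - the OS refuses the connection before the program logic even matters. A quick sanity check is to start a minimal listener first (the address values are illustrative):

        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(('127.0.0.1', 5000))
        server.listen(1)
        print 'listening on 127.0.0.1:5000'
        conn, addr = server.accept()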

    Read the article

  • Django: Get bound IP address inside settings.py

    - by Silver Light
    Hello! I want to enable debugging (DEBUG = True) for my Django project only if it runs on localhost. How can I get the user's IP address inside settings.py? I would like something like this to work:

        # Debugging only on localhost
        if user_ip == '127.0.0.1':
            DEBUG = True
        else:
            DEBUG = False

    How do I put the user's IP address into a user_ip variable inside the settings.py file?
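
    A caveat worth noting: settings.py is imported once at startup, before any request exists, so there is no client IP to read there. A common substitute is to key DEBUG on the machine the server runs on, and use INTERNAL_IPS for per-request checks; a sketch (the hostname is illustrative):

        import socket

        DEBUG = socket.gethostname() in ('my-dev-laptop',)

        # Checked per request by tools such as the debug toolbar:
        INTERNAL_IPS = ('127.0.0.1',)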

    Read the article

  • Fast image coordinate lookup in Numpy

    - by victor
    I've got a big numpy array full of coordinates (about 400):

        [[102, 234],
         [304, 104],
         .... ]

    And a numpy 2d array my_map of size 800x800. What's the fastest way to look up the coordinates given in that array? I tried things like paletting as described in this post: http://opencvpython.blogspot.com/2012/06/fast-array-manipulation-in-numpy.html but couldn't get it to work. I was also thinking about turning each coordinate into a linear index of the map and then piping it straight into my_map like so:

        my_map[linearized_coords]

    but I couldn't get vectorize to properly translate the coordinates into a linear fashion. Any ideas?
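
    NumPy's integer fancy indexing does this in one vectorized step: pass the row coordinates and the column coordinates as two separate index arrays. A sketch, assuming coords has shape (N, 2) ordered (row, col):

        import numpy as np

        my_map = np.random.rand(800, 800)
        coords = np.array([[102, 234], [304, 104]])

        values = my_map[coords[:, 0], coords[:, 1]]

        # The linear-index route from the question also works:
        linear = np.ravel_multi_index((coords[:, 0], coords[:, 1]),
                                      my_map.shape)
        values2 = my_map.ravel()[linear]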

    Read the article

  • subplot matplotlib wrong syntax

    - by madptr
    I am using matplotlib to subplot in a loop. For instance, I would like to subplot 49 data sets, and from the docs I implemented it this way:

        import numpy as np
        import matplotlib.pyplot as plt

        X1 = list(range(0, 10000, 1))
        X1 = [x / float(10) for x in X1]
        nb_mix = 2
        parameters = []

        for i in range(49):
            param = []
            Y = [0] * len(X1)
            for j in range(nb_mix):
                mean = 5 * (1 + (np.random.rand() * 2 - 1) * 0.5)
                var = 10 * (1 + np.random.rand() * 2 - 1)
                scale = 5 * (1 + (np.random.rand() * 2 - 1) * 0.5)
                Y = [Y[k] + scale * np.exp(-((X1[k] - mean) / float(var)) ** 2)
                     for k in range(len(X1))]
                param = param + [[mean, var, scale]]
            ax = plt.subplot(7, 7, i + 1)
            ax.plot(X1, Y)
            parameters = parameters + [param]

        ax.show()

    However, I get an index out of range error from i = 0 onwards. What can I do better to make it work? Thanks
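
    Two things stand out: show() belongs to the pyplot module (or a Figure), not to an Axes object, and the inner math vectorizes naturally over a NumPy array. A sketch with those corrections:

        import numpy as np
        import matplotlib.pyplot as plt

        X1 = np.arange(0, 10000) / 10.0

        for i in range(49):
            Y = np.zeros_like(X1)
            for j in range(2):   # nb_mix
                mean = 5 * (1 + (np.random.rand() * 2 - 1) * 0.5)
                var = 10 * (1 + np.random.rand() * 2 - 1)
                scale = 5 * (1 + (np.random.rand() * 2 - 1) * 0.5)
                Y += scale * np.exp(-((X1 - mean) / var) ** 2)
            ax = plt.subplot(7, 7, i + 1)
            ax.plot(X1, Y)

        plt.show()   # called once, on pyplot, after the loop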

    Read the article
