Search Results

Search found 13534 results on 542 pages for 'python 2 x'.

Page 372/542 | < Previous Page | 368 369 370 371 372 373 374 375 376 377 378 379  | Next Page >

  • SQLAlchemy Expression Language problem

    - by Torkel
    I'm trying to convert this to something SQLAlchemy expression language compatible. I don't know if it's possible out of the box and am hoping someone more experienced can help me along. The backend is PostgreSQL, and if I can't make it as an expression I'll create a string instead.

        SELECT DISTINCT date_trunc('month', x.x) as date,
               COALESCE(b.res1, 0) AS res1,
               COALESCE(b.res2, 0) AS res2
        FROM generate_series(
            date_trunc('year', now() - interval '1 years'),
            date_trunc('year', now() + interval '1 years'),
            interval '1 months'
        ) AS x
        LEFT OUTER JOIN (
            SELECT date_trunc('month', access_datetime) AS when,
                   count(NULLIF(resource_id != 1, TRUE)) AS res1,
                   count(NULLIF(resource_id != 2, TRUE)) AS res2
            FROM tracking_entries
            GROUP BY date_trunc('month', access_datetime)
        ) AS b ON (date_trunc('month', x.x) = b.when)

    First of all, I have a class TrackingEntry mapped to tracking_entries. The select statement within the outer join can be converted to something like (pseudocode):

        from sqlalchemy.sql import func, select
        from datetime import datetime, timedelta

        stmt = select([
            func.date_trunc('month', TrackingEntry.resource_id).label('when'),
            func.count(func.nullif(TrackingEntry.resource_id != 1, True)).label('res1'),
            func.count(func.nullif(TrackingEntry.resource_id != 2, True)).label('res2')
        ], group_by=[func.date_trunc('month', TrackingEntry.access_datetime), ])

    Considering the outer select statement I have no idea how to build it; my guess is something like:

        outer = select([
            func.distinct(func.date_trunc('month', ?)).label('date'),
            func.coalesce(?.res1, 0).label('res1'),
            func.coalesce(?.res2, 0).label('res2')
        ], from_obj=[
            func.generate_series(
                datetime.now(),
                datetime.now() + timedelta(days=365),
                timedelta(days=1)
            ).label(x)
        ])

    Then I suppose I have to link those statements together without using foreign keys:

        outer.outerjoin(stmt???).??(func.date_trunc('month', ?.?), ?.when)

    Anyone got any suggestions, or even better a solution?
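
    A rough sketch of one way to assemble the outer query in the expression language: wrap the aggregate stmt from above in an alias, build generate_series() as its own selectable, and outer-join the two on the truncated month. This is untested, and the interval arguments are passed as literal SQL:

        from sqlalchemy.sql import func, select, literal_column

        b = stmt.alias('b')  # the aggregate subquery defined above

        year_back = func.date_trunc('year', func.now() - literal_column("interval '1 year'"))
        year_ahead = func.date_trunc('year', func.now() + literal_column("interval '1 year'"))

        series = select([
            func.generate_series(year_back, year_ahead,
                                 literal_column("interval '1 month'")).label('x')
        ]).alias('x')

        month = func.date_trunc('month', series.c.x)

        outer = select([
            month.label('date'),
            func.coalesce(b.c.res1, 0).label('res1'),
            func.coalesce(b.c.res2, 0).label('res2')
        ]).distinct().select_from(
            series.outerjoin(b, month == b.c.when)
        )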

    Read the article

  • cx_Oracle and output variables

    - by Tim
    I'm trying to do this against an Oracle 10 database:

        cursor = connection.cursor()
        lOutput = cursor.var(cx_Oracle.STRING)
        cursor.execute("""
            BEGIN
                %(out)s := 'N';
            END;""", {'out' : lOutput})
        print lOutput.value

    but I'm getting

        DatabaseError: ORA-01036: illegal variable name/number

    Is it possible to define PL/SQL blocks in cx_Oracle this way?
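
    A minimal sketch, assuming connection is an already-open cx_Oracle connection: cx_Oracle binds parameters with :name placeholders rather than the %(name)s pyformat style, so the block can be written like this:

        import cx_Oracle

        cursor = connection.cursor()
        out_var = cursor.var(cx_Oracle.STRING)

        # bind the output variable under the name used in the PL/SQL block
        cursor.execute("""
            BEGIN
                :out := 'N';
            END;""", {'out': out_var})

        print(out_var.value)   # 'N'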

    Read the article

  • Infinite loop when adding a row to a list in a class in python3

    - by Margaret
    I have a script which contains two classes. (I'm obviously deleting a lot of stuff that I don't believe is relevant to the error I'm dealing with.) The eventual task is to create a decision tree, as I mentioned in this question. Unfortunately, I'm getting an infinite loop, and I'm having difficulty identifying why. I've identified the line of code that's going haywire, but I would have thought the iterator and the list I'm adding to would be different objects. Is there some side effect of list's .append functionality that I'm not aware of? Or am I making some other blindingly obvious mistake?

        class Dataset:
            individuals = [] #Becomes a list of dictionaries, in which each dictionary is a row from the CSV with the headers as keys
            def field_set(self):
                #Returns a list of the fields in individuals[] that can be used to split the data (i.e. have more than one value amongst the individuals)
            def classified(self, predicted_value):
                #Returns True if all the individuals have the same value for predicted_value
            def fields_exhausted(self, predicted_value):
                #Returns True if all the individuals are identical except for predicted_value
            def lowest_entropy_value(self, predicted_value):
                #Returns the field that will reduce entropy the most (see http://en.wikipedia.org/wiki/Entropy_%28information_theory%29)
            def __init__(self, individuals=[]):

    and

        class Node:
            ds = Dataset() #The data that is associated with this Node
            links = [] #List of Nodes, the offspring Nodes of this node
            level = 0 #Tree depth of this Node
            split_value = '' #Field used to split out this Node from the parent node
            node_value = '' #Value used to split out this Node from the parent Node

            def split_dataset(self, split_value):
                fields = [] #List of options for split_value amongst the individuals
                datasets = {} #Dictionary of Datasets, each one with a value from fields[] as its key
                for field in self.ds.field_set()[split_value]: #Populates the keys of fields[]
                    fields.append(field)
                    datasets[field] = Dataset()
                for i in self.ds.individuals: #Adds individuals to the datasets.dataset that matches their result for split_value
                    datasets[i[split_value]].individuals.append(i) #<---Causes an infinite loop on the second hit
                for field in fields: #Creates subnodes from each of the datasets.Dataset options
                    self.add_subnode(datasets[field],split_value,field)

            def add_subnode(self, dataset, split_value='', node_value=''):

            def __init__(self, level, dataset=Dataset()):

    My initialisation code is currently:

        if __name__ == '__main__':
            filename = (sys.argv[1]) #Takes in a CSV file
            predicted_value = "# class" #Identifies the field from the CSV file that should be predicted
            base_dataset = parse_csv(filename) #Turns the CSV file into a list of lists
            parsed_dataset = individual_list(base_dataset) #Turns the list of lists into a list of dictionaries
            root = Node(0, Dataset(parsed_dataset)) #Creates a root node, passing it the full dataset
            root.split_dataset(root.ds.lowest_entropy_value(predicted_value)) #Performs the first split, creating multiple subnodes
            n = root.links[0]
            n.split_dataset(n.ds.lowest_entropy_value(predicted_value)) #Attempts to split the first subnode.
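
    The loop is usually a symptom of shared mutable state rather than of .append itself: individuals = [] at class level and the individuals=[] default argument are each created once and shared, so every Dataset() built inside split_dataset can end up holding the very list that the loop is iterating over. A minimal sketch of the usual fix, making the containers per-instance:

        class Dataset:
            def __init__(self, individuals=None):
                # a fresh list per instance; a default of [] would be shared
                # by every Dataset constructed without an argument
                self.individuals = list(individuals) if individuals else []

        class Node:
            def __init__(self, level, dataset=None):
                self.level = level
                self.ds = dataset if dataset is not None else Dataset()
                self.links = []   # per-instance, not shared across Nodes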

    Read the article

  • How do you determine an acceptable response time for App Engine DB requests?

    - by qiq
    According to this discussion of Google App Engine on Hacker News, "A DB (read) request takes over 100ms on the datastore. That's insane and unusable for about 90% of applications." How do you determine what is an acceptable response time for a DB read request? I have been using App Engine without noticing any issues with DB responsiveness. But, on the other hand, I'm not sure I would even know what to look for in that regard :)

    Read the article

  • Get particular row as series from pandas dataframe

    - by Pratyush
    How do we get a particular filtered row as a series? Example dataframe:

        >>> df = pd.DataFrame({'date': [20130101, 20130101, 20130102], 'location': ['a', 'a', 'c']})
        >>> df
               date location
        0  20130101        a
        1  20130101        a
        2  20130102        c

    I need to select the row where location is c, as a series. I tried:

        row = df[df["location"] == "c"].head(1)  # gives a dataframe
        row = df.ix[df["location"] == "c"]       # also gives a dataframe with single row

    In either case I can't get the row as a series.
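
    A short sketch of two ways to get the matching row back as a Series (both are standard pandas indexing):

        import pandas as pd

        df = pd.DataFrame({'date': [20130101, 20130101, 20130102],
                           'location': ['a', 'a', 'c']})

        # take the first row of the filtered frame as a Series
        row = df[df["location"] == "c"].iloc[0]

        # or collapse a single-row frame down to a Series
        row = df[df["location"] == "c"].squeeze()

        print(type(row))   # <class 'pandas.core.series.Series'>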

    Read the article

  • SQLAlchemy introspection of ORM classes/objects

    - by Adam Batkin
    I am looking for a way to introspect SQLAlchemy ORM classes/entities to determine the types and other constraints (like maximum lengths) of an entity's properties. For example, if I have a declarative class:

        class User(Base):
            __tablename__ = "USER_TABLE"
            id = sa.Column(sa.types.Integer, primary_key=True)
            fullname = sa.Column(sa.types.String(100))
            username = sa.Column(sa.types.String(20), nullable=False)
            password = sa.Column(sa.types.String(20), nullable=False)
            created_timestamp = sa.Column(sa.types.DateTime, nullable=False)

    I would want to be able to find out that the 'fullname' field should be a String with a maximum length of 100, and is nullable. And the 'created_timestamp' field is a DateTime and is not nullable.
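
    A quick sketch of how that information can be read off the declarative class: the mapped Table is available as User.__table__, and each Column carries its type, length and nullability:

        # iterate over all mapped columns
        for col in User.__table__.columns:
            length = getattr(col.type, 'length', None)   # only String-like types have it
            print(col.name, col.type.__class__.__name__, length, col.nullable)

        # or look at one column directly
        col = User.__table__.columns['fullname']
        assert col.type.length == 100
        assert col.nullable is True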

    Read the article

  • How to use regular expressions to pull a substring? (screen scraping)

    - by Diego
    Hey guys, I'm really trying to understand regular expressions while scraping a site. I've been using them in my code enough to pull the following, but am stuck here. I need to quickly grab this:

        http://www.example.com/online/store/TitleDetail?detail&sku=123456789

    from this:

        ('<a href="javascript:if(handleDoubleClick(this.id)){window.location=\'http://www.example.com/online/store/TitleDetail?detail&sku=123456789\';}" id="getTitleDetails_123456789">\r\n\t\t\t \tcheck store inventory\r\n\t\t\t </a>', 1)

    This is where I got confused. Any ideas?
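
    One hedged sketch: since the URL always sits between window.location=' and the closing quote, a negated-character-class group is enough to pull it out. The snippet below trims the whitespace runs from the original for readability:

        import re

        snippet = ('<a href="javascript:if(handleDoubleClick(this.id)){window.location='
                   "'http://www.example.com/online/store/TitleDetail?detail&sku=123456789';}\" "
                   'id="getTitleDetails_123456789"> check store inventory </a>', 1)

        match = re.search(r"window\.location='([^']+)'", snippet[0])
        if match:
            print(match.group(1))
            # http://www.example.com/online/store/TitleDetail?detail&sku=123456789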

    Read the article

  • How do you access config outside of a request in CherryPy?

    - by OrganicPanda
    I've got a webapp running on CherryPy that needs to access the CherryPy config files before a user creates a request. The docs say to use:

        host = cherrypy.request.app.config['database']['host']

    But that won't work outside of a user request. You can also use the application object when you start the app, like so:

        ...
        application = cherrypy.tree.mount(root, '/', app_conf)
        host = application.config['database']['host']
        ...

    But I can see no way of accessing 'application' from other classes outside of a user request. I ask because our app looks at several databases and we set them up when the app starts rather than on user request. I have a feeling this would be useful in other places too; so is there any way to store a reference to 'application' somewhere, or access it through the CherryPy API?
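
    Two hedged options, sketched below and untested: mounted applications are kept on the global cherrypy.tree (keyed by script name, with a root mount at '/' stored under ''), or a module-level reference can simply be stashed at startup. The appstate module name is made up for illustration:

        import cherrypy

        # option 1: look the application up on the tree
        app = cherrypy.tree.apps['']
        host = app.config['database']['host']

        # option 2: a hypothetical appstate.py holding the reference from startup
        class AppState(object):
            application = None

        def start(root, app_conf):
            AppState.application = cherrypy.tree.mount(root, '/', app_conf)
            return AppState.application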

    Read the article

  • Change text_factory in Django/sqlite

    - by Krumelur
    I have a Django project that uses a SQLite database that can be written to by an external tool. The text is supposed to be UTF-8, but in some cases there will be errors in the encoding. The text is from an external source, so I cannot control the encoding. Yes, I know that I could write a "wrapping layer" between the external source and the database, but I prefer not having to do this, especially since the database already contains a lot of "bad" data. The solution in sqlite is to change the text_factory to something like:

        lambda x: unicode(x, "utf-8", "ignore")

    However, I don't know how to tell the Django model driver this.
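
    One way in is Django's connection_created signal, which hands you the wrapper around the raw sqlite3 connection; a sketch under that assumption, untested:

        from django.db.backends.signals import connection_created

        def set_text_factory(sender, connection, **kwargs):
            if connection.vendor == 'sqlite':
                # connection.connection is the underlying sqlite3.Connection
                connection.connection.text_factory = lambda b: unicode(b, "utf-8", "ignore")

        connection_created.connect(set_text_factory)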

    Read the article

  • Qt/PyQt dialog with togglable fullscreen mode - problem on Windows

    - by Guard
    I have a dialog created in PyQt. Its purpose and functionality don't matter. The init is:

        class MyDialog(QWidget, ui_module.Ui_Dialog):
            def __init__(self, parent=None):
                super(MyDialog, self).__init__(parent)
                self.setupUi(self)
                self.installEventFilter(self)
                self.setWindowFlags(Qt.Dialog | Qt.WindowTitleHint)
                self.showMaximized()

    Then I have an event filtering method:

        def eventFilter(self, obj, event):
            if event.type() == QEvent.KeyPress:
                key = event.key()
                if key == Qt.Key_F11:
                    if self.isFullScreen():
                        self.setWindowFlags(self._flags)
                        if self._state == 'm':
                            self.showMaximized()
                        else:
                            self.showNormal()
                            self.setGeometry(self._geometry)
                    else:
                        self._state = 'm' if self.isMaximized() else 'n'
                        self._flags = self.windowFlags()
                        self._geometry = self.geometry()
                        self.setWindowFlags(Qt.Tool | Qt.FramelessWindowHint)
                        self.showFullScreen()
                    return True
                elif key == Qt.Key_Escape:
                    self.close()
            return QWidget.eventFilter(self, obj, event)

    As can be seen, Esc is used for hiding the dialog, and F11 is used for toggling full-screen. In addition, if the user changed the dialog mode from the initial maximized to normal and possibly moved the dialog, its state and position are restored after exiting full-screen. Finally, the dialog is created on the MainWindow action triggered:

        d = MyDialog(self)
        d.show()

    It works fine on Linux (Ubuntu Lucid), but quite strangely on Windows 7: if I go to full-screen from the maximized mode, I can't exit full-screen (on F11 the dialog disappears and reappears in full-screen mode). If I change the dialog's mode to Normal (by double-clicking its title), then go to full-screen and then return back, the dialog is shown in normal mode, in the correct position, but without the title line. Most probably the reason for both cases is the same - setWindowFlags doesn't work. But why? Is it also possible that this is a bug in the recent PyQt version? On Ubuntu I have 4.6.x from apt, and on Windows - the latest installer from the riverbank site.
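
    One thing worth noting: setWindowFlags() re-creates the native window and hides the widget, so the usual remedy is to show it again immediately after changing the flags. A sketch of the restore branch with that change (a fragment of the eventFilter above, untested on Windows):

        if self.isFullScreen():
            self.setWindowFlags(self._flags)
            self.show()                      # re-show after the flags change
            if self._state == 'm':
                self.showMaximized()
            else:
                self.showNormal()
                self.setGeometry(self._geometry)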

    Read the article

  • Error while exiting cherrypy server

    - by Vijayendra Bapte
    Guys, I am getting the following error while exiting the CherryPy server. What is this error about?

        2009-11-04 09:32:35,015 WARNING Error in atexit._run_exitfuncs:
        2009-11-04 09:32:35,015 WARNING
        2009-11-04 09:32:35,015 WARNING Traceback (most recent call last):
        2009-11-04 09:32:35,015 WARNING   File "atexit.pyc", line 24, in _run_exitfuncs
        2009-11-04 09:32:35,015 WARNING   File "logging\__init__.pyc", line 1486, in shutdown
        2009-11-04 09:32:35,015 WARNING   File "logging\__init__.pyc", line 746, in flush
        2009-11-04 09:32:35,015 WARNING IOError: [Errno 9] Bad file descriptor
        2009-11-04 09:32:35,015 WARNING Error in sys.exitfunc:
        2009-11-04 09:32:35,015 WARNING Traceback (most recent call last):
        2009-11-04 09:32:35,015 WARNING   File "atexit.pyc", line 24, in _run_exitfuncs
        2009-11-04 09:32:35,015 WARNING   File "logging\__init__.pyc", line 1486, in shutdown
        2009-11-04 09:32:35,015 WARNING   File "logging\__init__.pyc", line 746, in flush
        2009-11-04 09:32:35,015 WARNING IOError
        2009-11-04 09:32:35,015 WARNING :
        2009-11-04 09:32:35,015 WARNING [Errno 9] Bad file descriptor
        2009-11-04 09:32:35,015 WARNING

    Read the article

  • Django finding which field matched in a multiple OR query

    - by Greg Hinch
    I've got a couple of models which are set up something like this:

        class Bar(models.Model):
            baz = models.CharField()

        class Foo(models.Model):
            bar1 = models.ForeignKey(Bar)
            bar2 = models.ForeignKey(Bar)
            bar3 = models.ForeignKey(Bar)

    And elsewhere in the code, I end up with an instance of Bar, and need to find the Foo it is attached to in some capacity. Right now I came up with doing a multiple OR query using Q, something like this:

        foo_inst = Foo.objects.get(Q(bar1=bar_inst) | Q(bar2=bar_inst) | Q(bar3=bar_inst))

    What I need to figure out is which of the 3 cases actually hit, or at least the name of the member (bar1, bar2, or bar3). Is there a good way to do this? Is there a better way to structure the query to glean that information?
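
    Once the Foo has been fetched, the side that matched can be recovered by comparing the three foreign key ids against the Bar instance; a small sketch:

        foo_inst = Foo.objects.get(Q(bar1=bar_inst) | Q(bar2=bar_inst) | Q(bar3=bar_inst))

        matched = [name for name in ('bar1', 'bar2', 'bar3')
                   if getattr(foo_inst, '%s_id' % name) == bar_inst.pk]
        print(matched)   # e.g. ['bar2'], or several names if more than one matched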

    Read the article

  • Jython java call throws exception asking for 2 args when only one arg is coded

    - by clutch
    I have a Java method I want to call within my Jython servlet running on Tomcat 5. It looks like this:

        @SuppressWarnings("unchecked")
        public School loadByName(String name) {
            List<School> school;
            school = getHibernateTemplate().find("from " + getPersistentClass().getName() + " where name = ?", name);
            return uniqueResult(school);
        }

    I call it in Jython using:

        foobar = SchoolDAOHibernate.loadByName('Univeristy')

    It throws an error that says loadByName() expects 2 args; got 1. What other argument could it be looking for?
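
    loadByName() is an instance method, so calling it through the class leaves the implicit instance slot unfilled, which is why Jython reports two expected arguments. A sketch, assuming the DAO can simply be instantiated here (in a Spring-style setup it would normally come from the application context instead):

        dao = SchoolDAOHibernate()
        foobar = dao.loadByName('University')

        # equivalent, and what the class-level call was implicitly asking for:
        foobar = SchoolDAOHibernate.loadByName(dao, 'University')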

    Read the article

  • How can I load a sql "dump" file into sql alchemy

    - by JudoWill
    I have a large SQL dump file ... with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into a SQLAlchemy sqlite database at once? I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains:

        sqlite3.Warning: You can only execute one statement at a time.

    Is there a way to work around this issue? Perhaps splitting the file with a regexp or some kind of parser, but I don't know enough SQL to get all of the cases for the regexp. Any help would be greatly appreciated. Will

    EDIT: Since this seems important ... the dump file was created with a MySQL database and so it has quite a few commands/syntax that sqlite3 does not understand correctly.
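
    One hedged workaround: the sqlite3 DB-API connection underneath SQLAlchemy has executescript(), which runs a whole multi-statement script in one call. The MySQL-specific syntax in the dump still has to be cleaned up separately. A sketch with placeholder file names:

        from sqlalchemy import create_engine

        engine = create_engine('sqlite:///dump.db')

        with open('dump.sql') as f:
            script = f.read()

        raw = engine.raw_connection()
        try:
            raw.connection.executescript(script)   # the underlying sqlite3.Connection
            raw.commit()
        finally:
            raw.close()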

    Read the article

  • Regular Expression Question

    - by zyq524
    I'm trying to use a regular expression to extract the comments in the heading of a file. For example, the source code may look like:

        //This is an example file.
        //Please help me.
        #include "test.h"

        int main() //main function
        {
            ...
        }

    What I want to extract from the code are the first two lines, i.e.

        //This is an example file.
        //Please help me.

    Any idea?
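
    A small sketch: anchoring at the start of the file and matching the run of //-comment lines is enough to pull the heading out:

        import re

        source = ('//This is an example file.\n'
                  '//Please help me.\n'
                  '#include "test.h"\n'
                  '\n'
                  'int main() //main function\n'
                  '{\n'
                  '    ...\n'
                  '}\n')

        # match the run of // comment lines at the very start of the file
        match = re.match(r'(?://[^\n]*\n)+', source)
        if match:
            print(match.group(0))
            # //This is an example file.
            # //Please help me.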

    Read the article

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'O not that again!', but here we are since Google have not yet provided a simpler method. I have been using a queue based solution which worked fine:

        import datetime
        from models import *

        DELETABLE_MODELS = [Alpha, Beta, AlphaBeta]

        def initiate_purge():
            for e in config.DELETABLE_MODELS:
                deferred.defer(delete_entities, e, 'purging', _queue = 'purging')

        class NotEmptyException(Exception):
            pass

        def delete_entities(e, queue):
            try:
                q = e.all(keys_only=True)
                db.delete(q.fetch(200))
                ct = q.count(1)
                if ct > 0:
                    raise NotEmptyException('there are still entities to be deleted')
                else:
                    logging.info('processing %s completed' % queue)
            except Exception, err:
                deferred.defer(delete_entities, e, then, queue, _queue = queue)
                logging.info('processing %s deferred: %s' % (queue, err))

    All this does is queue a request to delete some data (once for each class) and then, if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting the refresh on a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities; there are always a few left at the end. I think it's because it contains Reference Properties:

        class AlphaBeta(db.Model):
            alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas')
            beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas')

    I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated please.

    Read the article

  • Accessing CSR extension stack in M2Crypto

    - by Charles Duffy
    Howdy! I have a certificate signing request with an extension stack added. When building a certificate based on this request, I would like to be able to access that stack to use in creating the final certificate. However, while M2Crypto.X509.X509 has a number of helpers for accessing extensions (get_ext, get_ext_at and the like), M2Crypto.X509.Request appears to provide only a member for adding extensions, but no way to inspect the extensions already associated with a given object. Am I missing something here?

    Read the article

  • Is multi-level polymorphism possible in SQLAlchemy?

    - by Jace
    Is it possible to have multi-level polymorphism in SQLAlchemy? Here's an example:

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            entity_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': entity_type}

        class File(Entity):
            __tablename__ = 'files'
            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            filepath = Column(Unicode(255), nullable=False)
            file_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_identity': u'file', 'polymorphic_on': file_type}

        class Image(File):
            __mapper_args__ = {'polymorphic_identity': u'image'}
            __tablename__ = 'images'
            id = Column(None, ForeignKey('files.id'), primary_key=True)
            width = Column(Integer)
            height = Column(Integer)

    When I call Base.metadata.create_all(), SQLAlchemy raises the following error:

        NotImplementedError: Can't generate DDL for the null type
        IntegrityError: (IntegrityError) entities.entity_type may not be NULL

    This error goes away if I remove the Image model and the polymorphic_on key in File. What gives? (Edited: the exception raised was wrong.)
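
    For what it's worth, one commonly suggested arrangement for multi-level joined-table inheritance is to keep a single discriminator on the root class and give every level only a polymorphic_identity. A sketch reusing the imports and Base from above, untested:

        class Entity(Base):
            __tablename__ = 'entities'
            id = Column(Integer, primary_key=True)
            created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
            entity_type = Column(Unicode(20), nullable=False)
            __mapper_args__ = {'polymorphic_on': entity_type,
                               'polymorphic_identity': u'entity'}

        class File(Entity):
            __tablename__ = 'files'
            id = Column(Integer, ForeignKey('entities.id'), primary_key=True)
            filepath = Column(Unicode(255), nullable=False)
            __mapper_args__ = {'polymorphic_identity': u'file'}

        class Image(File):
            __tablename__ = 'images'
            id = Column(Integer, ForeignKey('files.id'), primary_key=True)
            width = Column(Integer)
            height = Column(Integer)
            __mapper_args__ = {'polymorphic_identity': u'image'}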

    Read the article

  • Sqlalchemy layout with WSGI application

    - by TheDude
    I'm working on writing a small WSGI application using Bottle and SQLAlchemy and am confused about how the "layout" of my application should be in terms of SQLAlchemy. My confusion is with creating engines and sessions. My understanding is that I should only create one engine with the 'create_engine' method. Should I be creating an engine instance in the global namespace in some sort of singleton pattern and creating sessions based off of it? How have you done this in your projects? Any insight would be appreciated. The examples in the documentation don't seem to make this entirely clear (unless I'm missing something obvious). Any thoughts?
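
    A common layout, sketched with a placeholder database URL: one module-level engine per process, a scoped_session factory built from it, and a session obtained (and removed) per request:

        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker, scoped_session

        engine = create_engine('sqlite:///app.db')            # created once, at import time
        Session = scoped_session(sessionmaker(bind=engine))

        def handle_request():
            session = Session()            # thread-local session for this request
            try:
                # ... query / persist objects here ...
                session.commit()
            except Exception:
                session.rollback()
                raise
            finally:
                Session.remove()           # hand the session back at the end of the request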

    Read the article

  • Efficiently generate a 16-character, alphanumeric string

    - by ensnare
    I'm looking for a very quick way to generate an alphanumeric unique id for a primary key in a table. Would something like this work?

        def genKey():
            hash = hashlib.md5(RANDOM_NUMBER).digest().encode("base64")
            alnum_hash = re.sub(r'[^a-zA-Z0-9]', "", hash)
            return alnum_hash[:16]

    What would be a good way to generate random numbers? If I base it on microtime, I have to account for the possibility of several calls of genKey() at the same time from different instances. Or is there a better way to do all this? Thanks.
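
    Two sketches that avoid the microtime-collision concern by drawing from the OS random source instead:

        import os
        import uuid
        import string

        def gen_key_uuid():
            # 16 hex characters from a random UUID
            return uuid.uuid4().hex[:16]

        def gen_key_urandom(length=16):
            # letters + digits from os.urandom (slight modulo bias, which is
            # usually acceptable for non-cryptographic ids)
            alphabet = string.ascii_letters + string.digits
            return ''.join(alphabet[b % len(alphabet)] for b in bytearray(os.urandom(length)))

        print(gen_key_uuid())      # e.g. '3f1c2a9d7b4e8c01'
        print(gen_key_urandom())   # e.g. 'Kq7ZpL2mXv9RtB4s'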

    Read the article

  • Unknown syntax error.

    - by matt1024
    Why do I get a syntax error running this code? If I remove the highlighted section (return cards[i]) I get the error highlighting the function call instead. Please help :)

        def dealcards():
            for i in range(len(cards)):
                cards[i] = ''
                for j in range(8):
                    cards[i] = cards[i].append(random.randint(0,9)
            return cards[i]

        print (dealcards())
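
    The parser trips over the unbalanced parenthesis on the randint(...) call, which is why the error is reported on the following line (or on the print call once that line is removed). The original also assigns the result of append, which is None, and strings have no append method. A corrected sketch, with a hypothetical deck size just for illustration:

        import random

        cards = [None] * 4   # hypothetical number of hands, for illustration only

        def dealcards():
            for i in range(len(cards)):
                cards[i] = []                              # a list, so digits can be appended
                for j in range(8):
                    cards[i].append(random.randint(0, 9))  # note the closing parenthesis
            return cards

        print(dealcards())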

    Read the article

  • How to create a HTML world map with GeoDjango ?

    - by pierre-guillaume-degans
    The GeoDjango tutorial explains how to insert world borders into a spatial database. I would like to create a world map in HTML with these data, with both map and area tags. Something like that. I just don't know how to retrieve the coordinates for each country (required for the area's coords attribute).

        from world.models import WorldBorders

        for country in WorldBorders.objects.all():
            print u'<area shape="poly" title="%s" alt="%s" coords="%s" />' % (v.name, v.name, "???")

    Thanks!
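
    A sketch assuming the tutorial's model, where the geometry column is named mpoly (a MultiPolygonField): each polygon's outer ring exposes its point tuples through .coords, which can be joined into the string the coords attribute expects (the lon/lat values would still need projecting into image pixel space):

        from world.models import WorldBorders

        for country in WorldBorders.objects.all():
            ring = country.mpoly[0].exterior_ring     # outer ring of the first polygon
            coords = ','.join('%f,%f' % (x, y) for x, y in ring.coords)
            print u'<area shape="poly" title="%s" alt="%s" coords="%s" />' % (
                country.name, country.name, coords)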

    Read the article

  • Conditional row coloring in a PocketPyGUI table (PythonCE)

    - by PabloG
    I'm working on a PythonCE application, using the PocketPyGUI toolkit. I'm using the gui.Table control to display a large list of choices (addresses, codes and associated data), and I want to assign a different color to the rows that have been completed. Is there any way to colorize the rows given certain conditions? TIA, Pablo

    Read the article

  • SQLAlchemy: who is in charge of the "session"? ( and how to unit-test with sessions )

    - by Nick Perkins
    I need some guidance on how to use session objects with SQLAlchemy, and how to organize unit tests of my mapped objects. What I would like to be able to do is something like this:

        thing = BigThing()           # mapped object
        child = thing.new_child()    # create and return a related object
        thing.save()                 # will also save the child object

    In order to achieve this, I was thinking of having the BigThing actually add itself (and its children) to the database -- but maybe this is not a good idea? One reason to add objects as soon as possible is automatic id values that are assigned by the database -- the sooner they are available, the fewer problems there are (right?). What is the best way to manage session objects? Who is in charge of the session? Should it be created only when required, or saved for a long time? What about unit tests for my mapped objects? How should the session be handled? Is it ever OK to have mapped objects just automatically add themselves to a database, or is that going to lead to trouble?
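
    A hedged sketch of one common unit-test pattern: bind each test's session to an external transaction and roll it back in tearDown, so nothing the test does is ever committed. BigThing is the mapped class from the question; the rest of the names are placeholders:

        import unittest
        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        engine = create_engine('sqlite:///:memory:')
        Session = sessionmaker()

        class BigThingTest(unittest.TestCase):
            def setUp(self):
                self.connection = engine.connect()
                self.trans = self.connection.begin()
                self.session = Session(bind=self.connection)

            def tearDown(self):
                self.session.close()
                self.trans.rollback()      # undo everything the test did
                self.connection.close()

            def test_child_saved_with_parent(self):
                thing = BigThing()
                thing.new_child()
                self.session.add(thing)    # a cascade can pick up the child too
                self.session.flush()       # ids are assigned here without committing
                self.assertIsNotNone(thing.id)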

    Read the article
