Search Results

Search found 27655 results on 1107 pages for 'visual python'.


  • Group Chat XMPP with Google App Engine

    - by David Shellabarger
    Google App Engine has a great XMPP service built in. One of the few limitations it has is that it doesn't support receiving messages from a group chat. That's the one thing I want to do with it. :( Can I run a 3rd party XMPP/Jabber server on App Engine that supports group chat? If so, which one?

    Read the article

  • How to call Twitter's Streaming/Filter Feed with urllib2/httplib?

    - by Simon
    Update: I switched this back from answered, as I tried the solution posed in Nick's cogent answer and switched to Google's urlfetch:

        logging.debug("starting urlfetch for http://%s%s" % (self.host, self.url))
        result = urlfetch.fetch("http://%s%s" % (self.host, self.url),
                                payload=self.body, method="POST",
                                headers=self.headers, allow_truncated=True,
                                deadline=5)
        logging.debug("finished urlfetch")

    but unfortunately "finished urlfetch" is never printed - I see the timeout happen in the logs (it returns 200 after 5 seconds), but execution doesn't seem to return. Hi all - I'm attempting to play around with Twitter's Streaming (aka firehose) API from Google App Engine (I'm aware this probably isn't a great long-term play, as you can't keep the connection perpetually open with GAE), but so far I haven't had any luck getting my program to actually parse the results returned by Twitter. Some code:

        logging.debug("firing up urllib2")
        req = urllib2.Request(url="http://%s%s" % (self.host, self.url), data=self.body, headers=self.headers)
        logging.debug("called urlopen for %s %s, about to call urlopen" % (self.host, self.url))
        fobj = urllib2.urlopen(req)
        logging.debug("called urlopen")

    When this executes, unfortunately, my debug output never shows the "called urlopen" line printed. I suspect what's happening is that Twitter keeps the connection open and urllib2 doesn't return because the server doesn't terminate the connection. Wireshark shows the request being sent properly and a response returned with results. I tried adding Connection: close to my request header, but that didn't yield a successful result. Any ideas on how to get this to work? Thanks - Simon

    Read the article

  • JSON serialization of Google App Engine models

    - by user111677
    I've been searching for quite a while with no success. My project isn't using Django; is there a simple way to serialize App Engine models (google.appengine.ext.db.Model) into JSON, or do I need to write my own serializer? My model class is fairly simple. For instance:

        class Photo(db.Model):
            filename = db.StringProperty()
            title = db.StringProperty()
            description = db.StringProperty(multiline=True)
            date_taken = db.DateTimeProperty()
            date_uploaded = db.DateTimeProperty(auto_now_add=True)
            album = db.ReferenceProperty(Album, collection_name='photo')

    Thanks in advance.
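
    One way to avoid a full serializer is a small helper that walks the model's properties() and converts the values json cannot handle on its own. This is only a rough sketch, not a library API: the helper name and the datetime/reference handling below are my own assumptions.

        import datetime
        try:
            import json                                    # Python 2.6+
        except ImportError:
            from django.utils import simplejson as json    # older runtimes
        from google.appengine.ext import db

        def model_to_dict(instance):
            # properties() lists the declared db.Property objects by name
            result = {}
            for name in instance.properties():
                value = getattr(instance, name)
                if isinstance(value, datetime.datetime):
                    value = value.isoformat()              # JSON has no datetime type
                elif isinstance(value, db.Model):
                    value = str(value.key())               # ReferenceProperty -> key string
                result[name] = value
            return result

        photo = Photo.all().get()                          # any existing Photo entity
        print json.dumps(model_to_dict(photo))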

    Read the article

  • socket.shutdown vs socket.close

    - by Jason Baker
    I recently saw a bit of code that looked like this (with sock being a socket object, of course):

        sock.shutdown(socket.SHUT_RDWR)
        sock.close()

    What exactly is the purpose of calling shutdown on the socket and then closing it? If it makes a difference, this socket is being used for non-blocking IO.
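
    For context, a small sketch of the usual teardown order (the host and request below are only placeholders): shutdown() actively ends the conversation and notifies the peer, while close() merely releases this process's reference to the descriptor.

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect(("example.com", 80))
        try:
            sock.sendall("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            reply = sock.recv(4096)
        finally:
            sock.shutdown(socket.SHUT_RDWR)   # stop both send and receive halves; peer sees EOF
            sock.close()                      # free the file descriptor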

    Read the article

  • Getting "Object is read only" error when setting ClientCredentials in WCF

    - by Paul Mrozowski
    I have a proxy object generated by Visual Studio (client side) named ServerClient. I am attempting to set ClientCredentials.UserName.UserName/Password before opening up a new connection using this code:

        InstanceContext context = new InstanceContext(this);
        m_client = new ServerClient(context);
        m_client.ClientCredentials.UserName.UserName = "Sample";

    As soon as the code hits the UserName line it fails with an "Object is read-only" error. I know this can happen if the connection is already open or faulted, but at this point I haven't called context.Open() yet. I have configured the bindings (which use netTcpBinding) to use Message as its security mode, and MessageClientCredentialType is set to UserName. Any ideas?

    Read the article

  • Reliable and fast way to convert a zillion ODT files in PDF?

    - by Marco Mariani
    I need to pre-produce a million or two PDF files from a simple template (a few pages and tables) with embedded fonts. Usually I would stay low level in a case like this and compose everything with a library like ReportLab, but I joined the project late. Currently I have a template.odt and use markers in the content.xml files to fill in data from a DB. I can smoothly create the ODT files, and they always look right. For the ODT to PDF conversion I'm using OpenOffice in server mode (and PyODConverter with a named pipe), but it's not very reliable: in a batch of documents there is eventually a point after which all the processed files are converted into garbage (wrong fonts and letters sprawled all over the page). The problem is not predictably reproducible (it does not depend on the data), and happens with OOo 2.3 and 3.2, on Ubuntu, XP, Server 2003 and Windows 7. My Heisenbug detector is ticking. I tried reducing the size of the batches and restarting OOo after each one; still, a small percentage of the documents are messed up. Of course I'll write about this on the OOo mailing lists, but in the meanwhile I have a delivery and have lost too much time already. Where do I go?

    - Completely avoid the ODT format and go for another template system. Suggestions? Anything that takes a few seconds to run is way too slow: OOo takes around a second, and that already sums to 15 days of processing time; I had to write a program for clustering the jobs over several clients.
    - Keep the format but go for another tool/program for the conversion. Which one? There are many apps in the shareware or commercial repositories for Windows, but trying each one is a daunting task. Some are too slow, some cannot be run in batch without buying them first, some cannot work from the command line, etc. Open source tools tend not to reinvent the wheel and often depend on OpenOffice.
    - Convert to an intermediate .DOC format; it could help to avoid the OOo bug, but it would double the processing time and complicate a task that is already too hairy.
    - Produce the PDFs twice and compare them, discarding the whole batch if there's something wrong. Although the documents look equal, I know of no way to compare the binary content.
    - Restart OOo after processing each document: it would take a lot more time to produce them, it would lower the percentage of the wrong files, and it would make it very hard to identify them.
    - Go for ReportLab and recreate the pages programmatically. This is the approach I'm going to try in a few minutes (see the sketch below).
    - Learn to properly format bulleted lists.

    Thanks a lot.
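
    Since the ReportLab option is the one about to be tried, here is a minimal sketch of that route (file name, text and positions are only placeholders): each document becomes a short, self-contained drawing script instead of an ODT conversion.

        from reportlab.lib.pagesizes import A4
        from reportlab.pdfgen import canvas

        def make_pdf(filename, rows):
            c = canvas.Canvas(filename, pagesize=A4)
            c.setFont("Helvetica", 10)
            y = 800
            for label, value in rows:                  # one line per table row
                c.drawString(72, y, "%s: %s" % (label, value))
                y -= 14
            c.showPage()                               # finish the page
            c.save()                                   # write the PDF to disk

        make_pdf("sample.pdf", [("Name", "Mario Rossi"), ("Total", "123.45")])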

    Read the article

  • Convert list to sequence of variables

    - by wtzolt
    I was wondering if this was possible... I have a sequence of variables that have to be assigned to do.something(a, b)'s a and b variables accordingly. Something like this:

        # Have a list of sequenced variables.
        list = 2:90, 1:140, 3:-40, 4:60

        # "Template" on where to assign the variables from the list.
        do.something(a, b)

        # Assign the variables from the list in a sequence, with the possibility of
        # "in between" functions like print and time.sleep() added.
        do.something(2, 90)
        time.sleep(1)
        print "Did something (%d,%d)" % (# vars from list?)
        do.something(1, 140)
        time.sleep(1)
        print "Did something (%d,%d)" % (# vars from list?)
        do.something(3, -40)
        time.sleep(1)
        print "Did something (%d,%d)" % (# vars from list?)
        do.something(4, 60)
        time.sleep(1)
        print "Did something (%d,%d)" % (# vars from list?)

    Any ideas?
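
    A minimal sketch of one way to do this (do_something below is only a stand-in for the real function): keep the pairs as tuples and unpack them in a loop, so the same values are available for the call, the print and anything else in between.

        import time

        def do_something(a, b):            # placeholder for the real do.something
            pass

        pairs = [(2, 90), (1, 140), (3, -40), (4, 60)]
        for a, b in pairs:
            do_something(a, b)
            time.sleep(1)
            print "Did something (%d,%d)" % (a, b)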

    Read the article

  • NumPy: how to quickly normalize many vectors?

    - by EOL
    How can a list of vectors be elegantly normalized in NumPy? Here is an example that does not work:

        from numpy import *

        vectors = array([arange(10), arange(10)])            # All x's, then all y's
        norms = apply_along_axis(linalg.norm, 0, vectors)

        # Now, what I was expecting would work:
        print vectors.T / norms   # vectors.T has 10 elements, as does norms, but this does not work

    The last operation yields "shape mismatch: objects cannot be broadcast to a single shape". How can the normalization of the 2D vectors in vectors be elegantly done with NumPy? Edit: Why does the above not work while adding a dimension to norms does work (as per my answer below)?
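
    For what it's worth, a small sketch consistent with the edit at the end of the question: dividing the (2, 10) array by the (10,) norms directly works because the trailing axes match, and so does giving norms an explicit trailing axis before dividing the transpose.

        from numpy import array, arange, newaxis, linalg, apply_along_axis

        vectors = array([arange(10), arange(10)], dtype=float)   # all x's, then all y's
        norms = apply_along_axis(linalg.norm, 0, vectors)        # shape (10,)

        normalized = vectors / norms                   # (2, 10) / (10,): broadcasts over columns
        normalized_t = vectors.T / norms[:, newaxis]   # (10, 2) / (10, 1): explicit extra axis

        # note: the first column here is the zero vector, so its entries come out as nan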

    Read the article

  • Displaying Google Calendar event data on FullCalendar

    - by aurealus
    I am using Google Calendar as a storage engine for a calendar system I am building; however, I am using a single Google user account with multiple calendars, i.e. each user on my system has their own calendar within the one user account. I'm able to create a calendar per user just fine, but I would like to have FullCalendar retrieve the events for display purposes without manually getting the magic cookie URL from the Google Calendar settings. I would like to be able to retrieve it programmatically, or 'proxy' the feed via an authenticated call to get event data, which I'm doing in Django.

        $('#calendar').fullCalendar({
            events: $.fullCalendar.gcalFeed(
                "http://www.google.com/calendar_url/"   // <-- or /my/event/feed/url
            )
        });
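
    One possible shape for the 'proxy' route, as a sketch only (the view name and feed constant are hypothetical): a Django view fetches the private feed server-side and relays it, so the magic-cookie URL never reaches the browser, and FullCalendar can be pointed at /my/event/feed/url instead.

        import urllib2
        from django.http import HttpResponse

        PRIVATE_FEED_URL = "http://www.google.com/calendar_url/"   # the per-user magic-cookie feed

        def event_feed(request):
            # a real version would also forward request.GET (start/end parameters, etc.)
            data = urllib2.urlopen(PRIVATE_FEED_URL).read()
            return HttpResponse(data, content_type="application/json")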

    Read the article

  • Display additional data while iterating over a Django formset

    - by Jannis
    Hi, I have a list of soccer matches for which I'd like to display forms. The list comes from a remote source.

        matches = ["A vs. B", "C vs. D", "E vs. F"]
        MatchFormSet = formset_factory(MatchForm, extra=len(matches))
        formset = MatchFormSet()

    On the template side, I would like to display the formset with the corresponding title (i.e. "A vs. B"):

        {% for form in formset.forms %}
        <fieldset>
            <legend>{{ TITLE }}</legend>
            {{ form.team1 }} : {{ form.team2 }}
        </fieldset>
        {% endfor %}

    Now how do I get TITLE to contain the right title for the current form? Or, asked in a different way: how do I iterate over matches with the same index as the iteration over formset.forms? Thanks for your input!
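
    One straightforward sketch (the variable names are mine): zip the titles with the forms in the view and hand the pairs to the template, which can unpack them in its for tag.

        from django.forms.formsets import formset_factory

        matches = ["A vs. B", "C vs. D", "E vs. F"]
        MatchFormSet = formset_factory(MatchForm, extra=len(matches))
        formset = MatchFormSet()

        # pass this to the template context alongside formset
        match_forms = zip(matches, formset.forms)

        # in the template:
        #   {% for title, form in match_forms %}
        #     <legend>{{ title }}</legend> {{ form.team1 }} : {{ form.team2 }}
        #   {% endfor %}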

    Read the article

  • How to limit choice field options based on another choice field in django admin

    - by umnik700
    I have the following models:

        class Category(models.Model):
            name = models.CharField(max_length=40)

        class Item(models.Model):
            name = models.CharField(max_length=40)
            category = models.ForeignKey(Category)

        class Demo(models.Model):
            name = models.CharField(max_length=40)
            category = models.ForeignKey(Category)
            item = models.ForeignKey(Item)

    In the admin interface, when creating a new Demo, after the user picks a category from the dropdown I would like to limit the number of choices in the "items" drop-down. If the user selects another category then the item choices should update accordingly. I would like to limit item choices right on the client, before it even hits the form validation on the server. This is for usability: the list of items could be 1000+, so being able to narrow it down by category would help make it more manageable. Is there a "django-way" of doing it or is custom JavaScript the only option here?
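
    A sketch of the server half of the custom-JavaScript route (the URL wiring and view name are hypothetical): a tiny view returns the items for a chosen category as JSON, and a few lines of admin JavaScript can then rebuild the "item" select whenever the category changes.

        from django.http import HttpResponse
        from django.utils import simplejson

        def items_for_category(request, category_id):
            # only the items belonging to the selected category
            items = Item.objects.filter(category__pk=category_id).values('id', 'name')
            return HttpResponse(simplejson.dumps(list(items)),
                                mimetype='application/json')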

    Read the article

  • Refining data stored in SQLite - how to join several contacts?

    - by Krab
    Problem background. Imagine this problem: you have a water molecule which is in contact with other molecules (if the contact is a hydrogen bond, there can be 4 other molecules around my water), like in the following picture (A, B, C, D are some other atoms and the dots mean the contact):

        A     B
          .  .
            O
           / \
          H   H
         .     .
        C       D

    I have the information about all the dots and I need to eliminate the water in the center and create records describing the contacts A-C, A-D, A-B, B-C, B-D, and C-D.

    Database structure. Currently, I have the following structure in the database.

    Table atoms:

        "id" integer PRIMARY KEY,
        "amino" char(3) NOT NULL,   -- HOH for water, or another value
        -- other columns identifying the atom

    Table contacts:

        "acceptor_id" integer NOT NULL,   -- the atom near to my hydrogen, here C or D
        "donor_id" integer NOT NULL,      -- here A or B
        "directness" char(1) NOT NULL,    -- D for direct and W for water-mediated
        -- other columns about the contact, such as the distance

    Current solution (insufficient). Now I'm going through all the contacts which have donor.amino = "HOH". In this sample case, this would select contacts from C and D. For each of these selected contacts, I look up contacts having the same acceptor_id as the donor_id in the currently selected contact. From this information, I create the new contact. At the end, I delete all contacts to or from HOH. This way I am obviously unable to create the C-D and A-B contacts (the other 4 are OK). If I try a similar approach - trying to find two contacts having the same donor_id - I end up with duplicate contacts (C-D and D-C). Is there a simple way to retrieve all six contacts without duplicates? I'm dreaming about some one-page-long SQL query which retrieves just these six wanted rows. :-) It is preferable to conserve information about who is the donor where possible, but not strictly necessary. Big thanks to all of you who read this question to this point.
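
    A sketch of one possible query (the table and column names come from the question, the rest is an assumption, and it does not preserve the donor/acceptor role): collect every non-water atom touching a given water, from either direction, then self-join that set and keep only pairs with a smaller id on the left, so each of the six contacts appears exactly once.

        import sqlite3

        PAIRS_AROUND_WATER = """
        SELECT n1.atom_id, n2.atom_id, n1.water_id
        FROM (SELECT c.donor_id    AS atom_id, c.acceptor_id AS water_id
                FROM contacts c JOIN atoms w ON w.id = c.acceptor_id AND w.amino = 'HOH'
              UNION
              SELECT c.acceptor_id AS atom_id, c.donor_id    AS water_id
                FROM contacts c JOIN atoms w ON w.id = c.donor_id AND w.amino = 'HOH') n1
        JOIN (SELECT c.donor_id    AS atom_id, c.acceptor_id AS water_id
                FROM contacts c JOIN atoms w ON w.id = c.acceptor_id AND w.amino = 'HOH'
              UNION
              SELECT c.acceptor_id AS atom_id, c.donor_id    AS water_id
                FROM contacts c JOIN atoms w ON w.id = c.donor_id AND w.amino = 'HOH') n2
          ON n1.water_id = n2.water_id AND n1.atom_id < n2.atom_id
        """

        conn = sqlite3.connect('contacts.db')    # hypothetical database file
        for a, b, water in conn.execute(PAIRS_AROUND_WATER):
            print "water-mediated contact %s-%s (via %s)" % (a, b, water)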

    Read the article

  • SQL Alchemy MVC and cross controller joins

    - by Khorkrak
    When using SQL Alchemy for abstracting your data access layer and using controllers as the way to access objects from that abstraction layer, how should joins be handled? So for example, say you have an Orders controller class that manages Order objects such that it provides getOrder, saveOrder, etc methods and likewise a similar controller for User objects. First of all do you even need these controllers? Should you instead just treat SQL Alchemy as "the" thing for handling data access. Why bother with object oriented controller stuff there when you instead have a clean declarative way to obtain and persist objects without having to write SQL directly either. Well one reason could be that perhaps you may want to replace SQL Alchemy with direct SQL or Storm or whatever else. So having controller classes there to act as an intermediate layer helps limit what would need to change then. Anyway - back to the main question - so assuming you have these two controllers, now lets say you want the list of orders for a certain set of users meeting some criteria. How do you go about doing this? Generally you don't want the controllers crossing domains - the Orders controllers knows only about Orders and the User controller just about Users - they don't mess with each other. You also don't want to go fetch all the Users that match and then feed a big list of user ids to the Orders controller to go find the matching Orders. What's needed is a join. Here's where I'm stuck - that seems to mean either the controllers must cross domains or perhaps they should be done away with altogether and you simply do the join via SQL Alchemy directly and get the resulting User and / or Order objects as needed. Thoughts?
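
    For the join itself, a bare SQLAlchemy sketch (the User/Order mapping and the is_active filter are assumptions for illustration): the ORM expresses the cross-domain query in one place, without fetching user ids first and feeding them to another controller.

        from sqlalchemy import Column, Integer, Boolean, ForeignKey, create_engine
        from sqlalchemy.orm import relationship, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = 'users'
            id = Column(Integer, primary_key=True)
            is_active = Column(Boolean, default=True)

        class Order(Base):
            __tablename__ = 'orders'
            id = Column(Integer, primary_key=True)
            user_id = Column(Integer, ForeignKey('users.id'))
            user = relationship(User)

        engine = create_engine('sqlite://')
        Base.metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        # the cross-domain query: orders of matching users, joined in one statement
        orders_of_active_users = (session.query(Order)
                                         .join(Order.user)
                                         .filter(User.is_active == True)
                                         .all())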

    Read the article

  • Django: How to create a model dynamically just for testing

    - by muhuk
    I have a Django app that requires a settings attribute in the form of:

        RELATED_MODELS = ('appname1.modelname1.attribute1',
                          'appname1.modelname2.attribute2',
                          'appname2.modelname3.attribute3', ...)

    It then hooks their post_save signals to update some other fixed model depending on the attributeN defined. I would like to test this behaviour, and the tests should work even if this app is the only one in the project (except for its own dependencies; no other wrapper app needs to be installed). How can I create and attach/register/activate mock models just for the test database? (Or is it possible at all?) Solutions that allow me to use test fixtures would be great.
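
    One trick I have seen used, sketched here with made-up names and somewhat version-dependent (it relies on the test runner importing the app's tests module before it creates the test database): declare the mock model with type() at import time in tests.py, so syncdb builds its table in the test database only.

        from django.db import models

        # defined at module level in the app's tests.py
        MockRelated = type('MockRelated', (models.Model,), {
            '__module__': __name__,                     # registers it under this app
            'attribute1': models.CharField(max_length=32),
        })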

    Read the article

  • Configuration files for C in linux

    - by James
    Hi, I have an executable that at run time should take configuration parameters from a script file. This way I don't need to re-compile the code for every configuration change. Right now I have all the configuration values in a .h file, and every time I change it I need to re-compile. The platform is C with gcc under Linux. What is the best solution for this problem? I looked it up on Google and saw XML, Python and Lua bindings for C. Is using a separate scripting language the best approach? If so, which one would you recommend for my needs? Thanks

    Read the article

  • FxCop giving a warning on private constructor CA1823 and CA1053

    - by Luis Sánchez
    I have a class that looks like the following:

        Public Class Utilities
            Public Shared Function blah(userCode As String) As String
                'doing some stuff
            End Function
        End Class

    I'm running FxCop 10 on it and it says: "Because type 'Utilities' contains only 'static' ('Shared' in Visual Basic) members, add a default private constructor to prevent the compiler from adding a default public constructor." Ok, you're right Mr. FxCop, I'll add a private constructor:

        Private Utilities()

    Now I'm getting: "It appears that field 'Utilities.Utilities' is never used or is only ever assigned to. Use this field or remove it." Any ideas of what I should do to get rid of both warnings?

    Read the article

  • VS2010 patch: why does it take so much time to install?

    - by Mendy
    Visual Studio 2010 RC has a few patch releases; for more information about them take a look here. What I expect from a patch program is to replace a few DLLs of the program with new, fixed versions of them. But when I run each of these 3 patches, they take a lot of time (5 minutes each), and you think the program has frozen because the progress bar stays at the beginning. This question may not be so important, but it really interests me to know why this happens. It's really confusing to see each VS2010 (or Microsoft in general) patch appear frozen for 4-5 minutes.

    Read the article

  • Django model field value preprocessing before returning

    - by Satoru.Logic
    Hi, all. I have a Note model class like this:

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = NoteContentField(max_length=256)

    NoteContentField is a custom sub-class of CharField that overrides the to_python method for the purpose of doing some twitter-text-conversion processing:

        class NoteContentField(models.CharField):
            __metaclass__ = models.SubfieldBase

            def to_python(self, value):
                value = super(NoteContentField, self).to_python(value)
                from ..utils import linkify
                return mark_safe(linkify(value))

    However, this doesn't work. When I save a Note object like this:

        note = Note(author=request.user, content=form.cleaned_data['content'])

    the converted value is saved into the database, which is not what I want to see. Would you please tell me what's wrong with this? Thanks in advance.
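
    One workaround sketch (not the project's code; content_html is a made-up name): store the raw text in a plain CharField and do the twitter-text conversion only on the way out, via a property, so what is saved is exactly what the user typed.

        from django.db import models
        from django.contrib.auth.models import User
        from django.utils.safestring import mark_safe
        from ..utils import linkify     # the same helper the custom field used

        class Note(models.Model):
            author = models.ForeignKey(User, related_name='notes')
            content = models.CharField(max_length=256)   # raw text goes to the database

            @property
            def content_html(self):
                # conversion happens only when the note is displayed
                return mark_safe(linkify(self.content))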

    Read the article

  • Error when running MSpec - how do I troubleshoot?

    - by Tomas Lycken
    I am following this guide to installing and using MSpec, but at the step where he runs MSpec for the first time, I get the following error: "Could not load file or assembly 'file:///[...]\Nehemiah\Nehemiah.Specs\bin\Debug\Nehemiah.Specs.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded." I have - to my knowledge - done everything more or less exactly like he did up to this step, except where differences arise because he's using VS2008 and I'm using VS2010, and everything has worked so far. The project Nehemiah.Specs (and the entire solution) builds without problems, both in Visual Studio and on my build server, and I can't find anything useful in Event Viewer (although I might not be looking in the right place here...). What to do?

    Read the article

  • How to obtain the root of a tree without parsing the entire file?

    - by Matt.
    I'm making an XML parser to parse XML reports from different tools, and each tool generates different reports with different tags. For example: Arachni generates an XML report with <arachni_report></arachni_report> as the tree root tag; nmap generates an XML report with <nmaprun></nmaprun> as the tree root tag. I'm trying not to parse the entire file unless it's a valid report from one of the tools I want. The first thing I thought to use was ElementTree: parse the entire XML file (supposing it contains valid XML), and then check, based on the tree root, whether the report belongs to Arachni or nmap. I'm currently using cElementTree, and as far as I know getroot() is not an option here, but my goal is to make this parser operate on recognized files only, without parsing unnecessary files. By the way, I'm still learning about XML parsing; thanks in advance.
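
    A small sketch of one way to peek at the root without reading the whole document (the file name is a placeholder): iterparse() yields the root element on its first "start" event, so the function can return, or reject the file, after only a few bytes have been read.

        import xml.etree.cElementTree as ET

        def root_tag(path):
            for event, elem in ET.iterparse(path, events=("start",)):
                return elem.tag          # the first start event is the document root
            return None

        tag = root_tag("report.xml")     # placeholder file name
        if tag == "arachni_report":
            pass    # hand the file to the Arachni-specific parser
        elif tag == "nmaprun":
            pass    # hand the file to the nmap-specific parser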

    Read the article

  • load a pickle file from a zipfile

    - by eric.frederich
    For some reason I cannot get cPickle.load to work on the file-type object returned by ZipFile.open(). If I call read() on the file-type object returned by ZipFile.open(), I can use cPickle.loads, though. Example:

        import zipfile
        import cPickle

        # the data we want to store
        some_data = {1: 'one', 2: 'two', 3: 'three'}

        # create a zipped pickle file
        zf = zipfile.ZipFile('zipped_pickle.zip', 'w', zipfile.ZIP_DEFLATED)
        zf.writestr('data.pkl', cPickle.dumps(some_data))
        zf.close()

        # cPickle.loads works
        zf = zipfile.ZipFile('zipped_pickle.zip', 'r')
        sd1 = cPickle.loads(zf.open('data.pkl').read())
        zf.close()

        # cPickle.load doesn't work
        zf = zipfile.ZipFile('zipped_pickle.zip', 'r')
        sd2 = cPickle.load(zf.open('data.pkl'))
        zf.close()

    Note: I don't want to zip just the pickle file but many files of other types. This is just an example.

    Read the article

  • Force VSProps settings to override project settings

    - by Steve
    I have a vsprops file that defines the optimizations all of our projects should be built with for Visual Studio 2008. If I set the properties for the project to "inherit from parent of project defaults" it works, and fills them in the vcproj file. However, this doesn't protect me from a developer checking in a project file that changes the optimizations. In this case, the project settings are used over the vsprops settings. I need to make it so that vsprops always takes precedence over what is in the vcproj file. Is this possible? Other workarounds are also welcome.

    Read the article

  • DragDrop registration did not succeed in Setup Project

    - by rodnower
    Hello, we have an installation project in a Visual Studio solution (Other project types - Setup and deployment - Setup project). This project has another library-type project with an installation class, named InstallationCore, as project output. In a custom user action I call the Install and Uninstall functions of the installer of InstallationCore. InstallationCore has Windows Forms for interaction with the user. There, in the forms, I use drag-and-drop functionality to drag text from a TreeView to a TextBox. But on the line:

        txbUserName.AllowDrop = true;

    I get an error from the JIT debugger:

        Unhandled exception has occurred
        DragDrop registration did not succeed
        System.InvalidOperationException: DragDrop registration did not succeed

    and a long stack trace after that. It is important to say that when I run the Installer function from a test project the error does not occur and everything works fine. The error occurs only when I run the .msi package. Any suggestions? Thank you in advance.

    Read the article
