Search Results

Search found 27655 results on 1107 pages for 'visual python'.


  • How can I disable a model field in a Django form

    - by jammon
    I have a model like this:

        class MyModel(models.Model):
            REGULAR = 1
            PREMIUM = 2
            STATUS_CHOICES = ((REGULAR, "regular"), (PREMIUM, "premium"))
            name = models.CharField(max_length=30)
            status = models.IntegerField(choices=STATUS_CHOICES, default=REGULAR)

        class MyForm(forms.ModelForm):
            class Meta:
                model = models.MyModel

    In a view I initialize one field and try to make it non-editable:

        myform = MyForm(initial={'status': requested_status})
        myform.fields['status'].editable = False

    But the user can still change that field. What's the real way to accomplish what I'm after?
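
    One commonly suggested approach (a sketch, not verified against the exact Django version in use): form fields have no "editable" attribute, so disable the widget and force the value back when the form is cleaned. The clean_status override below is illustrative.

        class MyForm(forms.ModelForm):
            class Meta:
                model = MyModel

            def __init__(self, *args, **kwargs):
                super(MyForm, self).__init__(*args, **kwargs)
                # Grey the control out in the browser; on its own this is not
                # enough, since a user can still POST a different value.
                self.fields['status'].widget.attrs['disabled'] = 'disabled'

            def clean_status(self):
                # Ignore whatever came back in the POST and keep the initial value.
                return self.initial.get('status', MyModel.REGULAR)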


  • Filter objects within two seconds of one another using SQLAlchemy

    - by Arrieta
    Hello: I have two tables, each with a 'date' column. One holds (name, date) and the other holds (date, p1, p2). Given a name, I want to use the date in table one to query p1 and p2 from table two; the match should happen if the date in table one is within two seconds of the date in table two. How can you accomplish this with SQLAlchemy? I've tried (unsuccessfully) to use the between operator with a clause like:

        td = datetime.timedelta(seconds=2)
        q = session.query(table1, table2).filter(table1.name == 'my_name').\
            filter(between(table1.date, table2.date - td, table2.date + td))

    Any thoughts?
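
    A minimal sketch of one way to express the two-second window, assuming DateTime columns and a backend that supports datetime/interval arithmetic in SQL (e.g. PostgreSQL); on backends that don't, the column-minus-timedelta expression won't translate:

        from datetime import timedelta

        td = timedelta(seconds=2)
        q = (
            session.query(table1, table2)
            .filter(table1.name == 'my_name')
            .filter(table2.date >= table1.date - td)
            .filter(table2.date <= table1.date + td)
        )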


  • OpenGL embedded in GTK displays colours badly

    - by Sardathrion
    Note that this is a re-write now that I have more clues as to where the problem could be... I am creating a GTK GUI which contains two embedded OpenGL displays. Both use the same shader code (compiled once for each). On my normal hardware, this works fine. On a virtual machine running on the same hardware, I get horrible colours -- see images. I suspect that the shader code is at fault; certainly switching to a simpler shader makes the problem go away. However, I do need both diffuse and spot lights in my shader, which makes it non-trivial. Has anyone seen this before?


  • .NET 4.0 Debugging Behavior

    - by Jason
    We recently migrated to VS 2010 and installed .NET 4.0 on our test machine. When we execute a console application that throws an exception, we no longer see the exception message and stack trace printed to the console. Instead we see the message:

        An unhandled win32 exception occurred in something.exe [PID]. Just-In-Time debugging
        this exception failed with the following error: No installed debugger has Just-In-Time
        debugging enabled. In Visual Studio, Just-In-Time debugging can be enabled from
        Tools/Options/Debugging/Just-In-Time.

    We do have the above setting enabled. What do we need to do to return to the behavior we had previously?


  • How fast are App Engine db.get(keys) and A.all(keys_only=True).filter('b =', b).fetch(1000)?

    - by Liron Shapira
    A db.get() of 50 keys seems to take me 5-6 seconds. Is that normal? What is the time a function of? I also did an A.all(keys_only=True).filter('b =', b).fetch(1000), where A.b is a ReferenceProperty. I did 50 such round trips to the datastore, with different values of b, and the total time was only 3-4 seconds. How is this possible? db.get() is done in parallel, with only one trip to the datastore, and I would think that looking up an entity by key is a faster operation than a fetch.


  • Lucene: Fastest way to return the number of documents containing a phrase?

    - by dont say the kid's name
    Hi Guys, I am trying to use Lucene (actually PyLucene!) to find out how many documents contain my exact phrase. My code currently looks like this... but it runs rather slowly. Does anyone know a faster way to return document counts?

        phraseList = ["some phrase 1", "some phrase 2"]  # etc, a list of phrases...
        countsearcher = IndexSearcher(SimpleFSDirectory(File(STORE_DIR)), True)
        analyzer = StandardAnalyzer(Version.LUCENE_CURRENT)

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")
            scoreDocs = countsearcher.search(query, 200).scoreDocs
            print "count is: " + str(len(scoreDocs))
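
    A small sketch of one possible shortcut (assuming the Lucene version behind this PyLucene build returns TopDocs with a totalHits field, as Lucene 2.9/3.x does): the total hit count is reported even when only one document is actually collected, so there is no need to pull back 200 score docs per phrase, and the count is not capped at 200.

        for phrase in phraseList:
            query = QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse("\"" + phrase + "\"")
            topDocs = countsearcher.search(query, 1)   # collect one doc, count them all
            print "count is: " + str(topDocs.totalHits)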


  • Best practice: How to team-split a Django project while still allowing code reuse

    - by Infinity
    I know this sounds kind of vague, but please let me explain. I'm starting work on a brand new project with two main components: "ACME PRODUCT" (think Gmail, Meebo, etc.) and "THE SITE" (help, information, marketing stuff, promotional landing pages, and lots of other marketing-induced cruft). So basically the URL /acme/* will load stuff in the uber-cool ajaxy application, and every other URI will load stuff in the other site.

    Problem: the "THE SITE" component is out of my hands and will be handled by a team of consultants working closely with marketing, while my team and I work solely on the ACME PRODUCT.

    Question: how do I set up the Django project so that we can have:

    - Separate releases (they can push new marketing pages and functionality without having to worry about the state of our code; maybe even separate Subversion "projects").
    - Minimal impact (on our product) of whatever flying-unicorns-hocus-pocus the other team codes into the site.
    - Some code reuse still possible.

    My main concern is that the ACME product needs to be rock solid, and therefore needs to be somewhat isolated from whatever mistakes/code bloopers the consultants make on their marketing side of the site. How have you handled this? Any ideas? Thanks!
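
    A minimal sketch of the routing side of such a split, assuming two separate Django apps (the app names here are illustrative, not from the question) and the Django 1.x-era URLconf API:

        # urls.py for the umbrella project
        from django.conf.urls.defaults import patterns, include

        urlpatterns = patterns('',
            (r'^acme/', include('acme_product.urls')),    # your team's app
            (r'^', include('marketing_site.urls')),       # consultants' app
        )

    Keeping each app in its own repository and pulling them together only at the project level (INSTALLED_APPS in settings.py plus this URLconf) is one way to get separate releases while still sharing a common utilities app between the two teams.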


  • How to manage feeds with subclassed object in Django 1.2?

    - by Matteo
    Hi, I'm trying to generate an RSS feed from a model like this one, selecting all the Entry objects:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.contrib.auth.models import User
        from imagekit.models import ImageModel
        import datetime

        class Entry(ImageModel):
            date_pub = models.DateTimeField(default=datetime.datetime.now)
            author = models.ForeignKey(User)
            via = models.URLField(blank=True)
            comments_allowed = models.BooleanField(default=True)
            icon = models.ImageField(upload_to='icon/', blank=True)

            class IKOptions:
                spec_module = 'journal.icon_specs'
                cache_dir = 'icon/resized'
                image_field = 'icon'

        class Post(Entry):
            title = models.CharField(max_length=200)
            description = models.TextField()
            slug = models.SlugField(unique=True)

            def __unicode__(self):
                return self.title

        class Photo(Entry):
            alt = models.CharField(max_length=200)
            description = models.TextField(blank=True)
            original = models.ImageField(upload_to='photo/')

            class IKOptions:
                spec_module = 'journal.photo_specs'
                cache_dir = 'photo/resized'
                image_field = 'original'

            def __unicode__(self):
                return self.alt

        class Quote(Entry):
            blockquote = models.TextField()
            cite = models.TextField(blank=True)

            def __unicode__(self):
                return self.blockquote

    When I use render_to_response in my views I simply call:

        def get_journal_entries(request):
            entries = Entry.objects.all().order_by('-date_pub')
            return render_to_response('journal/entries.html', {'entries': entries})

    And then I use a conditional template to render the right snippet of HTML:

        {% extends "base.html" %}
        {% block main %}
        <hr>
        {% for entry in entries %}
        {% if entry.post %}[...]{% endif %}[...]

    But I cannot do the same with the feed framework in Django 1.2... Any suggestion, please?
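
    A rough sketch of how the Django 1.2 class-based Feed could use the same "check the child link" trick as the template (assuming multi-table inheritance gives the usual lowercase accessors post/photo/quote; the feed metadata strings and the URL in item_link are placeholders):

        from django.contrib.syndication.views import Feed

        class JournalFeed(Feed):
            title = "Journal"                  # placeholder
            link = "/journal/"                 # placeholder
            description = "Latest entries"     # placeholder

            def items(self):
                return Entry.objects.order_by('-date_pub')[:20]

            def item_title(self, item):
                # Descend to the concrete subclass, mirroring {% if entry.post %}.
                if hasattr(item, 'post'):
                    return item.post.title
                if hasattr(item, 'photo'):
                    return item.photo.alt
                return item.quote.blockquote

            def item_description(self, item):
                if hasattr(item, 'post'):
                    return item.post.description
                if hasattr(item, 'photo'):
                    return item.photo.description
                return item.quote.cite

            def item_link(self, item):
                return "/journal/%d/" % item.pk    # placeholder URL scheme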


  • How to handle images folder with many images

    - by Billy
    I'm developing a new ASP.NET website with 200k images in an /Images/ folder. Many operations in Visual Studio are slow because they access that folder; adding a web service takes 10 minutes. The images are not checked into source control (svn). How should I structure the code tree to improve performance in VS? It would also be neat if not every developer needed to copy 200k images to their local disk to be able to work on the site. Images as DB blobs are not an option.


  • arbitrary vire connection / search and replace

    - by fatai
    Input:

        ["vire_connection", [1, 2, [3, [4, "connect"]]], ["connect", [3, 5]]]

    Output:

        ["vire_connection", [1, 2, [3, [4, [3, 5]]]], [[3, 5]]]

    After the connection (simply copying [3, 5] to the other wanted position), remove the word "connect".

    Input:

        ["vire_connection", [[[["connect", [3, 4]]]]], [2, "connect"]]

    Output:

        ["vire_connection", [[[[[3, 4]]]]], [2, [3, 4]]]

    After the connection (simply copying [3, 4] to the other wanted position), remove the word "connect". How can I do this?
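
    A speculative sketch of one reading of the transformation (names and structure are my own interpretation, not from the question): find the sublist whose first element is the string "connect", remember its payload, drop the marker there, and splice the payload in wherever the bare string "connect" appears elsewhere.

        def wire(tree):
            payload = find_payload(tree)
            return substitute(tree, payload)

        def find_payload(node):
            # Locate ["connect", [...]] anywhere in the nested lists.
            if isinstance(node, list):
                if len(node) == 2 and node[0] == "connect" and isinstance(node[1], list):
                    return node[1]
                for child in node:
                    found = find_payload(child)
                    if found is not None:
                        return found
            return None

        def substitute(node, payload):
            if isinstance(node, list):
                if len(node) == 2 and node[0] == "connect" and isinstance(node[1], list):
                    return [substitute(node[1], payload)]   # drop the marker, keep the payload
                return [substitute(child, payload) for child in node]
            if node == "connect":
                return payload                              # splice the payload in
            return node

    On the two examples above this reproduces the stated outputs.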


  • How do you calculate expanding mean on time series using pandas?

    - by mlo
    How would you create a column (or columns) in the pandas DataFrame below where the new columns are the expanding mean/median of 'val' for each 'Mod_ID_x'? Imagine this as if it were time series data, with 'ID' 1-2 on Day 1 and 'ID' 3-4 on Day 2. I have tried every way I could think of but just can't seem to get it right.

        left4 = pd.DataFrame({'ID': [1, 2, 3, 4],
                              'val': [10000, 25000, 20000, 40000],
                              'Mod_ID': [15, 35, 15, 42],
                              'car': ['ford', 'honda', 'ford', 'lexus']})
        right4 = pd.DataFrame({'ID': [3, 1, 2, 4],
                               'color': ['red', 'green', 'blue', 'grey'],
                               'wheel': ['4wheel', '4wheel', '2wheel', '2wheel'],
                               'Mod_ID': [15, 15, 35, 42]})
        df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)
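
    A sketch of one possible answer, assuming a pandas version with the groupby().expanding() accessor and that rows are already in time order by 'ID' (the new column names are illustrative):

        df1 = df1.sort_values('ID')
        df1['val_expanding_mean'] = (
            df1.groupby('Mod_ID_x')['val']
               .expanding().mean()
               .reset_index(level=0, drop=True)   # drop the group level to align on the original index
        )
        df1['val_expanding_median'] = (
            df1.groupby('Mod_ID_x')['val']
               .expanding().median()
               .reset_index(level=0, drop=True)
        )

    Older pandas releases exposed the same idea as pd.expanding_mean instead of the .expanding() accessor.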


  • Transaction within transaction

    - by user281521
    Hello, I want to know whether opening a transaction inside another one is safe and encouraged. I have a method:

        def foo():
            session.begin()
            try:
                stuffs
            except Exception, e:
                session.rollback()
                raise e
            session.commit()

    and a method that calls the first one, inside a transaction:

        def bar():
            stuffs
            try:
                foo()  # <<<< there it is :)
                stuffs
            except Exception, e:
                session.rollback()
                raise e
            session.commit()

    If I get an exception in the foo method, will all the operations be rolled back? And will everything else work just fine? Thanks!!
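
    If this is a SQLAlchemy session, one commonly cited option for the inner unit of work is a savepoint via begin_nested(); a rough sketch, assuming the database backend supports SAVEPOINT:

        def foo():
            session.begin_nested()       # SAVEPOINT inside the outer transaction
            try:
                stuffs
            except Exception, e:
                session.rollback()       # rolls back only to the savepoint
                raise
            session.commit()             # releases the savepoint

    The outer bar() transaction then still decides whether the whole unit of work is committed or rolled back.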


  • Which style of return is "better" for a method that might return None?

    - by Daenyth
    I have a method that will either return an object or None if the lookup fails. Which of the following styles is better?

        def get_foo(needle):
            haystack = object_dict()
            if needle not in haystack:
                return None
            return haystack[needle]

    or,

        def get_foo(needle):
            haystack = object_dict()
            try:
                return haystack[needle]
            except KeyError:
                # Needle not found
                return None

    I'm undecided as to which is more desirable myself. Another choice would be return haystack[needle] if needle in haystack else None, but I'm not sure that's any better.
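
    A third option, assuming object_dict() returns a plain dict (or anything with a dict-style get()): let get() supply the None default.

        def get_foo(needle):
            # dict.get returns None when the key is missing
            return object_dict().get(needle)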


  • Django: automatically import MEDIA_URL in context

    - by pistacchio
    Hi, as described here, one can set a MEDIA_URL in settings.py (for example, I'm pointing to Amazon S3) and reference the files in templates via {{ MEDIA_URL }}. Since MEDIA_URL is not automatically in the context, one has to add it to the context manually, so, for example, the following works:

        # views.py
        from django.shortcuts import render_to_response
        from django.template import RequestContext

        def test(request):
            return render_to_response('test.html', {},
                                      context_instance=RequestContext(request))

    This means that in each views.py file I have to add from django.template import RequestContext, and in each response I have to explicitly specify context_instance=RequestContext(request). Is there a way to automatically (DRY) add MEDIA_URL to the default context? Thanks in advance.
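
    For context: MEDIA_URL only shows up via RequestContext because of the media context processor. A sketch of the relevant Django 1.x setting (the processor path is real for that era; the rest of the default list varies by version):

        # settings.py
        TEMPLATE_CONTEXT_PROCESSORS = (
            # ... the default processors for your Django version ...
            'django.core.context_processors.media',   # injects MEDIA_URL into RequestContext
        )

    Views still need to render with a RequestContext for the processors to run, so this removes the need to pass MEDIA_URL explicitly, not the need for context_instance.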


  • 50 million+ Rows of Data - CSV or MySQL

    - by eWizardII
    Hello, I have a CSV file which is about 1 GB and contains about 50 million rows of data, and I am wondering whether it is better to keep it as a CSV file or store it in some form of database. I don't know enough about MySQL to argue why I should use it, or another database system, over just keeping the file as CSV. I am basically doing a breadth-first search with this dataset, so once I get the initial "seed" set of 50 million rows, I use those as the first values in my queue. Thanks,


  • Qt: How to autoexpand parents of a new QTreeView item when using a QSortFilterProxyModel

    - by taynaron
    I'm making an app wherein the user can add new data to a QTreeModel at any time. The parent under which it gets placed is automatically expanded to show the new item:

        self.tree = DiceModel(headers)
        self.treeView.setModel(self.tree)
        # addRoll makes a node, adds it, and returns the (parent) node to be expanded
        expand_node = self.tree.addRoll()
        self.treeView.expand(expand_node)

    This works as desired. If I add a QSortFilterProxyModel to the mix:

        self.tree = DiceModel(headers)
        self.sort = DiceSort(self.tree)
        self.treeView.setModel(self.sort)
        # addRoll makes a node, adds it, and returns the (parent) node to be expanded
        expand_node = self.tree.addRoll()
        self.treeView.expand(expand_node)

    the parent no longer expands. Any idea what I am doing wrong?
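
    A sketch of the usual suspect (assuming DiceSort is a QSortFilterProxyModel and addRoll returns a QModelIndex from the source model): the view only understands indexes belonging to the model it was given, i.e. the proxy, so source indexes have to be translated with mapFromSource before calling expand.

        expand_node = self.tree.addRoll()                    # index in the source model
        proxy_index = self.sort.mapFromSource(expand_node)   # translate to the proxy's index space
        self.treeView.expand(proxy_index)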


  • wxpython - Running threads sequentially without blocking GUI

    - by ryantmer
    I've got a GUI script with all my wxPython code in it, and a separate testSequences module that has a bunch of tasks that I run based on input from the GUI. The tasks take a long time to complete (from 20 seconds to 3 minutes), so I want to thread them, otherwise the GUI locks up while they're running. I also need them to run one after another, since they all use the same hardware. (My rationale behind threading is simply to prevent the GUI from locking up.) I'd like to have a "Running" message (with a varying number of periods after it, i.e. "Running", "Running.", "Running..", etc.) so the user knows that progress is occurring, even though it isn't visible. I'd like this script to run the test sequences in separate threads, but sequentially, so that the second thread won't be created and run until the first is complete. Since this is kind of the opposite of the purpose of threads, I can't really find any information on how to do this... Any help would be greatly appreciated. Thanks in advance!

    gui.py:

        import testSequences
        from threading import Thread

        # wxPython code for setting everything up here...

        for j in range(5):
            testThread = Thread(target=testSequences.test1)
            testThread.start()
            while testThread.isAlive():
                # wait until the previous thread is complete
                time.sleep(0.5)
                i = (i+1) % 4
                self.status.SetStatusText("Running" + '.'*i)

    testSequences.py:

        import time

        def test1():
            for i in range(10):
                print i
                time.sleep(1)

    (Obviously this isn't the actual test code, but the idea is the same.)
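
    A sketch of one alternative arrangement (untested; self.status and testSequences.test1 are taken from the question, the function name run_all_sequences is illustrative): run the whole sequence in a single worker thread so the tasks execute one after another, and push status text back to the GUI thread with wx.CallAfter instead of polling from the main thread.

        import wx
        from threading import Thread
        import testSequences

        def run_all_sequences(status_bar):
            for j in range(5):
                wx.CallAfter(status_bar.SetStatusText, "Running" + '.' * (j % 4))
                testSequences.test1()          # blocks the worker thread, not the GUI
            wx.CallAfter(status_bar.SetStatusText, "Done")

        # inside the frame, e.g. in a button handler:
        worker = Thread(target=run_all_sequences, args=(self.status,))
        worker.start()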


  • Return an object after parsing xml with SAX

    - by sentimental_turtle
    I have some large XML files to parse and have created an object class to contain my relevant data. Unfortunately, I am unsure how to return the object for later processing. Right now I pickle my data and moments later unpickle the object to access it. This seems wasteful, and there surely must be a way of grabbing my data without hitting the disk.

        def endElement(self, name):
            if name == "info":
                # done collecting this iteration
                self.data.setX(self.x)
                self.data.setY(self.y)
            elif name == "lastTagOfInterest":
                # done with file
                # want to return my object from here
                filehandler = open(self.outputname + ".pi", "w")
                pickle.dump(self.data, filehandler)
                filehandler.close()

    I have tried putting a return statement in my endElement tag, but that does not seem to get passed up the chain to where I call the SAX parser. Thanks for any tips.
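
    A minimal sketch of the usual pattern (assuming xml.sax and a ContentHandler subclass like the one above, here called MyHandler for illustration): leave the finished object on the handler instance and read it back after parse() returns, rather than writing it to disk.

        import xml.sax

        handler = MyHandler()                 # the handler that builds self.data
        xml.sax.parse("input.xml", handler)   # endElement() runs during this call
        result = handler.data                 # the populated object, no pickling needed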

