Search Results

Search found 14657 results on 587 pages for 'portable python'.


  • Speeding up templates in GAE-Py by aggregating RPC calls

    - by Sudhir Jonathan
    Here's my problem: class City(Model): name = StringProperty() class Author(Model): name = StringProperty() city = ReferenceProperty(City) class Post(Model): author = ReferenceProperty(Author) content = StringProperty() The code isn't important... it's this Django template: {% for post in posts %} <div>{{post.content}}</div> <div>by {{post.author.name}} from {{post.author.city.name}}</div> {% endfor %} Now let's say I get the first 100 posts using Post.all().fetch(limit=100), and pass this list to the template - what happens? It makes 200 more datastore gets - 100 to get each author, 100 to get each author's city. This is perfectly understandable, actually, since the post only has a reference to the author, and the author only has a reference to the city. The __get__ accessors on the post.author and author.city properties transparently do a get and pull the data back (see this question). Some ways around this are: Use Post.author.get_value_for_datastore(post) to collect the author keys (see the link above), and then do a batch get to get them all - the trouble here is that we need to re-construct a template data object... something which needs extra code and maintenance for each model and handler. Write an accessor, say cached_author, that checks memcache for the author first and returns that - the problem here is that post.cached_author is going to be called 100 times, which could probably mean 100 memcache calls. Hold a static key-to-object map (and refresh it maybe once in five minutes) if the data doesn't have to be very up to date; the cached_author accessor can then just refer to this map. All these ideas need extra code and maintenance, and they're not very transparent. What if we could do @prefetch def render_template(path, data): template.render(path, data) Turns out we can... hooks and Guido's instrumentation module both prove it. If the @prefetch method wraps a template render and captures which keys are requested, we can (at least to one level of depth) return mock objects and do a batch get on them. This could be repeated for all depth levels, until no new keys are being requested. The final render could intercept the gets and return the objects from a map. This would turn a total of 200 gets into 3, transparently and without any extra code. Not to mention it would greatly cut down the need for memcache and help in situations where memcache can't be used. The trouble is I don't know how to do it (yet). Before I start trying, has anyone else done this? Or does anyone want to help? Or do you see a massive flaw in the plan?
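
    For reference, here is a minimal sketch (untested, and assuming the City/Author/Post models above) of the first workaround mentioned: collect the keys with get_value_for_datastore() and batch-get them, so the 200 lazy gets collapse into two batch calls.

        from google.appengine.ext import db

        posts = Post.all().fetch(limit=100)

        # Grab the raw author keys without triggering the lazy dereference.
        author_keys = [Post.author.get_value_for_datastore(p) for p in posts]
        authors = db.get(author_keys)                       # one batch get

        # One level deeper: the cities behind those authors.
        city_keys = [Author.city.get_value_for_datastore(a) for a in authors]
        cities = db.get(city_keys)                          # one batch get

        city_by_key = dict(zip(city_keys, cities))

        # Build plain template data so the template never dereferences anything.
        data = [{'content': p.content,
                 'author_name': a.name,
                 'city_name': city_by_key[Author.city.get_value_for_datastore(a)].name}
                for p, a in zip(posts, authors)]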

    Read the article

  • How do I find the "concrete class" of a Django model base class?

    - by Mr Shark
    I'm trying to find the actual class of a Django model object when using model inheritance. Some code to describe the problem: class Base(models.Model): def basemethod(self): ... class Child_1(Base): pass class Child_2(Base): pass If I create various objects of the two Child classes and then create a queryset containing them all: Child_1().save() Child_2().save() (o1, o2) = Base.objects.all() I want to determine in basemethod whether the object is of type Child_1 or Child_2. I can get to the child object via o1.child_1 and o2.child_2, but that requires the base class to know about its child classes. I have come up with the following code: def concrete_instance(self): instance = None for subclass in self._meta.get_all_related_objects(): acc_name = subclass.get_accessor_name() try: instance = self.__getattribute__(acc_name) return instance except Exception, e: pass But it feels brittle and I'm not sure what happens if I inherit over more levels.
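
    One hedged alternative (a sketch, not tested against this exact setup) is to record the concrete class on the base model with django.contrib.contenttypes, so basemethod() can recover it without probing every child accessor:

        from django.contrib.contenttypes.models import ContentType
        from django.db import models

        class Base(models.Model):
            # Filled in automatically on first save; editable=False hides it in the admin.
            content_type = models.ForeignKey(ContentType, editable=False, null=True)

            def save(self, *args, **kwargs):
                if self.content_type_id is None:
                    self.content_type = ContentType.objects.get_for_model(self.__class__)
                super(Base, self).save(*args, **kwargs)

            def concrete_instance(self):
                # Re-fetches the row as its leaf class (Child_1 or Child_2).
                return self.content_type.model_class().objects.get(pk=self.pk)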

    Read the article

  • Euclidean Distances between points

    - by R S
    I have an array of points in numpy: points = rand(dim, n_points) And I want to: calculate all the L2 norms (Euclidean distances) between a certain point and all other points; calculate all pairwise distances. Preferably all in numpy and with no for loops.
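
    A sketch of a pure-numpy approach (assuming points has shape (dim, n_points) as above; scipy.spatial.distance.cdist/pdist would also cover the second part):

        import numpy as np

        points = np.random.rand(3, 100)               # (dim, n_points)
        p = points[:, 0]                              # the "certain point"

        # distances from p to every point: broadcast over the point axis
        dists_to_p = np.sqrt(((points - p[:, np.newaxis]) ** 2).sum(axis=0))

        # full (n_points, n_points) matrix of pairwise distances
        diff = points[:, :, np.newaxis] - points[:, np.newaxis, :]
        pairwise = np.sqrt((diff ** 2).sum(axis=0))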

    Read the article

  • BeautifulSoup, but for CSS?

    - by MTsoul
    BeautifulSoup parses HTML and offers various ways to manipulate and search within HTML. Is there something similar for CSS? Specifically, I'd like to know whether a given piece of HTML text is rendered as bold: either it has an ancestor that is the <strong> or the <bold> tag (which can be done with BeautifulSoup), or it has an ancestor (or is itself an element) with a font-weight: bold CSS declaration. Is this possible without resorting to writing my own library?
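
    One possible direction, sketched below with the third-party cssutils package (this only inspects declarations; matching the selectors against the element's ancestors, and the full CSS cascade, is still up to you):

        import cssutils

        css = """
        .headline { font-weight: bold; }
        p { font-weight: normal; }
        """

        bold_selectors = []
        for rule in cssutils.parseString(css):
            if rule.type == rule.STYLE_RULE and \
               rule.style.getPropertyValue('font-weight') in ('bold', 'bolder', '700'):
                bold_selectors.append(rule.selectorText)

        print bold_selectors    # ['.headline']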

    Read the article

  • Can SQLAlchemy DateTime Objects Only Be Naive?

    - by Sean M
    I am working with SQLAlchemy, and I'm not yet sure which database I'll use under it, so I want to remain as DB-agnostic as possible. How can I store a timezone-aware datetime object in the DB without tying myself to a specific database? Right now, I'm making sure that times are UTC before I store them in the DB and converting to local time at display time, but that feels inelegant and brittle. Is there a DB-agnostic way to get a timezone-aware datetime out of SQLAlchemy instead of getting naive datetime objects out of the DB?
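
    One DB-agnostic pattern is a TypeDecorator that stores naive UTC and re-attaches tzinfo on the way out; a sketch (assuming pytz, or any fixed UTC tzinfo, is available):

        from sqlalchemy import types
        import pytz

        class UTCDateTime(types.TypeDecorator):
            impl = types.DateTime

            def process_bind_param(self, value, dialect):
                # Normalize to naive UTC before it reaches the DB driver.
                if value is not None and value.tzinfo is not None:
                    value = value.astimezone(pytz.utc).replace(tzinfo=None)
                return value

            def process_result_value(self, value, dialect):
                # Everything coming back is known to be UTC; make it aware again.
                if value is not None:
                    value = value.replace(tzinfo=pytz.utc)
                return value

        # Usage: Column(UTCDateTime()) instead of Column(DateTime()).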

    Read the article

  • wxPython: Change a button's text in a wx.FileDialog

    - by Sascha
    Hello. I have a wx.FileDialog (with the wx.FD_OPEN flag) and I would like to know whether (and how) I can change the button in the bottom right of the FileDialog from "Open" to "Create" or "Delete", etc. What I am doing is this: I have a button with the text "Delete Portfolio"; when pressed, it opens a FileDialog and allows the user to select a portfolio file (.db) to delete. So instead of the FileDialog's bottom-right confirm button displaying "Open", I would like to be able to change the text to "Confirm" or "Delete" or whatever. Is this possible? It's a rather superficial thing to do, but if the button says "Open" when the user wants to select a file to delete, it can be a little confusing even if the title of the dialog says "please select a file to delete".

    Read the article

  • How do I add a trailing slash for Django MPTT-based categorization app?

    - by Patrick Beeson
    I'm using Django-MPTT to develop a categorization app for my Django project. But I can't seem to get the regex pattern for adding a trailing slash that doesn't also break on child categories. Here's an example URL: http://mydjangoapp.com/categories/parentcat/childcat/ I'd like to be able to use http://mydjangoapp.com/categories/parentcat and have it redirect to the trailing slash version. The same should apply to http://mydjangoapp.com/categories/parentcat/childcat (it should redirect to http://mydjangoapp.com/categories/parentcat/childcat/). Here's my urls.py: from django.conf.urls.defaults import patterns, include, url from django.views.decorators.cache import cache_page from storefront.categories.models import Category from storefront.categories.views import SimpleCategoryView urlpatterns = patterns('', url(r'^(?P<full_slug>[-\w/]+)', cache_page(SimpleCategoryView.as_view(), 60 * 15), name='category_view'), ) And here is my view: from django.core.exceptions import ImproperlyConfigured from django.core.urlresolvers import reverse from django.views.generic import TemplateView, DetailView from django.views.generic.detail import SingleObjectTemplateResponseMixin, SingleObjectMixin from django.utils.translation import ugettext as _ from django.contrib.syndication.views import Feed from storefront.categories.models import Category class SimpleCategoryView(TemplateView): def get_category(self): return Category.objects.get(full_slug=self.kwargs['full_slug']) def get_context_data(self, **kwargs): context = super(SimpleCategoryView, self).get_context_data(**kwargs) context["category"] = self.get_category() return context def get_template_names(self): if self.get_category().template_name: return [self.get_category().template_name] else: return ['categories/category_detail.html'] And finally, my model: from django.db import models from mptt.models import MPTTModel from mptt.fields import TreeForeignKey class CategoryManager(models.Manager): def get(self, **kwargs): defaults = {} defaults.update(kwargs) if 'full_slug' in defaults: if defaults['full_slug'] and defaults['full_slug'][-1] != "/": defaults['full_slug'] += "/" return super(CategoryManager, self).get(**defaults) class Category(MPTTModel): title = models.CharField(max_length=255) description = models.TextField(blank=True, help_text='Please use <a href="http://daringfireball.net/projects/markdown/syntax">Markdown syntax</a> for all text-formatting and links. No HTML is allowed.') slug = models.SlugField(help_text='Prepopulates from title field.') full_slug = models.CharField(max_length=255, blank=True) template_name = models.CharField(max_length=70, blank=True, help_text="Example: 'categories/category_parent.html'. If this isn't provided, the system will use 'categories/category_detail.html'. 
Use 'categories/category_parent.html' for all parent categories and 'categories/category_child.html' for all child categories.") parent = TreeForeignKey('self', null=True, blank=True, related_name='children') objects = CategoryManager() class Meta: verbose_name = 'category' verbose_name_plural = 'categories' def save(self, *args, **kwargs): orig_full_slug = self.full_slug if self.parent: self.full_slug = "%s%s/" % (self.parent.full_slug, self.slug) else: self.full_slug = "%s/" % self.slug obj = super(Category, self).save(*args, **kwargs) if orig_full_slug != self.full_slug: for child in self.get_children(): child.save() return obj def available_product_set(self): """ Returns available, prioritized products for a category """ from storefront.apparel.models import Product return self.product_set.filter(is_available=True).order_by('-priority') def __unicode__(self): return "%s (%s)" % (self.title, self.full_slug) def get_absolute_url(self): return '/categories/%s' % (self.full_slug)
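
    One likely fix, sketched below: the pattern above has no end anchor, so it also matches the slash-less URL and CommonMiddleware's APPEND_SLASH never gets a chance to redirect. Anchoring the pattern on a trailing slash leaves the redirect to the middleware:

        urlpatterns = patterns('',
            # matches 'parentcat/' and 'parentcat/childcat/', but not the slash-less
            # forms, so APPEND_SLASH (on by default) issues the redirect for those
            url(r'^(?P<full_slug>[-\w]+(?:/[-\w]+)*/)$',
                cache_page(SimpleCategoryView.as_view(), 60 * 15),
                name='category_view'),
        )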

    Read the article

  • Error -3 while decompressing data: incorrect header check

    - by Rahul99
    I have a .zip file which contains CSV data. I am reading the .zip file using <input type = "file" name = "select_file"/> and I want to decompress that .zip file and read the CSV data. file_data = self.request.get('select_file') file_str = zlib.decompress(file_data) #file_data_list = file_str.split('\n') #file_Reader = csv.reader(file_data_list,quoting=csv.QUOTE_NONE ) I am expecting CSV data in file_str, but instead I get this error: Error -3 while decompressing data: incorrect header check What do I have to use?
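
    The header check fails because zlib.decompress() expects a raw zlib/deflate stream, while a .zip upload is an archive with its own container format. A sketch using the zipfile module instead (assuming the archive holds a single CSV member):

        import csv
        import zipfile
        from StringIO import StringIO

        file_data = self.request.get('select_file')      # the uploaded bytes
        archive = zipfile.ZipFile(StringIO(file_data))
        csv_name = archive.namelist()[0]                 # first (only) member
        file_reader = csv.reader(StringIO(archive.read(csv_name)),
                                 quoting=csv.QUOTE_NONE)
        for row in file_reader:
            print row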

    Read the article

  • How do I get javascript results using selenium?

    - by Seth
    I have the following code: from selenium import selenium selenium = selenium("localhost", 4444, "*chrome", "http://some_site.com/") selenium.start() sel = selenium sel.open("/") sel.type("ctl00_ContentPlaceHolder1_SuburbTownTextBox", "Adelaide,SA,5000") sel.click("ctl00_ContentPlaceHolder1_SearchImageButton") #text = sel.get_body_text() text = sel.get_html_source() print(text) The click executes JavaScript which then produces results on the same page. Obviously print(text) will only print the original HTML source. How do I get the results of the JavaScript?
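
    A hedged sketch of the usual fix with Selenium RC: wait for the page (or the result element) to finish before reading the source, since get_html_source() otherwise runs before the postback has completed.

        sel.click("ctl00_ContentPlaceHolder1_SearchImageButton")
        # ASP.NET ImageButtons normally trigger a full postback:
        sel.wait_for_page_to_load("30000")
        print(sel.get_html_source())
        # If the results arrive via AJAX instead, poll for them before reading the
        # source; the element id below is hypothetical:
        # sel.wait_for_condition(
        #     "selenium.isElementPresent('ctl00_ContentPlaceHolder1_ResultsPanel')",
        #     "30000")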

    Read the article

  • Django database caching

    - by hekevintran
    I have a Django form that uses an integer field to lookup a model object by its primary key. The form has a save() method that uses the model object referred to by the integer field. The model's manager's get() method is called twice, once in the clean method and once in the save() method: class MyForm(forms.Form): id_a = fields.IntegerField() def clean_id_a(user_id): id_a = self.cleaned_data['id_a'] try: # here is the first call to get MyModel.objects.get(id=id_a) except User.DoesNotExist: raise ValidationError('Object does not exist') def save(self): id_a = self.cleaned_data['id_a'] # here is the second call to get my_model_object = MyModel.objects.get(id=id_a) # do other stuff I wasn't sure whether this hits the database two times or one time so I returned the object itself in the clean method so that I could avoid a second get() call. Does calling get() hit the database two times? Or is the object cached in the thread? class MyForm(forms.Form): id_a = fields.IntegerField() def clean_id_a(user_id): id_a = self.cleaned_data['id_a'] try: # here is my workaround return MyModel.objects.get(id=id_a) except User.DoesNotExist: raise ValidationError('Object does not exist') def save(self): # looking up the cleaned value returns the model object my_model_object = self.cleaned_data['id_a'] # do other stuff

    Read the article

  • Combinatorial optimisation of a distance metric

    - by Jose
    I have a set of trajectories, made up of points along the trajectory, and with the coordinates associated with each point. I store these in a 3d array ( trajectory, point, param). I want to find the set of r trajectories that have the maximum accumulated distance between the possible pairwise combinations of these trajectories. My first attempt, which I think is working looks like this: max_dist = 0 for h in itertools.combinations ( xrange(num_traj), r): for (m,l) in itertools.combinations (h, 2): accum = 0. for ( i, j ) in itertools.izip ( range(k), range(k) ): A = [ (my_mat[m, i, z] - my_mat[l, j, z])**2 \ for z in xrange(k) ] A = numpy.array( numpy.sqrt (A) ).sum() accum += A if max_dist < accum: selected_trajectories = h This takes forever, as num_traj can be around 500-1000, and r can be around 5-20. k is arbitrary, but can typically be up to 50. Trying to be super-clever, I have put everything into two nested list comprehensions, making heavy use of itertools: chunk = [[ numpy.sqrt((my_mat[m, i, :] - my_mat[l, j, :])**2).sum() \ for ((m,l),i,j) in \ itertools.product ( itertools.combinations(h,2), range(k), range(k)) ]\ for h in itertools.combinations(range(num_traj), r) ] Apart from being quite unreadable (!!!), it is also taking a long time. Can anyone suggest any ways to improve on this?
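
    One hedged speed-up, sketched below: the accumulated distance between two trajectories never depends on the combination h, so it can be precomputed once as a num_traj x num_traj matrix and merely summed inside the combinations loop (the loop over combinations itself is, of course, still exponential).

        import itertools
        import numpy

        # pair_dist[m, l] = distance accumulated over points between trajectories m and l
        # (Euclidean per point, summed over points; adapt if your metric differs)
        pair_dist = numpy.zeros((num_traj, num_traj))
        for m in xrange(num_traj):
            d = numpy.sqrt(((my_mat[m] - my_mat[m:]) ** 2).sum(axis=-1)).sum(axis=-1)
            pair_dist[m, m:] = d
            pair_dist[m:, m] = d

        best, selected_trajectories = 0.0, None
        for h in itertools.combinations(xrange(num_traj), r):
            accum = sum(pair_dist[m, l] for m, l in itertools.combinations(h, 2))
            if accum > best:
                best, selected_trajectories = accum, h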

    Read the article

  • Django Grouping Query

    - by Matt
    I have the following (simplified) models: class Donation(models.Model): entry_date = models.DateTimeField() class Category(models.Model): name = models.CharField() class Item(models.Model): donation = models.ForeignKey(Donation) category = models.ForeignKey(Category) I'm trying to display the total number of items, per category, grouped by the donation year. I've tried this: Donation.objects.extra(select={'year': "django_date_trunc('year', %s.entry_date)" % Donation._meta.db_table}).values('year', 'item__category__name').annotate(items=Sum('item__quantity')) But I get a Field Error on item__category__name. I've also tried: Item.objects.extra(select={"year": "django_date_trunc('year', entry_date)"}, tables=["donations_donation"]).values("year", "category__name").annotate(items=Sum("quantity")).order_by() Which generally gets me what I want, but the item quantity count is multiplied by the number of donation records. Any ideas? Basically I want to display this: 2010 - Category 1: 10 items - Category 2: 17 items 2009 - Category 1: 5 items - Category 3: 8 items
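
    If the ORM's year truncation keeps getting in the way, a hedged fallback is to group in Python, which is fine for report-sized data (quantity is the field used in the question's annotate call):

        from collections import defaultdict

        totals = defaultdict(lambda: defaultdict(int))   # year -> category name -> count
        for item in Item.objects.select_related('donation', 'category'):
            year = item.donation.entry_date.year
            totals[year][item.category.name] += item.quantity

        for year in sorted(totals, reverse=True):
            print year
            for name, count in totals[year].items():
                print " - %s: %d items" % (name, count)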

    Read the article

  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using Scrapy to scrape items from a site, but I'm not able to implement this scraping pattern. The site I'm trying to scrape is a forum, and I scrape the site once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more and more posts are posted to the site, the older posts move further down the pages due to pagination. This is a very simple scenario and we will assume that the order of the posts never changes. I would like to scrape this site and collect all the "new" records until the last scraped post from yesterday is encountered. I have configured my spider to paginate endlessly; when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
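
    A rough sketch of one way to do it (extract_posts, next_page_url and the post id below are placeholders for your own parsing logic): remember yesterday's newest post id and stop issuing page requests once it reappears.

        from scrapy.spider import BaseSpider
        from scrapy.http import Request

        class ForumSpider(BaseSpider):
            name = 'forum'
            start_urls = ['http://example.com/forum?page=1']
            last_seen_id = '12345'          # in practice, loaded from your database

            def parse(self, response):
                hit_old_post = False
                for post in self.extract_posts(response):   # your own item extraction
                    if post['id'] == self.last_seen_id:
                        hit_old_post = True
                        break
                    yield post
                if not hit_old_post:
                    next_url = self.next_page_url(response)  # your own pagination link
                    if next_url:
                        yield Request(next_url, callback=self.parse)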

    Read the article

  • Loading url with cyrillic symbols

    - by Ockonal
    Hi guys, I have to load a URL containing Cyrillic symbols. My script should work with this: http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/ If I use this in a browser it is displayed with the normal symbols, but my urllib code fails with a 404 error. How do I encode this URL correctly? When I use the URL directly in the code, like address = 'that address', it works perfectly. But I obtained this URL by parsing a page; I have a list of URLs which contain Cyrillic. Maybe they have incorrect encoding? Here is more code: requestData = urllib2.Request( %SOME_ADDRESS%, None, {"User-Agent": user_agent}) requestHandler = pageHandler.open(requestData) pageData = requestHandler.read().decode('utf-8') soupHandler = BeautifulSoup(pageData) topicLinks = [] for postBlock in soupHandler.findAll('a', href=re.compile('%SOME_REGEXP%')): topicLinks.append(postBlock['href']) postAddress = choice(topicLinks) postRequestData = urllib2.Request(postAddress, None, {"User-Agent": user_agent}) postHandler = pageHandler.open(postRequestData) postData = postHandler.read() File "/usr/lib/python2.6/urllib2.py", line 518, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 404: Not Found
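
    A sketch of the usual fix: hrefs pulled out of a decoded page come back as unicode, so encode the path to UTF-8 and percent-quote it (leaving '%' in the safe set so already-escaped URLs are not double-encoded) before handing it to urllib2. The helper name is just illustrative.

        import urllib
        import urllib2
        import urlparse

        def quote_url(url):
            parts = urlparse.urlsplit(url)
            path = urllib.quote(parts.path.encode('utf-8'), safe="/%")
            query = urllib.quote(parts.query.encode('utf-8'), safe="=&%")
            return urlparse.urlunsplit((parts.scheme, parts.netloc, path,
                                        query, parts.fragment))

        postRequestData = urllib2.Request(quote_url(postAddress), None,
                                          {"User-Agent": user_agent})
        postData = pageHandler.open(postRequestData).read()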

    Read the article

  • Why is the destructor called when the CPython garbage collector is disabled?

    - by Frederik
    I'm trying to understand the internals of the CPython garbage collector, specifically when the destructor is called. So far, the behavior is intuitive, but the following case trips me up: Disable the GC. Create an object, then remove a reference to it. The object is destroyed and the __del__ method is called. I thought this would only happen if the garbage collector was enabled. Can someone explain why this happens? Is there a way to defer calling the destructor? import gc import unittest _destroyed = False class MyClass(object): def __del__(self): global _destroyed _destroyed = True class GarbageCollectionTest(unittest.TestCase): def testExplicitGarbageCollection(self): gc.disable() ref = MyClass() ref = None # The next test fails. # The object is automatically destroyed even with the collector turned off. self.assertFalse(_destroyed) gc.collect() self.assertTrue(_destroyed) if __name__=='__main__': unittest.main() Disclaimer: this code is not meant for production -- I've already noted that this is very implementation-specific and does not work on Jython.
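
    The short explanation, with a tiny sketch: CPython reclaims objects by reference counting, and gc.disable() only turns off the cyclic collector. __del__ therefore still fires the moment the refcount hits zero; only objects caught in a reference cycle have their destruction deferred.

        import gc

        class Noisy(object):
            def __del__(self):
                print 'destroyed'

        gc.disable()

        a = Noisy()
        a = None        # refcount hits zero -> 'destroyed' is printed immediately

        b = Noisy()
        b.cycle = b     # a self-reference keeps the refcount above zero
        b = None        # nothing printed; only the cycle collector could reclaim it
        # (in CPython 2.x, cycles whose objects define __del__ are parked in
        # gc.garbage rather than collected, even by gc.collect())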

    Read the article

  • How can I fix the scroll bug when using Windows rich edit controls in wxPython?

    - by ChrisD
    When using wx.TextCtrl with the wx.TE_RICH2 option in Windows, I get this strange bug with the auto-scroll when using the AppendText function. It scrolls so that all the text is above the visible area, which isn't very useful behaviour. I tried just adding a call to ScrollLines(-1) after appending the text - which does scroll it to the correct position - but this can lead to the window flashing when it auto-scrolls. So I'm looking for another way to automatically scroll to the bottom. So far, my solution is to bypass the AppendText function's auto-scroll and implement my own, like this: def append_text(textctrl, text): before_number_of_lines = textctrl.GetNumberOfLines() textctrl.SetInsertionPointEnd() textctrl.WriteText(text) after_number_of_lines = textctrl.GetNumberOfLines() textctrl.ScrollLines(before_number_of_lines - after_number_of_lines + 1) Is there a better way?

    Read the article

  • How do I code this relationship in SQLAlchemy?

    - by Martin Del Vecchio
    I am new to SQLAlchemy (and SQL, for that matter). I can't figure out how to code the idea I have in my head. I am creating a database of performance-test results. A test run consists of a test type and a number (this is class TestRun below) A test suite consists the version string of the software being tested, and one or more TestRun objects (this is class TestSuite below). A test version consists of all test suites with the given version name. Here is my code, as simple as I can make it: from sqlalchemy import * from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import relationship, backref, sessionmaker Base = declarative_base() class TestVersion (Base): __tablename__ = 'versions' id = Column (Integer, primary_key=True) version_name = Column (String) def __init__ (self, version_name): self.version_name = version_name class TestRun (Base): __tablename__ = 'runs' id = Column (Integer, primary_key=True) suite_directory = Column (String, ForeignKey ('suites.directory')) suite = relationship ('TestSuite', backref=backref ('runs', order_by=id)) test_type = Column (String) rate = Column (Integer) def __init__ (self, test_type, rate): self.test_type = test_type self.rate = rate class TestSuite (Base): __tablename__ = 'suites' directory = Column (String, primary_key=True) version_id = Column (Integer, ForeignKey ('versions.id')) version_ref = relationship ('TestVersion', backref=backref ('suites', order_by=directory)) version_name = Column (String) def __init__ (self, directory, version_name): self.directory = directory self.version_name = version_name # Create a v1.0 suite suite1 = TestSuite ('dir1', 'v1.0') suite1.runs.append (TestRun ('test1', 100)) suite1.runs.append (TestRun ('test2', 200)) # Create a another v1.0 suite suite2 = TestSuite ('dir2', 'v1.0') suite2.runs.append (TestRun ('test1', 101)) suite2.runs.append (TestRun ('test2', 201)) # Create another suite suite3 = TestSuite ('dir3', 'v2.0') suite3.runs.append (TestRun ('test1', 102)) suite3.runs.append (TestRun ('test2', 202)) # Create the in-memory database engine = create_engine ('sqlite://') Session = sessionmaker (bind=engine) session = Session() Base.metadata.create_all (engine) # Add the suites in version1 = TestVersion (suite1.version_name) version1.suites.append (suite1) session.add (suite1) version2 = TestVersion (suite2.version_name) version2.suites.append (suite2) session.add (suite2) version3 = TestVersion (suite3.version_name) version3.suites.append (suite3) session.add (suite3) session.commit() # Query the suites for suite in session.query (TestSuite).order_by (TestSuite.directory): print "\nSuite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs)) for run in suite.runs: print " Test '%s', result %d" % (run.test_type, run.rate) # Query the versions for version in session.query (TestVersion).order_by (TestVersion.version_name): print "\nVersion %s has %d test suites:" % (version.version_name, len (version.suites)) for suite in version.suites: print " Suite directory %s, version %s has %d test runs:" % (suite.directory, suite.version_name, len (suite.runs)) for run in suite.runs: print " Test '%s', result %d" % (run.test_type, run.rate) The output of this program: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 
Test 'test2', result 202 Version v1.0 has 1 test suites: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Version v1.0 has 1 test suites: Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Version v2.0 has 1 test suites: Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 This is not correct, since there are two TestVersion objects with the name 'v1.0'. I hacked my way around this by adding a private list of TestVersion objects, and a function to find a matching one: versions = [] def find_or_create_version (version_name): # Find existing for version in versions: if version.version_name == version_name: return (version) # Create new version = TestVersion (version_name) versions.append (version) return (version) Then I modified my code that adds the records to use it: # Add the suites in version1 = find_or_create_version (suite1.version_name) version1.suites.append (suite1) session.add (suite1) version2 = find_or_create_version (suite2.version_name) version2.suites.append (suite2) session.add (suite2) version3 = find_or_create_version (suite3.version_name) version3.suites.append (suite3) session.add (suite3) Now the output is what I want: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 Version v1.0 has 2 test suites: Suite directory dir1, version v1.0 has 2 test runs: Test 'test1', result 100 Test 'test2', result 200 Suite directory dir2, version v1.0 has 2 test runs: Test 'test1', result 101 Test 'test2', result 201 Version v2.0 has 1 test suites: Suite directory dir3, version v2.0 has 2 test runs: Test 'test1', result 102 Test 'test2', result 202 This feels wrong to me; it doesn't feel right that I am manually keeping track of the unique version names, and manually adding the suites to the appropriate TestVersion objects. Is this code even close to being correct? And what happens when I'm not building the entire database from scratch, as in this example. If the database already exists, do I have to query the database's TestVersion table to discover the unique version names? Thanks in advance. I know this is a lot of code to wade through, and I appreciate the help.
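
    A hedged sketch of the more usual shape for this: look the version up in the session/database instead of keeping a parallel Python list. A unique=True constraint on version_name would back this up, and SQLAlchemy's "unique object" recipe is the fuller version of the same idea; it also answers the "what if the database already exists" case, since the query finds previously stored versions too.

        def get_or_create_version(session, version_name):
            version = (session.query(TestVersion)
                              .filter_by(version_name=version_name)
                              .first())
            if version is None:
                version = TestVersion(version_name)
                session.add(version)
            return version

        # Usage, replacing find_or_create_version():
        #   version1 = get_or_create_version(session, suite1.version_name)
        #   version1.suites.append(suite1)
        #   session.add(suite1)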

    Read the article

  • Google App Engine appcfg.py data_upload Authentication fail

    - by Pradeep Upadhyay
    Hi, I am using appcfg.py to upload data to the datastore from a CSV file, but every time I try, I get the error: [info ] Authentication failed even though I am using the admin ID and password. In my app.yaml file I have: handlers: - url: /remote_api script: $PYTHON_LIB/google/appengine/ext/remote_api/handler.py login: admin - url: .* script: MainHandler.py Can anybody please help me? Thanks in advance.

    Read the article

  • Avoiding nesting two for loops

    - by chavanak
    Hi, please have a look at the code below: import string from collections import defaultdict first_complex=open( "residue_a_chain_a_b_backup.txt", "r" ) first_complex_lines=first_complex.readlines() first_complex_lines=map( string.strip, first_complex_lines ) first_complex.close() second_complex=open( "residue_a_chain_a_c_backup.txt", "r" ) second_complex_lines=second_complex.readlines() second_complex_lines=map( string.strip, second_complex_lines ) second_complex.close() list_1=[] list_2=[] for x in first_complex_lines: if x[0]!="d": list_1.append( x ) for y in second_complex_lines: if y[0]!="d": list_2.append( y ) j=0 list_3=[] list_4=[] for a in list_1: pass for b in list_2: pass if a==b: list_3.append( a ) kvmap=defaultdict( int ) for k in list_3: kvmap[k]+=1 print kvmap Normally I use izip or izip_longest to combine two for loops, but this time the lengths of the files are different. I don't want a None entry. If I use the above method, the run time grows so much it becomes useless. How am I supposed to get the two for loops going? Cheers, Chavanak
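
    Assuming the goal is the same tally the nested loops would produce (each line of the first file paired with every equal line of the second), here is a hedged sketch that indexes one list in a dict first, turning O(n*m) into O(n+m):

        from collections import defaultdict

        counts_2 = defaultdict(int)
        for line in list_2:
            counts_2[line] += 1

        kvmap = defaultdict(int)
        for line in list_1:
            if line in counts_2:
                kvmap[line] += counts_2[line]   # one hit per matching pair
        print kvmap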

    Read the article

  • Editing a ForeignKey from the "child" table

    - by profuel
    I'm programming in Python with Django. I have these models: class Product(mymodels.Base): title = models.CharField() price = models.ForeignKey(Price) promoPrice = models.ForeignKey(Price, related_name="promo_price") class Price(mymodels.Base): value = models.DecimalField(max_digits=10, decimal_places=3) taxValue = models.DecimalField("Tax Value", max_digits=10, decimal_places=3) valueWithTax = models.DecimalField("Value with Tax", max_digits=10, decimal_places=3) I want to see INPUTs for both prices when editing a Product, but cannot find any way to do that. inlines = [...] works only from Price to Product, which is the wrong way around in this case. Thanks in advance.

    Read the article

  • Serve external template in Django

    - by AlexeyMK
    Hey, I want to do something like return render_to_response("http://docs.google.com/View?id=bla", args) and serve an external page with Django template arguments. Django doesn't like this (it looks for templates in very particular places). What's the easiest way to make this work? Right now I'm thinking of using urllib to save the page somewhere local on my server and then serving it with the template loader pointing there. Note: I'm not looking for anything particularly scalable here; I realize my proposal above is a little dirty.
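
    A minimal sketch of doing that in memory, without saving anything to disk: fetch the document and render it as a template string (caching the fetched page would avoid hitting the external site on every request; the view name and context are just examples).

        import urllib2

        from django.http import HttpResponse
        from django.template import Template, Context

        def external_view(request):
            raw = urllib2.urlopen('http://docs.google.com/View?id=bla').read()
            args = {'name': 'example'}                  # whatever context you need
            return HttpResponse(Template(raw).render(Context(args)))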

    Read the article

  • Django: Get remote IP address inside settings.py

    - by Silver Light
    Hello! I want to enable debugging (DEBUG = True) for my Django project only if it runs on localhost. How can I get the user's IP address inside settings.py? I would like something like this to work: #Debugging only on localhost if user_ip == '127.0.0.1': DEBUG = True else: DEBUG = False How do I put the user's IP address into a user_ip variable inside the settings.py file?
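
    One thing to note: settings.py is imported once at start-up, before any request exists, so there is no visitor IP to inspect there. A common substitute is to key DEBUG off the machine the code runs on and use INTERNAL_IPS for per-visitor behaviour; a sketch (the hostnames are examples):

        import socket

        DEBUG = socket.gethostname() in ('my-laptop', 'localhost')
        TEMPLATE_DEBUG = DEBUG

        # per-visitor features (debug toolbar, {% if debug %}) use INTERNAL_IPS:
        INTERNAL_IPS = ('127.0.0.1',)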

    Read the article
