Search Results

Search found 4496 results on 180 pages for 'django uploads'.

  • How to manage feeds with subclassed object in Django 1.2?

    - by Matteo
    Hi, I'm trying to generate an RSS feed from a model like this one, selecting all the Entry objects:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.contrib.auth.models import User
        from imagekit.models import ImageModel
        import datetime

        class Entry(ImageModel):
            date_pub = models.DateTimeField(default=datetime.datetime.now)
            author = models.ForeignKey(User)
            via = models.URLField(blank=True)
            comments_allowed = models.BooleanField(default=True)
            icon = models.ImageField(upload_to='icon/', blank=True)

            class IKOptions:
                spec_module = 'journal.icon_specs'
                cache_dir = 'icon/resized'
                image_field = 'icon'

        class Post(Entry):
            title = models.CharField(max_length=200)
            description = models.TextField()
            slug = models.SlugField(unique=True)

            def __unicode__(self):
                return self.title

        class Photo(Entry):
            alt = models.CharField(max_length=200)
            description = models.TextField(blank=True)
            original = models.ImageField(upload_to='photo/')

            class IKOptions:
                spec_module = 'journal.photo_specs'
                cache_dir = 'photo/resized'
                image_field = 'original'

            def __unicode__(self):
                return self.alt

        class Quote(Entry):
            blockquote = models.TextField()
            cite = models.TextField(blank=True)

            def __unicode__(self):
                return self.blockquote

    When I use render_to_response in my views, I simply call:

        def get_journal_entries(request):
            entries = Entry.objects.all().order_by('-date_pub')
            return render_to_response('journal/entries.html', {'entries': entries})

    And then I use a conditional template to render the right snippets of HTML:

        {% extends "base.html" %}
        {% block main %}
        <hr>
        {% for entry in entries %}
        {% if entry.post %}[...]{% endif %}
        [...]

    But I cannot do the same with the feed framework in Django 1.2... Any suggestions, please?
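
    For context, a minimal sketch of what a Django 1.2 feed class for these models might look like; the EntriesFeed name and the title/link values are placeholders, and the hasattr checks mirror the {% if entry.post %} trick from the template (multi-table inheritance exposes each child row as a lowercase attribute on the parent instance, and in Python 2 hasattr swallows the DoesNotExist lookup for rows that aren't that subclass):

        from django.contrib.syndication.views import Feed
        from journal.models import Entry  # app name assumed from spec_module above

        class EntriesFeed(Feed):
            title = "Journal entries"   # placeholder
            link = "/journal/"          # placeholder

            def items(self):
                return Entry.objects.order_by('-date_pub')[:20]

            def item_title(self, item):
                if hasattr(item, 'post'):
                    return item.post.title
                if hasattr(item, 'photo'):
                    return item.photo.alt
                return u"Quote"

            def item_description(self, item):
                if hasattr(item, 'post'):
                    return item.post.description
                if hasattr(item, 'photo'):
                    return item.photo.description
                return item.quote.blockquote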

  • How can I get sessions to work if I'm using Google App Engine + Django 1.1?

    - by user341642
    Is there a way for me to get sessions working? I know Django has built-in session management, and GAE has some tools for it if you're using their watered-down version of Django 0.96, but is there a way to get sessions to work if you're trying to use GAE with Django 1.1 (i.e. the use_library() call)? I assume a db-backed session doesn't work, and a filesystem-backed one won't work because we don't have access to the filesystem if we deploy to the Google production servers. This kinda worked (as in, didn't crap out) when I used SessionMiddleware backed by a local-memory, non-persistent cache (i.e. setting SESSION_ENGINE to django.contrib.sessions.backends.cache). But the session never seems to persist in this case, no matter how I set the timeouts: a new session key is generated on every page reload. Maybe this is because GAE assumes complete statelessness with each request and blows away my local cache? Apologies in advance, I'm pretty new to Python. Any suggestions would be greatly appreciated.
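
    If that suspicion about the local cache is right, the usual workaround is to back the cache-based sessions with App Engine's shared memcache service instead of per-process local memory. A sketch of the settings; the exact CACHE_BACKEND string depends on which memcache adapter your GAE/Django bridge provides, so treat it as an assumption:

        # settings.py -- sketch only; the backend string is a placeholder
        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
        # Shared memcache persists across request handlers, unlike
        # a local-memory cache that each process sees separately.
        CACHE_BACKEND = 'memcached://'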

  • In practice, what are the key differences between Heroku and webfaction? [closed]

    - by jdotjdot
    I've been building and hosting webapps, mainly in Django and Flask, for some time now. Mainly, I've been hosting them on Heroku, because of the free tier and the ease of git-enabled application updating. I have seen that a lot of Django users prefer Webfaction. I looked through their offerings, and they seem to me like a standard web hosting service. Questions:

    1. Why might Webfaction be considered a good hosting service for Django apps?
    2. If Heroku is generally called a "Platform-as-a-Service," what does that make Webfaction?
    3. Does it have any important similarities to, or distinctions from, Heroku that I might somehow be missing?

  • PHP - file uploads and ways to prevent viruses from being uploaded in zip/rar archives

    - by Joe
    I am trying to provide a service on my website to allow users to upload files so others can download them. The issue is, since some of the files I will allow will be .zip/.rar archives, I am curious what ideas exist to help prevent the uploading of archives with viruses/trojans etc. included. Some .zip files will legitimately include .exe files, so I am not sure what options I have. I thought about it, and I don't have a method for checking the files with a virus scanner on the server, since I am on shared hosting without the option to run a service like that, nor do I have the knowledge of how to do it. I am also aware there is no PHP class or database that can scan the files for viruses. This means my only options are to rely on: a) manual approval <-- not an acceptable option for me, as it might become a busy site with thousands of uploads; or b) getting the users to somehow point it out if a file has viruses, through voting or "flagging", etc. So, regarding "b" - what ideas would you suggest?

  • How hard is it to modify the Django Models?

    - by alex
    I am doing geolocation, and Django does not have a PointField, so I am forced to write raw SQL. GeoDjango, Django's geographic library, does not support the following query for MySQL databases (can someone verify that for me?):

        cursor.execute("SELECT id FROM l_tag WHERE\
            (GLength(LineStringFromWKB(LineString(asbinary(utm),asbinary(PointFromWKB(point(%s, %s)))))) < %s + accuracy + %s)\

    I don't know why the GeoDjango library cannot do this against a MySQL database. I hate writing raw SQL for calculating distances between two points. Is there a way I can create my own library for Django that can handle this? If so, how hard is it?
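
    As for packaging the raw SQL so it stops leaking into view code: a custom manager is one way, and writing one is not especially hard. A rough sketch, under the assumption that the model maps to the l_tag table with the utm and accuracy columns used above (the Tag name, field types, and single radius parameter are made up for illustration):

        from django.db import connection, models

        class NearbyManager(models.Manager):
            def near_point(self, x, y, radius):
                # Run the spatial SQL once, here, and hand back
                # ordinary model instances for the matching rows.
                cursor = connection.cursor()
                cursor.execute(
                    "SELECT id FROM l_tag WHERE "
                    "GLength(LineStringFromWKB(LineString(asbinary(utm), "
                    "asbinary(PointFromWKB(point(%s, %s)))))) < %s + accuracy",
                    [x, y, radius])
                ids = [row[0] for row in cursor.fetchall()]
                return self.filter(id__in=ids)

        class Tag(models.Model):
            utm = models.TextField()        # placeholder for the geometry column
            accuracy = models.FloatField()  # assumed numeric column
            objects = NearbyManager()

            class Meta:
                db_table = 'l_tag'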

  • After extending the User model in django, how do you create a ModelForm?

    - by mlissner
    I extended the User model in Django to include several other variables, such as location and employer. Now I'm trying to create a form that has the following fields:

    - First name (from User)
    - Last name (from User)
    - Location (from UserProfile, which extends User via a foreign key)
    - Employer (also from UserProfile)

    I have created a ModelForm:

        from django.forms import ModelForm
        from django.contrib import auth
        from alert.userHandling.models import UserProfile

        class ProfileForm(ModelForm):
            class Meta:
                # model = auth.models.User  # this gives me the User fields
                model = UserProfile  # this gives me the UserProfile fields

    So, my question is: how can I create a ModelForm that has access to all of the fields, whether they are from the User model or the UserProfile model? Hope this makes sense. I'll be happy to clarify if there are any questions.
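
    One common pattern, shown here as a sketch (it assumes UserProfile has a foreign key named user, fields named location and employer, and that the form edits an existing profile whose user is already set): declare the two User columns as extra form fields, then push them onto profile.user in save().

        from django import forms
        from alert.userHandling.models import UserProfile

        class ProfileForm(forms.ModelForm):
            # Extra fields for the two User columns; the ModelForm
            # machinery supplies location/employer from UserProfile.
            first_name = forms.CharField(max_length=30)
            last_name = forms.CharField(max_length=30)

            class Meta:
                model = UserProfile
                fields = ('location', 'employer')

            def __init__(self, *args, **kwargs):
                super(ProfileForm, self).__init__(*args, **kwargs)
                if self.instance.pk:
                    self.fields['first_name'].initial = self.instance.user.first_name
                    self.fields['last_name'].initial = self.instance.user.last_name

            def save(self, commit=True):
                profile = super(ProfileForm, self).save(commit=commit)
                user = profile.user
                user.first_name = self.cleaned_data['first_name']
                user.last_name = self.cleaned_data['last_name']
                if commit:
                    user.save()
                return profile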

  • How to create an exception folder in a django site?

    - by ninja123
    There are a few folders on my Django site that I want to be served as they would be on any other, non-Django site - namely forum (vBulletin) and cpanel. I currently run the site with FastCGI. My .htaccess looks like this:

        AddHandler application/x-httpd-php5 .htm
        AddHandler application/x-httpd-php5 .html
        AddHandler fastcgi-script .fcgi
        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        AddHandler application/x-httpd-php5 .htm
        RewriteCond %{REQUEST_URI} !(mysite.fcgi)
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]

    What lines can I add so that www.mysite.com/forum is not picked up by a Django URL and is rendered as it would be normally? Thanks.
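
    A rough sketch of one common answer, assuming forum and cpanel sit at the document root: exempt those paths with an extra RewriteCond before the catch-all rule, so Apache serves them directly instead of handing the request to the FastCGI script.

        RewriteCond %{REQUEST_URI} !^/(forum|cpanel)
        RewriteCond %{REQUEST_URI} !(mysite.fcgi)
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]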

  • How do I configure the Python logging module in Django?

    - by mipadi
    I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:

        import logging
        import logging.handlers
        import os

        date_fmt = '%m/%d/%Y %H:%M:%S'
        log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
        log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
        log_name = os.path.join(log_dir, "nyrb.log")
        bytes = 1024 * 1024  # 1 MB

        if not os.path.exists(log_dir):
            os.makedirs(log_dir)

        handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
        handler.setFormatter(log_formatter)
        handler.setLevel(logging.DEBUG)
        logging.getLogger().setLevel(logging.DEBUG)
        logging.getLogger().addHandler(handler)
        logging.getLogger(__name__).info("Initialized logging subsystem")

    At startup, I get a couple of Django-related messages, as well as the "Initialized logging subsystem" message, in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and they use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?

  • django-lfs image upload doesn't work on some environments?

    - by vernomcrp
    Yesterday I finished setting up django-lfs without buildout, and happily created categories and products. But when I upload an image for a product, after I push the upload button its status stays at 'pending' forever. I am on Fedora with Django==1.1.2 and PIL==1.1.7, and the same setup works on OS X. I have now tried Ubuntu 9.10, again with PIL==1.1.7 and Django==1.1.2, and it won't work there either. Does anyone have a good solution for this? (I suspect the Flash version, because the upload widget looks like it comes from Flash.)

  • Do Django tests run slower on the Mac compared to Linux?

    - by Thierry Lam
    I'm currently developing my Django projects on both:

    - Mac OS X 10.5, 32-bit
    - Ubuntu Server 9.10, 64-bit (1 CPU, 512MB RAM)

    Both of the above OSes are using:

    - Python 2.6.4
    - Django 1.1.1
    - MySQL 5.1

    Running the 12 tests for one of my applications takes:

    - Mac: 57.513s
    - Linux: 30.935s

    EDIT: Mac hardware spec: MacBook Pro, 2.2 GHz Intel Core 2 Duo, 3GB RAM.

    I'm running the Ubuntu OS on the same Mac above through VMware Fusion 2.0.6. You might argue that Ubuntu Server 64-bit is faster, but I have observed a similar speed difference on Ubuntu 8.10 32-bit desktop edition. Even if I turn off my Linux VM and other Mac applications, I still experience the slowness. Has anyone else experienced this Django test speed difference across those two OSes?

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions - no luck. I switched to manual transaction management and did the rollback manually; that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering: why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?
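
    For what it's worth, the cleanup that "just happens" for web requests is driven by Django's request-finished handling, which never fires for hand-spawned threads, so doing it yourself is the usual answer. A sketch of a wrapper using the old (pre-1.6) transaction API; run_task and the task callable are made-up names:

        from django.db import connection, transaction

        def run_task(task):
            # Mimic the end-of-request cleanup for a thread-run task.
            try:
                task()
                transaction.commit_unless_managed()
            except:
                transaction.rollback_unless_managed()
                raise
            finally:
                # Each thread gets its own connection; close it so
                # Postgres doesn't accumulate "Idle In Transaction".
                connection.close()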

  • Does Python Django support custom SQL and denormalized databases with no Foreign Key relationships?

    - by Jay
    I've just started learning Python and Django, and I have a lot of experience building high-traffic websites using PHP and MySQL. What worries me so far is Django's overly optimistic approach: that you will never need to write custom SQL, and that it automatically creates all these foreign key relationships in your database. The one thing I've learned in the last few years of building Chess.com is that it's impossible NOT to write custom SQL when you're dealing with something like MySQL, which frequently needs to be told which indexes it should use (or avoid), and that foreign keys are a death sentence. Percona's strongest recommendation was for us to remove all FKs for optimal performance. Is there a way in Django to do this in the models file - create relationships without creating actual DB FKs? Or is there a way to start at the database level, design/create my database, and then have Django reverse-engineer the models file?
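
    On the last question: Django ships a reverse-engineering command, inspectdb, that introspects an existing database and prints a models file. On FK-less relationships: ForeignKey only grew a db_constraint option in later Django versions, so the era-appropriate workaround is a plain indexed integer column that you resolve yourself. A sketch, in which the Player/Game models and column names are purely illustrative:

        # Generate models from a hand-designed schema:
        #   python manage.py inspectdb > myapp/models.py

        from django.db import models

        class Player(models.Model):
            name = models.CharField(max_length=100)

        class Game(models.Model):
            # Indexed, but no FOREIGN KEY constraint is created.
            player_id = models.IntegerField(db_index=True)

            def player(self):
                return Player.objects.get(pk=self.player_id)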

  • Server-side access to Client Browser's Latitude/Longitude using Django.

    - by ZenGyro
    Hello. I am writing a little app that compares a user's position against a database on a web-based server written using Django, and performs some functions with it. Accessing the browser's geolocation data (in supported browsers) is fairly trivial using JavaScript. But what is the best way to let the Django server access the longitude and latitude variables? Is it best to wrap them up as a JSON object and send them to the server via POST? Or is there some easier (Geo)Django-based way to access the navigator.geolocation browser object? Please forgive a newbie a question like this, but my Google-fu only seems to find ways to insert variables into JavaScript via template tags, whereas I need it to work the other way! Any advice or code snippets greatly appreciated. Feel free to talk to me like I am an idiot.
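
    Since geolocation lives only in the browser, POSTing the coordinates is the usual route; there is no server-side handle on navigator.geolocation. A minimal sketch of the receiving view, assuming the page's JavaScript POSTs lat and lng fields (the view name and URL wiring are made up):

        from django.http import HttpResponse, HttpResponseBadRequest

        def record_position(request):
            try:
                lat = float(request.POST['lat'])
                lng = float(request.POST['lng'])
            except (KeyError, ValueError):
                return HttpResponseBadRequest('missing or malformed coordinates')
            # ... compare (lat, lng) against the database here ...
            return HttpResponse('ok')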

  • Python: How can I override one module in a package with a modified version that lives outside the package?

    - by zlovelady
    I would like to update one module in a Python package with my own version of the module, with the following conditions:

    - I want my updated module to live outside of the original package (either because I don't have access to the package source, or because I want to keep my local modifications in a separate repo, etc).
    - I want import statements that refer to the original package/module to resolve to my local module.

    Here's an example of what I'd like to do using specifics from Django, because that's where this problem has arisen for me. Say this is my project structure:

        django/
            ... the original, unadulterated django package ...
        local_django/
            conf/
                settings.py
        myproject/
            __init__.py
            myapp/
                myfile.py

    And then in myfile.py:

        # These imports should fetch modules from the original django package
        from django import models
        from django.core.urlresolvers import reverse

        # I would like this following import statement to grab a custom version of settings
        # that I define in local_django/conf/settings.py
        from django.conf import settings

        def foo():
            return settings.some_setting

    Can I do some magic with the __import__ statement in myproject/__init__.py to accomplish this? Is there a more "pythonic" way to achieve this?
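
    A sketch of the usual trick: patch both the sys.modules entry and the parent package's attribute, before anything else imports settings. This assumes local_django and local_django/conf contain __init__.py files so the path is importable; note also that in real Django, django.conf.settings is a LazySettings object rather than a plain module, so this illustrates the Python mechanism more than a recommended Django practice.

        # myproject/__init__.py -- must run before any `from django.conf import settings`
        import sys

        import django.conf
        from local_django.conf import settings as local_settings

        # `import django.conf.settings` consults sys.modules...
        sys.modules['django.conf.settings'] = local_settings
        # ...while `from django.conf import settings` reads the attribute.
        django.conf.settings = local_settings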

  • How to use external static files with Django (serving external files once again)?

    - by Tomas Novotny
    Hi, even after Googling and reading all relevant posts on Stack Overflow, I still can't get static files working in my Django application. Here is how my files look:

    settings.py:

        MEDIA_ROOT = os.path.join(SITE_ROOT, 'static')
        MEDIA_URL = '/static/'

    urls.py:

        from DjangoBandCreatorSite.settings import DEBUG

        if DEBUG:
            urlpatterns += patterns('', (
                r'^static/(?P<path>.*)$',
                'django.views.static.serve',
                {'document_root': 'static'}
            ))

    template:

        <script type="text/javascript" src="/static/jquery.js"></script>
        <script type="text/javascript">

    I am trying to use jquery.js stored in the directory "static". I am using:

    - Windows XP
    - Python 2.6.4
    - Django 1.2.3

    Thank you very much for any help.
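
    One detail worth checking: document_root is given here as the relative path 'static', which is resolved against the process's working directory rather than the project directory. A sketch of the usual Django 1.2-era pattern, reusing the absolute MEDIA_ROOT from settings:

        from django.conf import settings
        from django.conf.urls.defaults import patterns

        if settings.DEBUG:
            urlpatterns += patterns('',
                (r'^static/(?P<path>.*)$', 'django.views.static.serve',
                 {'document_root': settings.MEDIA_ROOT}),
            )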

  • Django 0.0.0.0:80; can't access remotely

    - by user349555
    Hello, I'm trying to access my Django server from another computer on the same network. I've set up my server and can view everything correctly using python manage.py runserver and going to http://127.0.0.1:8000, but when I try to use python manage.py runserver 0.0.0.0:80, I can't view my Django page from another computer. The computer hosting the Django server has intranet IP 192.168.1.146. On my secondary computer, I fire up a browser and try to access http://192.168.1.146:80, to no avail. I've also forwarded port 80 (and I've tried 8000 as well), also to no avail :(. HELP!
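
    For reference, two failure modes commonly collide here: binding a port below 1024 usually requires elevated privileges, and host firewalls often block incoming connections. A sketch of the simpler setup that avoids the first problem entirely (the IP is taken from the question):

        # On the server: bind to all interfaces on a high port
        python manage.py runserver 0.0.0.0:8000

        # From the other machine, browse to:
        #   http://192.168.1.146:8000/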

  • Django install on a shared host, .htaccess help

    - by redconservatory
    I am trying to install Django on a shared host using the following instructions: docs.google.com/View?docid=dhhpr5xs_463522g

    My problem is with the following line in my root .htaccess:

        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]

    When I include this line I get a 500 error with almost all of my domains on this account. My cgi-bin directory is home/my-username/public_html/cgi-bin/. The wcgi.py file contains:

        #!/usr/local/bin/python
        import os, sys

        sys.path.insert(0, "/home/username/django/")
        sys.path.insert(0, "/home/username/django/projects")
        sys.path.insert(0, "/home/username/django/projects/newprojects")

        import django.core.handlers.wsgi

        os.chdir("/home/username/django/projects/newproject")  # optional
        os.environ['DJANGO_SETTINGS_MODULE'] = "newproject.settings"

        def runcgi():
            environ = dict(os.environ.items())
            environ['wsgi.input'] = sys.stdin
            environ['wsgi.errors'] = sys.stderr
            environ['wsgi.version'] = (1, 0)
            environ['wsgi.multithread'] = False
            environ['wsgi.multiprocess'] = True
            environ['wsgi.run_once'] = True
            application = django.core.handlers.wsgi.WSGIHandler()
            if environ.get('HTTPS', 'off') in ('on', '1'):
                environ['wsgi.url_scheme'] = 'https'
            else:
                environ['wsgi.url_scheme'] = 'http'
            headers_set = []
            headers_sent = []

            def write(data):
                if not headers_set:
                    raise AssertionError("write() before start_response()")
                elif not headers_sent:
                    # Before the first output, send the stored headers
                    status, response_headers = headers_sent[:] = headers_set
                    sys.stdout.write('Status: %s\r\n' % status)
                    for header in response_headers:
                        sys.stdout.write('%s: %s\r\n' % header)
                    sys.stdout.write('\r\n')
                sys.stdout.write(data)
                sys.stdout.flush()

            def start_response(status, response_headers, exc_info=None):
                if exc_info:
                    try:
                        if headers_sent:
                            # Re-raise original exception if headers sent
                            raise exc_info[0], exc_info[1], exc_info[2]
                    finally:
                        exc_info = None  # avoid dangling circular ref
                elif headers_set:
                    raise AssertionError("Headers already set!")
                headers_set[:] = [status, response_headers]
                return write

            result = application(environ, start_response)
            try:
                for data in result:
                    if data:  # don't send headers until body appears
                        write(data)
                if not headers_sent:
                    write('')  # send headers now if body was empty
            finally:
                if hasattr(result, 'close'):
                    result.close()

        runcgi()

    Only I changed "username" to my username...
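
    One plausible cause of the 500s on the other domains: on shared hosts, addon domains often live under the same public_html, so a catch-all RewriteRule in the root .htaccess captures their requests too. A hedged sketch of scoping the rule to a single host (mydjangosite.com is a placeholder):

        RewriteCond %{HTTP_HOST} ^(www\.)?mydjangosite\.com$ [NC]
        RewriteCond %{REQUEST_URI} !(wcgi.py)
        RewriteRule ^(.*)$ /cgi-bin/wcgi.py/$1 [QSA,L]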

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests, and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). [Process diagram omitted.]

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    [Chart omitted.] The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    [Chart omitted.] The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size, and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources

    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests
      - Experiment Metadata
      - Experiment Datasets
      - Raw Data: 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads
    - OData feeds of raw data from blocked/parallelized transfer tests
      - Experiment Metadata
      - Experiment Datasets
      - Raw Data: 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks
    - Excel worksheet showing summarizations and comparisons

  • NoMethodError Rails multiple file uploads

    - by Danny McClelland
    Hi everyone, I am working on getting multiple file uploads working for a model in my application. I have included the code below:

    delivers_controller.rb:

        # POST /delivers
        def create
          @deliver = Deliver.new(params[:deliver])
          process_file_uploads(@deliver)
          if @deliver.save
            flash[:notice] = 'Task was successfully created.'
            redirect_to(@deliver)
          else
            render :action => "new"
          end
        end

        protected

        def process_file_uploads(deliver)
          i = 0
          while params[:attachment]['file_'+i.to_s] != "" && !params[:attachment]['file_'+i.to_s].nil?
            deliver.assets.build(:data => params[:attachment]['file_'+i.to_s])
            i += 1
          end
        end

    deliver.rb:

        has_many :assets, :as => :attachable, :dependent => :destroy
        validate :validate_attachments

        Max_Attachments = 5
        Max_Attachment_Size = 5.megabyte

        def validate_attachments
          errors.add_to_base("Too many attachments - maximum is #{Max_Attachments}") if assets.length > Max_Attachments
          assets.each { |a| errors.add_to_base("#{a.name} is over #{Max_Attachment_Size/1.megabyte}MB") if a.file_size > Max_Attachment_Size }
        end

    assets_controller.rb:

        class AssetsController < ApplicationController
          def show
            asset = Asset.find(params[:id])
            # do security check here
            send_file asset.data.path, :type => asset.data_content_type
          end

          def destroy
            asset = Asset.find(params[:id])
            @asset_id = asset.id.to_s
            @allowed = Deliver::Max_Attachments - asset.attachable.assets.count
            asset.destroy
          end
        end

    asset.rb:

        class Asset < ActiveRecord::Base
          has_attached_file :data,
          belongs_to :attachable, :polymorphic => true

          def url(*args)
            data.url(*args)
          end

          def name
            data_file_name
          end

          def content_type
            data_content_type
          end

          def file_size
            data_file_size
          end
        end

    Whenever I create a new deliver item and try to attach any files, I get the following error:

        NoMethodError in DeliversController#create
        You have a nil object when you didn't expect it!
        You might have expected an instance of ActiveRecord::Base.
        The error occurred while evaluating nil.[]
        /Users/danny/Dropbox/SVN/railsapps/macandco/surveymanager/trunk/app/controllers/delivers_controller.rb:60:in `process_file_uploads'
        /Users/danny/Dropbox/SVN/railsapps/macandco/surveymanager/trunk/app/controllers/delivers_controller.rb:46:in `create'

    new.html.erb (Deliver view):

        <% content_for :header do -%>
          Deliver Repositories
        <% end -%>

        <% form_for(@deliver, :html => { :multipart => true }) do |f| %>
          <%= f.error_messages %>
          <p>
            <%= f.label :caseref %><br />
            <%= f.text_field :caseref %>
          </p>
          <p>
            <%= f.label :casesubject %><br />
            <%= f.text_area :casesubject %>
          </p>
          <p>
            <%= f.label :description %><br />
            <%= f.text_area :description %>
          </p>
          <p>Pending Attachments: (Max of <%= Deliver::Max_Attachments %> each under <%= Deliver::Max_Attachment_Size/1.megabyte %>MB)
          <% if @deliver.assets.count >= Deliver::Max_Attachments %>
            <input id="newfile_data" type="file" disabled />
          <% else %>
            <input id="newfile_data" type="file" />
          <% end %>
          <div id="attachment_list"><ul id="pending_files"></ul></div>
          </p>
          <p>
            <%= f.submit 'Create' %>
          </p>
        <% end %>

        <%= link_to 'Back', delivers_path %>

    show.html.erb (Deliver view):

        <% content_for :header do -%>
          Deliver Repositories
        <% end -%>

        <p>
          <b>Title:</b>
          <%=h @deliver.caseref %>
        </p>
        <p>
          <b>Body:</b>
          <%=h @deliver.casesubject %>
        </p>
        <p><b>Attached Files:</b><div id="attachment_list"><%= render :partial => "attachment", :collection => @deliver.assets %></div></p>

        <%= link_to 'Edit', edit_deliver_path(@deliver) %> |
        <%= link_to 'Back', deliver_path %>
        <%- if logged_in? %>
          <%= link_to 'Edit', edit_deliver_path(@deliver) %> |
          <%= link_to 'Back', delivers_path %>
        <% end %>

    _attachment.html.erb (Deliver view):

        <% if !attachment.id.nil? %>
          <li id='attachment_<%= attachment.id %>'>
            <a href='<%= attachment.url %>'><%= attachment.name %></a>
            (<%= attachment.file_size/1.kilobyte %>KB)
            <%= link_to_remote "Remove",
                :url => asset_path(:id => attachment),
                :method => :delete,
                :html => { :title => "Remove this attachment", :id => "remove" } %>
          </li>
        <% end %>

    I have been banging my head against the wall with this error all day; if anyone can shed some light on it, I would be eternally grateful! Thanks, Danny

  • ModelMultipleChoiceField and reverse()

    - by celopes
    I have a form containing a ModelMultipleChoiceField. Is it possible to come up with a URL mapping that will capture a varying number of parameters from said ModelMultipleChoiceField? I find myself doing a reverse() call in the view, passing the arguments of the form submission, and I realized that I don't know how to represent, in the URLconf, the multiple values from the SELECT tag rendered for the ModelMultipleChoiceField...
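
    Because a URL pattern captures a fixed set of groups, a varying number of selections usually travels in the query string rather than the path. A sketch of that approach (MyForm, its choices field, and the 'results' URL name are made up):

        from django.core.urlresolvers import reverse
        from django.http import HttpResponseBadRequest, HttpResponseRedirect

        def handle_form(request):
            form = MyForm(request.POST)  # hypothetical form with the ModelMultipleChoiceField
            if not form.is_valid():
                return HttpResponseBadRequest('invalid selection')
            # Serialize the selected objects' primary keys into the query string.
            ids = [str(obj.pk) for obj in form.cleaned_data['choices']]
            return HttpResponseRedirect(reverse('results') + '?ids=' + ','.join(ids))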

  • Error URL redirection

    - by xRobot
    urls.py:

        url(r'^book/(?P<booktitle>[\w\._-]+)/(?P<bookeditor>[\w\._-]+)/(?P<bookpages>[\w\._-]+)/(?P<bookid>[\d\._-]+)/$', 'book.views.book', name="book"),

    views.py:

        def book(request, booktitle, bookeditor, bookpages, bookid, template_name="book.html"):
            book = get_object_or_404(book, pk=bookid)
            if booktitle != book.book_title:
                redirect_to = "/book/%s/%s/%s/%s/%i/" % (
                    booktitle,
                    bookeditor,
                    bookpages,
                    bookid,
                )
                return HttpResponseRedirect(redirect_to)
            return render_to_response(template_name, {
                'book': book,
            },)

    So the URLs of each book look like this: example.com/book/the-bible/gesu-crist/938/12/

    I want that, if there is an error in the URL, I get redirected to the real URL by using the book.id at the end of the URL. For example, if I go to example.com/book/A-bible/gesu-crist/938/12/ then I should get redirected to example.com/book/the-bible/gesu-crist/938/12/. But when I go to a wrong URL, I get this error:

        TypeError at /book/A-bible/gesu-crist/938/12/
        %d format: a number is required, not unicode

    Why? What do I have to do?
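
    Two things conspire in that format string: it has five conversion specifiers (four %s plus %i) but the tuple holds only four values, and %i requires a number while URL captures arrive as unicode strings - the reported TypeError comes from the latter. A sketch of a corrected view under the names used above (Book is the assumed model class):

        from django.http import HttpResponseRedirect
        from django.shortcuts import get_object_or_404, render_to_response

        def book(request, booktitle, bookeditor, bookpages, bookid, template_name="book.html"):
            book_obj = get_object_or_404(Book, pk=bookid)  # 'Book' model assumed
            if booktitle != book_obj.book_title:
                # Four %s specifiers for four values; %s formats ints fine too,
                # and the canonical title comes from the database object.
                redirect_to = "/book/%s/%s/%s/%s/" % (
                    book_obj.book_title, bookeditor, bookpages, book_obj.id)
                return HttpResponseRedirect(redirect_to)
            return render_to_response(template_name, {'book': book_obj})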

  • What causes the Openid error: Received "invalidate_handle" from server

    - by BryanWheelock
    I'm new to OpenID, and I am getting an "invalidate_handle" error that I have no idea how to fix. I'm using django_authopenid.

        [Thu Apr 29 14:13:28 2010] [error] Generated checkid_setup request to https://www.google.com/accounts/o8/ud with assocication AOxxxxxxxxOX5-V9oDc3-btHhFxzAcccccccccc2RTHgh
        [Thu Apr 29 14:13:29 2010] [error] Error attempting to use stored discovery information: <openid.consumer.consumer.TypeURIMismatch: Required type http://specs.openid.net/auth/2.0/signon not found in ['http://specs.openid.net/auth/2.0/server', 'http://openid.net/srv/ax/1.0', 'http://specs.openid.net/extensions/ui/1.0/mode/popup', 'http://specs.openid.net/extensions/ui/1.0/icon', 'http://specs.openid.net/extensions/pape/1.0'] for endpoint <openid.consumer.discover.OpenIDServiceEndpoint server_url='https://www.google.com/accounts/o8/ud' claimed_id=None local_id=None canonicalID=None used_yadis=True >>
        [Thu Apr 29 14:13:29 2010] [error] Attempting discovery to verify endpoint
        [Thu Apr 29 14:13:29 2010] [error] Performing discovery on https://www.google.com/accounts/o8/id?id=AOxxxxxxxxOX5-V9oDc3-btHhFxzAcccccccccc2RTHgh
        [Thu Apr 29 14:13:29 2010] [error] Received id_res response from https://www.google.com/accounts/o8/ud using association AOxxxxxxxxOX5-V9oDc3-btHhFxzAcccccccccc2RTHgh
        [Thu Apr 29 14:13:29 2010] [error] Using OpenID check_authentication
        [Thu Apr 29 14:13:29 2010] [error] op_endpoint
        [Thu Apr 29 14:13:29 2010] [error] claimed_id
        [Thu Apr 29 14:13:29 2010] [error] identity
        [Thu Apr 29 14:13:29 2010] [error] return_to
        [Thu Apr 29 14:13:29 2010] [error] response_nonce
        [Thu Apr 29 14:13:29 2010] [error] assoc_handle
        [Thu Apr 29 14:13:29 2010] [error] Received "invalidate_handle" from server https://www.google.com/accounts/o8/ud
