My form is:

class MapForm(forms.ModelForm):
    class Meta:
        model = Map
        fields = ('mapName', 'kmlStr')
and the view is:

map_form = MapForm(request.POST or None)
if map_form.is_valid():
    map = map_form.save(commit=False)
    map.mapName = map_form.mapName  # is this code right?

How do I get mapName's value? Do I use map_form.mapName?
thanks
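For reference, the validated value of a form field is normally read from the form's cleaned_data dictionary; a minimal sketch of that part of the view, assuming the model and form are exactly as above:

map_form = MapForm(request.POST or None)
if map_form.is_valid():
    map = map_form.save(commit=False)
    # cleaned_data holds the validated value of each declared field
    map.mapName = map_form.cleaned_data['mapName']
    map.save()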
from django.core.mail import send_mail

MYMESSAGE = "<div>Hello</div><p></p>Hello"
send_mail("testing", MYMESSAGE, "[email protected]", ['[email protected]'], fail_silently=False)
However, this message doesn't get the HTML MIME type when it is sent. In my Outlook, I see the raw code...
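send_mail sends plain text only; a sketch of the usual fix with EmailMultiAlternatives, reusing the placeholder addresses above:

from django.core.mail import EmailMultiAlternatives

text_content = "Hello"
html_content = "<div>Hello</div><p></p>Hello"
msg = EmailMultiAlternatives("testing", text_content, "[email protected]", ["[email protected]"])
msg.attach_alternative(html_content, "text/html")  # mark this part as HTML
msg.send()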
For formatting a date using the date filter, you must use the following format:
{{ my_date|date:"Y-m-d" }}
If you use strftime from the standard datetime, you have to use the following :
my_date.strftime("%Y-%m-%d")
So my question is... isn't it ugly? (I guess it is because the % is also used for tags, and therefore gets escaped or something.)
But that's not the main question... I would like to use the same DATE_FORMAT parametrized in settings.py all over the project, but it seems that I cannot! Is there a workaround (for example a filter that removes the % after the date has been formatted, like {{ my_date|date|dream_filter }})? If I just use DATE_FORMAT = "%Y-%m-%d" I get something like %2001-%6-%12.
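One workaround (a sketch, not a built-in feature) is to keep a single strftime pattern in settings.py under an assumed name such as PROJECT_DATE_FORMAT and expose it through a small custom filter, so templates and Python code share one definition:

# myapp/templatetags/date_extras.py -- hypothetical module name
from django import template
from django.conf import settings

register = template.Library()

@register.filter
def project_date(value):
    # PROJECT_DATE_FORMAT is an assumed setting, e.g. "%Y-%m-%d"
    return value.strftime(settings.PROJECT_DATE_FORMAT)

In a template: {% load date_extras %} and then {{ my_date|project_date }}; in Python code: my_date.strftime(settings.PROJECT_DATE_FORMAT).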
Suppose I have my models set up already.
class books(models.Model):
    title = models.CharField...
    ISBN = models.Integer...
What if I want to add this column to my table?
user = models.ForeignKey(User, unique=True)
How would I write the raw SQL in my database so that this column works?
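For orientation, here is a minimal sketch of the model once the column exists (the field types for the elided lines are guesses, and the app name library is an assumption); in Django of that era, python manage.py sqlall library prints the CREATE TABLE statements Django expects, which you can use as a reference when hand-writing the ALTER TABLE:

from django.contrib.auth.models import User
from django.db import models

class books(models.Model):
    title = models.CharField(max_length=255)   # max_length is a placeholder
    ISBN = models.IntegerField()               # guessed from the elided original
    user = models.ForeignKey(User, unique=True)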
Hi,
I have an application that counts the number of accesses to an object for each website sharing the same database.
class SimpleHit(models.Model):
    """
    Hit is the hit counter of a given object
    """
    content_type = models.ForeignKey(ContentType)
    object_id = models.PositiveIntegerField()
    content_object = generic.GenericForeignKey('content_type', 'object_id')
    site = models.ForeignKey(Site)
    hits_total = models.PositiveIntegerField(default=0, blank=True)
    [...]

class SimpleHitManager(models.Manager):
    def get_query_set(self):
        print self.model._meta.fields
        qset = super(SimpleHitManager, self).get_query_set()
        qset = qset.filter(hits__site=settings.SITE_ID)
        return qset

class SimpleHitBase(models.Model):
    hits = generic.GenericRelation(SimpleHit)
    objects = SimpleHitManager()
    _hits = None

    def _db_get_hits(self, only=None):
        if self._hits is None:
            try:
                self._hits = self.hits.get(site=settings.SITE_ID)
            except SimpleHit.DoesNotExist:
                self._hits = SimpleHit()
        return self._hits

    @property
    def hits_total(self):
        return self._db_get_hits().hits_total

    [...]

    class Meta:
        abstract = True
And I have a model like:
class Model(SimpleHitBase):
    name = models.CharField(max_length=255)
    url = models.CharField(max_length=255)
    rss = models.CharField(max_length=255)
    creation = AutoNowAddDateTimeField()
    update = AutoNowDateTimeField()
So, my problem is this one: when I call Model.objects.all(), I would like to issue only one SQL request (not two). Currently there is one for Model, in order to get its information, and one for the hits, in order to get the counter (hits_total). This is because I cannot access hits.hits_total directly (due to SITE_ID?). I have tried select_related, but it does not seem to work...
Question:
- How can I automatically add a column, like (SELECT hits.hits_total, model.* FROM [...]), to the queryset?
- Or make select_related work with my models?
I want this model to be pluggable into all other existing models.
Thank you,
Best regards.
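One era-appropriate approach (a sketch only, not the app's actual code) is to inject the counter as an extra selected column with QuerySet.extra(). The table name hits_simplehit and the alias hit_count are assumptions; the alias deliberately avoids colliding with the hits_total property defined above:

from django.conf import settings
from django.contrib.contenttypes.models import ContentType

def with_hit_counts(queryset, hit_table='hits_simplehit'):
    # Adds a hit_count column to every row via a correlated subquery,
    # so the counter arrives in the same SQL request as the model data.
    model = queryset.model
    ct = ContentType.objects.get_for_model(model)
    subquery = (
        "SELECT hits_total FROM %(hit)s "
        "WHERE %(hit)s.content_type_id = %%s "
        "AND %(hit)s.object_id = %(obj)s.id "
        "AND %(hit)s.site_id = %%s"
    ) % {'hit': hit_table, 'obj': model._meta.db_table}
    return queryset.extra(select={'hit_count': subquery},
                          select_params=[ct.pk, settings.SITE_ID])

Each returned instance then carries obj.hit_count without a second query, e.g. for obj in with_hit_counts(Model.objects.all()): print obj.hit_count.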
Suppose this is my URL route:
(r'^test/?$','hello.life.views.test'),
How do I make it so that people can request .json or .xml, and it would pass a variable to my views.test, so that I know whether to produce JSON or XML?
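A sketch of one common pattern: capture an optional extension with a named group and hand it to the view as a keyword argument. Only hello.life.views.test comes from the route above; the payload and response handling are placeholders:

# urls.py
from django.conf.urls.defaults import patterns

urlpatterns = patterns('',
    (r'^test(?:\.(?P<format>json|xml))?/?$', 'hello.life.views.test'),
)

# hello/life/views.py
from django.http import HttpResponse
from django.utils import simplejson

def test(request, format=None):
    data = {'hello': 'world'}   # placeholder payload
    if format == 'json':
        return HttpResponse(simplejson.dumps(data), mimetype='application/json')
    # fall back to XML (or HTML) handling here
    return HttpResponse('<hello>world</hello>', mimetype='application/xml')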
I am trying to set up Google App Engine unit testing for my web application. I downloaded the file from here.
I followed the instructions in the readme by copying the directory gaeunit into the directory with the rest of my apps and registering 'gaeunit' in settings.py. This didn't seem sufficient to actually get things going. I also stuck url('^test(.*)', include('gaeunit.urls')) into my urls.py file.
When I go to the url http://localhost:8000/test, I get the following error:
[Errno 2] No such file or directory: '../../gaeunit/test'
Any suggestions? I'm not sure what I've done wrong. Thanks!
I have this error:
'people' is an invalid keyword argument for this function
class Passage(models.Model):
    name = models.CharField(max_length=255)
    who = models.ForeignKey(UserProfil)

class UserPassage(models.Model):
    passage = models.ForeignKey(Passage)
    people = models.ManyToManyField(UserProfil, null=True)

class UserProfil(models.Model):
    user = models.OneToOneField(User)
    name = models.CharField(max_length=50)
I try:
def join(request):
    user = request.user
    user_profil = UserProfil.objects.get(user=user)
    passage = Passage.objects.get(id=2)
    # line with error
    up = UserPassage.objects.create(people=user_profil, passage=passage)
    return render_to_response('thanks.html')
How do I do this correctly?
Thanks!
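For reference, a ManyToManyField cannot be assigned through create() keyword arguments, which is what raises the error above; the usual pattern (a sketch reusing the models and imports from the question) is to create the row first and then add to the relation:

def join(request):
    user_profil = UserProfil.objects.get(user=request.user)
    passage = Passage.objects.get(id=2)
    up = UserPassage.objects.create(passage=passage)
    up.people.add(user_profil)      # add to the M2M after the row exists
    return render_to_response('thanks.html')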
Hey guys, I have a query that currently finds the latest comment for each of a user's topics and then orders the topics by that comment's timestamp. What I want to do is expand this query's use and print the latest comment for each topic. The problem with this query is that while it orders the topics correctly, it prints seemingly random comments for each topic. I am trying to implement a subquery, but I am not quite sure how to approach it. I was thinking that I just had to somehow use this query to get the comments. If anyone has any ideas, I would really appreciate it.
Here is what I think I need to add
SELECT * FROM comments where topic_id='$topic_id' ORDER BY timestamp DESC LIMIT 1
Here is the query I need to modify
SELECT topic.topic_title, topic.content_type, topic.subject_id, topic.creator, topic.description, topic.topic_id,comments.message,comments.user
FROM comments
JOIN topic ON topic.topic_id = comments.topic_id
WHERE topic.creator = '$user' AND comments.timestamp > $week
GROUP BY topic_id ORDER BY MAX(comments.timestamp) DESC
I am trying to get a list of all existing model fields and properties for a given object. Is there a clean way to introspect an object so that I can get a dict of fields and properties?
class MyModel(models.Model):
    url = models.TextField()

    def _get_location(self):
        return "%s/jobs/%d" % (self.url, self.id)
    location = property(_get_location)
What I want is something that returns a dict that looks like this:
{
'id' : 1,
'url':'http://foo',
'location' : 'http://foo/jobs/1'
}
I can use model._meta.fields to get the model fields, but this doesn't give me things that are properties but not real DB fields.
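A minimal sketch of one way to build such a dict, combining _meta.fields with any property objects found on the class (instance_to_dict is an illustrative name, not an existing helper):

def instance_to_dict(obj):
    data = {}
    # concrete DB columns
    for field in obj._meta.fields:
        data[field.name] = getattr(obj, field.name)
    # class-level properties (e.g. location above)
    for name in dir(type(obj)):
        if isinstance(getattr(type(obj), name, None), property):
            data[name] = getattr(obj, name)
    return data

Calling instance_to_dict on an instance of MyModel above would return a dict close to the example shown.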
I'd like everything to function normally, except that when the visitor is on a mobile device, the entire site should use a specific set of templates.
Also, I'd like to autodetect whether the request comes from a mobile device. If so, then use that set of templates throughout the entire site.
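A sketch of one common approach, with an assumed class name and User-Agent hint list (not an existing package): middleware flags mobile requests, and views or a context processor then select the mobile template set:

# hypothetical middleware; add it to MIDDLEWARE_CLASSES
MOBILE_UA_HINTS = ('iphone', 'ipod', 'android', 'blackberry', 'opera mini', 'windows ce')

class MobileDetectionMiddleware(object):
    def process_request(self, request):
        ua = request.META.get('HTTP_USER_AGENT', '').lower()
        request.is_mobile = any(hint in ua for hint in MOBILE_UA_HINTS)

A view can then do template_name = 'mobile/index.html' if request.is_mobile else 'index.html', or a custom template loader can prepend a mobile template directory.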
I have a handful of users on a server. After updating the site, they don't see the new pages. Is there a way to globally force their browsers and providers to display the new page? Maybe from settings.py? I see there are decorators that look like they do this on a function level.
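The decorators mentioned are presumably Django's cache-control ones; a short sketch of applying them per view (the view names are placeholders), whereas a site-wide answer would normally come down to sending the appropriate Cache-Control headers everywhere or versioning URLs:

from django.http import HttpResponse
from django.views.decorators.cache import cache_control, never_cache

@never_cache                       # tells browsers and intermediaries not to reuse a cached copy
def dashboard(request):
    return HttpResponse("always fresh")

@cache_control(max_age=60, must_revalidate=True)   # or allow only short-lived caching
def listing(request):
    return HttpResponse("may be cached for 60 seconds")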
The recompilation process is the same as compilation and degrades server performance. In SQL Server 2000 and earlier versions this was a serious issue, but in SQL Server 2005 the severity of this issue has been significantly reduced by introducing a new feature called statement-level recompilation. When SQL Server 2005 recompiles stored procedures, only the statement that [...]
This blog post is in response to T-SQL Tuesday #004: IO by Mike Walsh. The subject of this month is IO. Here is my quick blog post on how a Cover Index can Improve Performance by Reducing IO.
Let us kick off this post with a disclaimer about indexes. Indexing is a very complex subject and [...]
Recently I wrote SQL SERVER – INSERT TOP (N) INTO Table – Using Top with INSERT, where I mentioned how TOP works with INSERT. I mentioned that I would write about the performance in the next article. Here is the performance comparison of the two options.
It is clear that performance of the Insert TOP (N) [...]
I have recently been informed that Microsoft will be guaranteeing the performance of SQL Server. Yes, that's right: Microsoft will guarantee that you will get better performance out of SQL Server than any other competitor's system.

However, on the flip side, they are also saying that end users will have to guarantee the performance of SQL Server if they want to use the next release of SQL Server, targeted for 2011 or 2012.

It appears that a recent recruit, Mark Smith from Newcastle, England, will be heading a new team that will be making sure you are running SQL Server on adequate hardware and making sure you are developing your applications according to best practices. The Performance Enforcement Team (SQLPET) will be a global group headed by Mark that will oversee two other groups: the existing Customer Advisory Team (SQLCAT) and another new team, the Design and Operation Group (SQLDOG).

Mark informed me that the team was originally thought out during Yukon and was going to be an independent body that went round to customers making sure they didn't suffer performance problems. However, it was felt that they needed to wait a few releases until SQL Server was really there. The original Yukon Independent Performance Enhancement Team (YIPET) has now become the SQL Performance Enforcement Team (SQLPET). When challenged about the change from enhancement to enforcement, Mark was unwilling to comment. An anonymous source suggested that "...Microsoft is sick of the bad press SQL Server gets for performance when the performance problems are normally down to people developing applications badly and using inadequate hardware..."

It's true that it is very easy to install and run SQL Server, unlike other RDBMS systems, and the flip side is that it's also easy to get into performance problems due to under-specified hardware and bad design.

It's not yet confirmed whether this enforcement will apply to all SKUs or just the high-end ones. I would personally welcome some level of architectural and hardware advice service that clients would be able to turn to, in order to justify getting the appropriate hardware at the start of a project and not a year in, when it's often too late.
We will go over how to optimize a stored procedure by making simple changes in the code. Please note there are many more tips, which we will cover in future articles.
Include the SET NOCOUNT ON statement: With every SELECT and DML statement, SQL Server returns a message that indicates the number of affected rows [...]
SQL Server 2008 R2 offers an impressive array of capabilities for developers that build upon key innovations introduced in SQL Server 2008. The SQL Server 2008 R2 Update for Developers Training Kit is ideal for developers who want to understand how to take advantage of the key improvements introduced in SQL [...]
The white paper describes load strategies for achieving high-speed data modifications of a Microsoft SQL Server database.
“Bulk Load Methods” and “Other Minimally Logged and Metadata Operations” provide an overview of two key and interrelated concepts for high-speed data loading: bulk loading and metadata operations.
After this background knowledge, the white paper describes how these methods can be [...]