Search Results

Search found 61580 results on 2464 pages for 'document based database'.


  • Can't save form content to database

    - by dana
    I'm trying to save a 100-character post from the user in a minimal 'microblog' application. My code seems not to have any mistakes, but it doesn't work; the problem is in views.py, where I can't save the foreign key to the User table. models.py looks like this:

        class NewManager(models.Manager):
            def create_post(self, post, username):
                new = self.model(post=post, created_by=username)
                new.save()
                return new

        class New(models.Model):
            post = models.CharField(max_length=120)
            date = models.DateTimeField(auto_now_add=True)
            created_by = models.ForeignKey(User, blank=True)
            objects = NewManager()

        class NewForm(ModelForm):
            class Meta:
                model = New
                fields = ['post']
                # widgets = {'post': Textarea(attrs={'cols': 80, 'rows': 20})}

    and views.py looks like this:

        def save_new(request):
            if request.method == 'POST':
                created_by = User.objects.get(created_by=user)
                date = request.POST.get('date', '')
                post = request.POST.get('post', '')
                new_obj = New(post=post, date=date, created_by=created_by)
                new_obj.save()
                return HttpResponseRedirect('/')
            else:
                form = NewForm()
            return render_to_response('news/new_form.html', {'form': form},
                                      context_instance=RequestContext(request))

    I didn't include the imports here; they're done right anyway. When I try to save, it says: local variable 'created_by' referenced before assignment. If I put created_by as a parameter, the save needs more parameters. It is really weird. Help please!
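
    A likely fix (a minimal sketch, not the poster's final code) is to let the ModelForm do the saving and to take the user from request.user, which assumes Django's auth middleware is enabled and the visitor is logged in:

        from django.contrib.auth.decorators import login_required

        @login_required
        def save_new(request):
            if request.method == 'POST':
                form = NewForm(request.POST)
                if form.is_valid():
                    new_obj = form.save(commit=False)
                    new_obj.created_by = request.user  # FK to the logged-in User
                    new_obj.save()                     # date is filled by auto_now_add
                    return HttpResponseRedirect('/')
            else:
                form = NewForm()
            return render_to_response('news/new_form.html', {'form': form},
                                      context_instance=RequestContext(request))

    The original failure comes from User.objects.get(created_by=user): user is never assigned in the view, and User has no created_by field, so the lookup fails before created_by is ever bound.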


  • DB2: increased bufferpool size and compressed tables don't equal better performance. Why?

    - by Mestika
    Hi, I'm working on tuning and improving the performance of my IBM DB2 9.7 database. I've been searching the net for the last couple of days and learned that if I create my tables in COMPRESS mode and create one more bufferpool, giving both bufferpools 1024 MB, then the performance of my queries should increase because of fewer I/Os to the disks. However, when I run my timing analysis, performance decreases. I added the new additions to my regular database with the indexes I've used all along. Every time I search Google I come across the statement that an increased bufferpool size, several bufferpools, and table compression should give better performance. I'm very puzzled by this totally unexpected result. Is there some tuning mechanism I've forgotten, or does anyone have an explanation for this odd behavior? Sincerely, Mestika
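
    For reference, the setup described usually amounts to something like this (a sketch; the bufferpool, tablespace, and table names are hypothetical, and the size assumes a 4 KB page size). One frequently missed step is that after turning compression on, the table needs a REORG to build its compression dictionary and a RUNSTATS so the optimizer sees the new layout; skipping either is a common reason a compressed table benchmarks slower, as is creating a bufferpool that no tablespace is actually assigned to:

        CREATE BUFFERPOOL bp_big IMMEDIATE SIZE 262144 PAGESIZE 4K;  -- 262144 x 4 KB = 1 GB
        ALTER TABLESPACE userspace1 BUFFERPOOL bp_big;               -- helps only tablespaces that use it
        ALTER TABLE myschema.sales COMPRESS YES;
        REORG TABLE myschema.sales;                                  -- builds the row-compression dictionary
        RUNSTATS ON TABLE myschema.sales WITH DISTRIBUTION AND INDEXES ALL;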


  • Create a jar file with images and database

    - by Samurai
    Hi, I am using the NetBeans IDE and I keep the images my project uses in a folder named Images. When I build the jar, it doesn't include those images. The code I am using to set an image is:

        buttonObj.setIcon(new ImageIcon("\\Images\\a.jpg"));

    Any help, please?
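
    The usual fix is to put the Images folder on the classpath (in NetBeans, under the source tree so it gets copied into the jar) and load through the classloader instead of a file path, since ImageIcon given a path string reads from the filesystem and can't see inside a jar. A sketch, assuming Images ends up at the root of the jar:

        javax.swing.ImageIcon icon =
                new javax.swing.ImageIcon(getClass().getResource("/Images/a.jpg"));
        buttonObj.setIcon(icon);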


  • Database Compression in Python

    - by user551832
    I have hourly logs like:

        user1:joined
        user2:log out
        user1:added pic
        user1:added comment
        user3:joined

    I want to compress all the flat files down to one file. There are around 30 million users in the logs, and I just want the latest log entry for each user, so the end result would look like:

        user1:added comment
        user2:log out
        user3:joined

    My first attempt, on a small scale, was just to use a dict, like log['user1'] = "added comment". Will a dict of 30 million key/value pairs have a giant memory footprint, or should I use something like SQLite to store them and then just dump the contents of the SQLite table back into a file?
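
    A minimal sketch of the dict approach (the file names are hypothetical; it assumes each line is user:action and that later lines are newer):

        latest = {}
        with open('hourly.log') as f:
            for line in f:
                user, _, action = line.rstrip('\n').partition(':')
                latest[user] = action          # later entries overwrite earlier ones
        with open('compressed.log', 'w') as out:
            for user, action in latest.items():
                out.write('%s:%s\n' % (user, action))

    A CPython dict of 30 million short string pairs will likely need several GB of RAM, so if that doesn't fit, SQLite with a PRIMARY KEY on the user column (INSERT OR REPLACE per line) is a reasonable fallback that keeps the same logic on disk.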


  • Locking DB w/ Large Reads (Ruby-on-Rails/Heroku)

    - by Splashlin
    Currently I have a Web API running on Heroku that is constantly writing information we're collecting from other data sources (there's about half a GB of data at the moment, and it's growing very quickly). We're looking to add a reporting system on top of the current database that we can use to extract useful information. The problem is that while we're running reports we're locking the DB, and any other sites communicating with it time out. Does anyone have a solution for this type of issue? Amazon RDS seems to have some interesting database replication features, but I don't know if they will solve my problem. Any advice would be greatly appreciated. Thanks
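
    One common approach is to run the reports against a read-only replica, so long queries never compete with the writers. On Heroku Postgres that is a follower database (a sketch; the plan name and follower URL are placeholders, and the exact CLI flags may differ by version, so check the current docs):

        heroku addons:create heroku-postgresql:standard-0 --follow HEROKU_POSTGRESQL_MAIN_URL

    Pointing the reporting code at the follower's connection string leaves the primary free for writes; Amazon RDS read replicas solve the same problem the same way.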


  • Select past date from database, x days before a given date

    - by Pr0no
    Consider the following table, daterange:

        _date       trading_day
        -----------------------
        2011-08-01  1
        2011-07-31  0
        2011-07-30  0
        2011-07-29  1
        2011-07-28  1
        2011-07-27  1
        2011-07-26  1
        2011-07-25  1
        2011-07-24  0
        2011-07-23  0
        2011-07-22  1
        2011-07-21  1
        2011-07-20  1
        2011-07-19  1
        2011-07-18  1
        2011-07-17  0

    I need a query that returns the _date x days before a given _date. When counting back, dates with trading_day = 0 should be ignored. A few examples:

        input                    | output
        -------------------------+------------
        1 day before 2011-07-19  | 2011-07-18
        2 days before 2011-08-01 | 2011-07-28  (trading_day = 0 doesn't count)
        3 days before 2011-07-29 | 2011-07-26

    The first one is easy:

        SELECT _date FROM daterange
        WHERE trading_day = 1 AND _date < '2011-07-19'
        ORDER BY _date DESC
        LIMIT 1

    But I don't know how to query for the other examples. Do you?
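
    The same shape generalizes with OFFSET (a sketch, assuming a dialect that supports LIMIT/OFFSET, e.g. MySQL, PostgreSQL, or SQLite): the date x days before a given date is just the x-th trading day walking backwards, i.e. OFFSET x - 1:

        -- 3 days before 2011-07-29  ->  2011-07-26
        SELECT _date FROM daterange
        WHERE trading_day = 1 AND _date < '2011-07-29'
        ORDER BY _date DESC
        LIMIT 1 OFFSET 2;            -- OFFSET x - 1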


  • What is the best way to sync multiple client SQL Servers to one MS SQL Server 2005?

    - by user605055
    I have several client databases that are used by my Windows application, and I want to send their data to an online web site. The client database structure and the server database structure are different, because we need to add a client ID column to some tables in the server database. The way I sync them now is a second application that uses C# bulk copy inside a transaction, but the server's SQL Server is too busy and parallel tasks cannot run. The solution I am working on: use AFTER INSERT, UPDATE, DELETE triggers to record changes in one table, build a SQL query from that, and send it to a web service to sync the data. But I must send all the data first, and it is a huge data set (bigger than 16 MB). I think I can't use replication, because the structures and primary keys are different.
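
    A minimal sketch of the change-capture part (T-SQL; the queue table, the Customers table, and its key column are all hypothetical):

        CREATE TABLE SyncQueue (
            Id        INT IDENTITY PRIMARY KEY,
            TableName SYSNAME  NOT NULL,
            RowKey    INT      NOT NULL,
            Action    CHAR(1)  NOT NULL,                 -- I, U, or D
            ChangedAt DATETIME NOT NULL DEFAULT GETDATE()
        );
        GO
        CREATE TRIGGER trg_Customers_Sync ON Customers
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO SyncQueue (TableName, RowKey, Action)
            SELECT 'Customers', i.CustomerId,
                   CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
            FROM inserted i
            UNION ALL
            SELECT 'Customers', d.CustomerId, 'D'
            FROM deleted d
            WHERE NOT EXISTS (SELECT 1 FROM inserted);
        END

    The sync job then reads SyncQueue in batches, which also sidesteps the size limit on the initial send: the first full load can be chunked the same way.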



  • Optimize MySQL database query

    - by rajeeesh
    I have a commenting application on my web site. The comments are stored in a MySQL table with a structure like this:

        id | comment    | user | created_date
        ---+------------+------+---------------------
        12 | comment he | 1245 | 2012-03-30 12:15:00

    I need to run a query listing all the comments after a specific time, i.e. a query like this:

        SELECT * FROM comments
        WHERE created_date > '2012-03-29 12:15:00'
        ORDER BY created_date DESC

    It's working fine. My question: if this table grows to 1-2 lakh (100,000-200,000) rows, is this query still sufficient, or will it become slow? In most cases I have to show the last 2 days of data, plus periodically (at 10-minute intervals) check for updates via Ajax from this table. Please help. Thanks
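
    At that size the query itself is fine as long as created_date is indexed; without an index MySQL scans the whole table on every poll. A sketch:

        ALTER TABLE comments ADD INDEX idx_created_date (created_date);

    With the index in place, the range condition only touches rows newer than the cutoff, so the 10-minute Ajax poll stays cheap even as the table grows.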


  • Should I use a huge composite primary key or just a unique id?

    - by Jack
    I have been scraping a particular site and storing the results in a database. My original assumptions about the data allowed a schema where I could use fairly reasonable composite primary keys (usually containing only 2 or 3 fields), but as time went on I realized those assumptions were wrong: my primary keys were not as unique as I thought, so I have slowly been expanding them to contain more and more fields. In fact, I have recently come to believe that their database has no constraints whatsoever. Just today I finally expanded the primary key for one of my tables to contain every field in that table, and I thought now would be a good time to ask: is it better to add an auto-incrementing column as a unique id, or to leave a composite primary key on the entire table?
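
    A common compromise (a sketch in MySQL syntax; the table and column names are hypothetical) is a surrogate auto-increment primary key plus a UNIQUE index over the natural fields, so duplicate rows are still rejected but joins and foreign keys stay small:

        ALTER TABLE scraped_items
            DROP PRIMARY KEY,
            ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST,
            ADD UNIQUE KEY uniq_natural (source_url, item_code, fetched_at);

    If the "natural key" has grown to cover every column, though, that is usually a sign the data has no reliable key at all, and the uniqueness constraint may belong on a hash of the whole row instead.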


  • How to concatenate text onto an existing database entry?

    - by Starx
    I have a table whose fields are id, name, link. The link field holds the name of a page, e.g. link = 'index.php'. Now I want to update this field and add 'page=' in front of 'index.php', and I would like to update every entry in my table this way. My desired SQL syntax would be something like:

        UPDATE mytable SET link = 'page=' + <existing value of link> WHERE 1;

    I am using WHERE 1 to match every row. Anyone know how to accomplish this?
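
    In MySQL the + operator does numeric addition, not string concatenation; CONCAT() is what's needed, and an UPDATE with no WHERE clause already applies to every row (a sketch; in SQL Server the equivalent would be SET link = 'page=' + link):

        UPDATE mytable SET link = CONCAT('page=', link);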


  • CodeIgniter Manual Database Connection

    - by Ajith
    I am working in PHP with the CodeIgniter framework. By default, CodeIgniter uses persistent database connections, and I don't want that kind of connection; I need to connect manually. Is that possible in CodeIgniter? If anybody knows, please help me move forward, and a little explanation would be appreciated too.
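
    Both pieces are configurable (a sketch; 'default' is CodeIgniter's stock connection group). Persistent connections are governed by the pconnect flag in application/config/database.php, and a connection can be loaded manually instead of autoloaded:

        // application/config/database.php
        $db['default']['pconnect'] = FALSE;   // plain connect instead of a persistent one

        // in a controller: load the connection by hand and get the object back
        $mydb = $this->load->database('default', TRUE);
        $query = $mydb->query('SELECT * FROM users');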


  • Array as struct database?

    - by user2985179
    I have a struct that reads data from the user:

        typedef struct {
            int seconds;
        } Time;

        typedef struct {
            Time time;
            double distance;
        } Training;

        Training input;
        scanf("%d %lf", &input.time.seconds, &input.distance);

    This scanf will be looped, and the user can input different data every time. I want to store this data in an array for later use; I think I want something like arr[0].seconds and arr[0].distance. I tried to store the entered data in an array, but it didn't really work at all:

        Training data[10];
        data[10].seconds = input.time.seconds;
        data[10].distance = input.distance;

    The data should be wiped when the program closes, and that's how I'd like it to be. So I want it stored in an array; no files or databases!
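
    Two things go wrong in that attempt: data[10] is one past the end of a 10-element array (valid indices run 0 through 9), and a Training has no seconds member of its own, only time.seconds. A sketch that keeps a count and copies each entry into the next free slot (the capacity of 10 is just an assumption):

        #include <stdio.h>

        typedef struct { int seconds; } Time;
        typedef struct { Time time; double distance; } Training;

        int main(void) {
            Training data[10];
            int count = 0;
            Training input;
            while (count < 10 &&
                   scanf("%d %lf", &input.time.seconds, &input.distance) == 2) {
                data[count++] = input;               /* copies the whole struct */
            }
            for (int i = 0; i < count; i++) {
                printf("%d s, %.2f\n", data[i].time.seconds, data[i].distance);
            }
            return 0;
        }

    The array lives on the stack, so everything disappears when the program exits, exactly as wanted.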


  • Remove specific string from multiple database rows in SQL

    - by Scott
    I have a column that contains page titles, each with the website name appended to the end (e.g. Product Name | Company Name Inc.). I would like to remove the " | Company Name Inc." part from multiple rows simultaneously. What SQL commands (or single query) would let me accomplish this? To re-illustrate, I want to convert multiple rows of one column from this:

        Product Name | Company Name Inc.

    to this:

        Product Name
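
    REPLACE() handles this in a single statement in both MySQL and SQL Server (a sketch; the table and column names are hypothetical, and the literal must match exactly, including the space before the pipe):

        UPDATE pages
        SET title = REPLACE(title, ' | Company Name Inc.', '');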


  • How can I bind a database field value to a hidden field inside a GridView?

    - by Dorababu
    I use the following to bind a field from the table to a hidden field inside a GridView, but I am getting the error: System.Data.DataRowView does not contain a property with the name 'AccountType'. This is how I assigned it:

        <asp:TemplateField>
            <ItemTemplate>
                <asp:HiddenField ID="hdnAccntType" runat="Server"
                                 Value='<%# Eval("AccountType") %>' />
            </ItemTemplate>
        </asp:TemplateField>

    Is this correct, or do I have to make corrections?
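
    The template markup itself is fine; that error almost always means the query feeding the GridView doesn't select an AccountType column. A sketch (the data source, connection string name, and other columns are hypothetical):

        <asp:SqlDataSource ID="dsAccounts" runat="server"
            ConnectionString="<%$ ConnectionStrings:MainDB %>"
            SelectCommand="SELECT AccountId, AccountName, AccountType FROM Accounts" />

    Once the bound source actually returns AccountType, the Eval("AccountType") binding resolves.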

