Search Results

Search found 6630 results on 266 pages for 'cname record'.

  • about the post_save signal and created argument

    - by panchicore
    The docs say: post_save (django.db.models.signals.post_save) - created: a boolean; True if a new record was created. And I have this:

        from django.db.models.signals import post_save

        def handle_new_user(sender, instance, created, **kwargs):
            print "--------> save() " + str(created)

        post_save.connect(handle_new_user, sender=User)

    When I do this in the shell:

        u = User(username="cat")
        u.save()
        >>> --------> save() True
        u.username = "dog"
        u.save()
        >>> --------> save() True

    I expect a "--------> save() False" when I save() the second time, because it is an update, no?
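
    For reference, a minimal sketch of how the created flag is documented to behave, assuming the stock django.contrib.auth User model (the dispatch_uid string is just an illustrative name):

        from django.contrib.auth.models import User
        from django.db.models.signals import post_save

        def log_user_save(sender, instance, created, **kwargs):
            # 'created' is True when the row was inserted,
            # False when an existing row was updated.
            print("--------> save() %s" % created)

        # dispatch_uid keeps the handler from being connected more than once
        # if this module happens to be imported repeatedly.
        post_save.connect(log_user_save, sender=User, dispatch_uid="log_user_save")

    With that wiring, the second save() on an existing user is expected to report False.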

    Read the article

  • What information do you capture when your software crashes in the field?

    - by Russ
    I am working on rewriting my unexpected-error handling process, and I would like to ask the community: what information do you capture, both automatically and manually, when software you have written crashes? Right now I capture a few items, some of which are:

    Automatic:
        - Name of the app that crashed
        - Version of the app that crashed
        - Stack trace
        - Operating system version
        - RAM used by the application
        - Number of processors
        - Screen shot (only on non-public applications)
        - User name and contact information (from Active Directory)

    Manual:
        - What context the user is in (i.e. what company, tech support call number, RA number, etc.)
        - What did the user expect to happen? (Typical response: "Not to crash")
        - Steps to reproduce

    What other bits of information do you capture that help you discover the true cause of an application's problem, especially given that most users simply mash the keyboard when asked to tell you what happened? For the record I'm using C#, WPF and .NET version 4, but I don't necessarily want to limit myself to those. Related: http://stackoverflow.com/questions/1226671/what-to-collect-information-when-software-crashes Related: http://stackoverflow.com/questions/701596/what-should-be-included-in-the-state-of-the-art-error-and-exception-handling-stra
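
    The poster's app is C#/WPF, but just to make the "automatic" list concrete, here is a language-agnostic sketch in Python of gathering those fields (function and field names are made up for illustration):

        import getpass
        import os
        import platform
        import sys
        import traceback

        def collect_crash_report(exc, app_name, app_version):
            """Gather the kind of 'automatic' fields listed above."""
            return {
                "app_name": app_name,
                "app_version": app_version,
                "stack_trace": "".join(
                    traceback.format_exception(type(exc), exc, exc.__traceback__)),
                "os_version": platform.platform(),
                "runtime_version": sys.version,
                "cpu_count": os.cpu_count(),
                "user": getpass.getuser(),
                # RAM usage and screen shots are platform-specific and omitted here
            }

        try:
            1 / 0
        except Exception as exc:
            report = collect_crash_report(exc, "MyApp", "1.0")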

    Read the article

  • ea: import requirements from csv file

    - by bolekprez
    During an import of requirements from a CSV file I get this message: Bad object type when creating new record of type '' The file I was trying to import:

        GUID$Name$Notes$Scope
        {BF467CF6-FF97-4dd4-894C-3F09E713678C}$NameOfReq$description$Public
        {71B26F9A-5418-499e-B635-F2DB158D3FF1}$Requirement1$$Public
        {0}$Requir1$blah$Public

    The first 2 lines (plus the header) come from existing requirements and there is no problem importing them. The last line should create a new requirement object in Enterprise Architect, but instead I get the message mentioned above. Any solution? What should a proper file for creating new requirements via CSV import look like?

    Read the article

  • Weblogic Error - Method not supported : Statement.cancel

    - by manetic
    I am running an application on WebLogic 9.2 MP3 and am currently having a problem with the connection pool:

        ERROR - UserBean retrieving user record. weblogic.jdbc.extensions.PoolLimitSQLException: weblogic.common.resourcepool.ResourceLimitException: No resources currently available in pool MyApp Data Source to allocate to applications, please increase the size of the pool and retry..

    I also keep getting the error message below, saying "Method not supported : Statement.cancel()", which I think is the cause of the error above.

        <Error> <JDBC> <BEA-001131> <Received an exception when closing a cached statement for the pool "MyApp Data Source": java.sql.SQLException: Method not supported : Statement.cancel()..>

    I went through the app source code; this method doesn't seem to be used by the app at all, so I thought it might be something to do with WebLogic itself. Does anyone have any idea how to fix this error?

    Read the article

  • Container for database-like searches

    - by Milan Babuškov
    I'm looking for an STL, Boost, or similar container to use the same way indexes are used in databases to search for records with a query like:

        select * from table1 where field1 starting with 'X';

    or

        select * from table1 where field1 like 'X%';

    I thought about using std::map, but I cannot because I need to search for fields that "start with" some text, not those that are "equal to" it. I could create a sorted vector or list and use binary search (breaking the set in 2 at each step by reading the element in the middle and seeing if it's more or less than 'X'), but I wonder if there is some ready-made container I could use without reinventing the wheel?
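
    The poster wants a C++ container, but the "sorted container plus binary search on the prefix" idea is quick to sketch (shown here in Python with bisect; the analogous C++ tool would be lower_bound on a sorted container):

        import bisect

        def prefix_range(sorted_keys, prefix):
            """Return the slice of sorted_keys whose entries start with prefix."""
            lo = bisect.bisect_left(sorted_keys, prefix)
            # '\uffff' is a high sentinel, so the upper bound lands just past
            # every key that begins with the prefix.
            hi = bisect.bisect_right(sorted_keys, prefix + "\uffff")
            return sorted_keys[lo:hi]

        names = sorted(["Xavier", "Xenia", "Alice", "Xander", "Bob"])
        print(prefix_range(names, "X"))   # ['Xander', 'Xavier', 'Xenia']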

    Read the article

  • How to 'insert if not exists' in MySQL?

    - by warren
    I started by googling and found this article, which talks about mutex tables. I have a table with ~14 million records. If I want to add more data in the same format, is there a way to ensure the record I want to insert does not already exist, without using a pair of queries (i.e. one query to check and one to insert if the result set is empty)? Does a unique constraint on a field guarantee the insert will fail if the value is already there? It seems that with merely a constraint, when I issue the insert via PHP, the script croaks.
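
    With a unique index in place, MySQL also offers single-statement forms - INSERT IGNORE and INSERT ... ON DUPLICATE KEY UPDATE - so the check-then-insert pair isn't needed. A rough sketch of both (Python with an assumed pymysql driver; the table, columns, and credentials are made up):

        import pymysql

        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               database="mydb")   # hypothetical credentials
        try:
            with conn.cursor() as cur:
                # Relies on a UNIQUE index on url: duplicate rows are silently skipped.
                cur.execute(
                    "INSERT IGNORE INTO bookmarks (url, title) VALUES (%s, %s)",
                    ("http://example.com", "Example"))

                # Alternative: keep the row but refresh a column when it already exists.
                cur.execute(
                    "INSERT INTO bookmarks (url, title) VALUES (%s, %s) "
                    "ON DUPLICATE KEY UPDATE title = VALUES(title)",
                    ("http://example.com", "Example"))
            conn.commit()
        finally:
            conn.close()

    Without IGNORE or ON DUPLICATE KEY, a unique constraint makes the duplicate insert raise an error, which is presumably what the PHP script is tripping over; catching that error is the other workable route.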

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. They say I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long polling limits. I know this limit is hit far before the memory or CPU is used up. From experimentation, I've already verified that the number of long polls open is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • How do I find the user that has both a cat and a dog?

    - by brad
    I want to do a search across 2 tables that have a many-to-one relationship, e.g.

        class User < ActiveRecord::Base
          has_many :pets
        end

        class Pet < ActiveRecord::Base
          belongs_to :user
        end

    Now let's say I have some data like so:

        users
        id  name
        1   Bob
        2   Joe
        3   Brian

        pets
        id  user_id  animal
        1   1        cat
        2   1        dog
        3   2        cat
        4   3        dog

    What I want to do is create an ActiveRecord query that will return the users that have both a cat and a dog (i.e. user 1, Bob). My attempt at this so far is

        User.joins(:pets).where('pets.animal = ? AND pets.animal = ?', 'dog', 'cat')

    Now I understand why this doesn't work: it's looking for a single pet that is both a dog and a cat, so it returns nothing. I don't know how to modify it to give me the answer I want, however. Does anyone have any suggestions? This seems like it should be easy; it doesn't seem like an especially unusual situation.
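
    One common way to express "has both" in plain SQL is to group the pets by owner and require both animals to appear. A self-contained sketch of that grouping idea (Python/sqlite3 here purely to show the SQL; ActiveRecord can express the same GROUP BY/HAVING through its group and having methods):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE pets  (id INTEGER PRIMARY KEY, user_id INTEGER, animal TEXT);
            INSERT INTO users VALUES (1,'Bob'),(2,'Joe'),(3,'Brian');
            INSERT INTO pets  VALUES (1,1,'cat'),(2,1,'dog'),(3,2,'cat'),(4,3,'dog');
        """)

        rows = conn.execute("""
            SELECT u.id, u.name
            FROM users u
            JOIN pets p ON p.user_id = u.id
            WHERE p.animal IN ('cat', 'dog')
            GROUP BY u.id, u.name
            HAVING COUNT(DISTINCT p.animal) = 2
        """).fetchall()

        print(rows)   # [(1, 'Bob')]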

    Read the article

  • Normalise this Table?

    - by Abs
    Hello all, I am creating a social bookmarking app and am having second thoughts about the DB design in the middle of development. Should I normalise the bookmarks table and move the tag columns I have into a separate table? I have 10 tags per bookmark and therefore 10 columns per record (per bookmark). It seems to me that breaking the table into two would just mean I would have to do a join, whereas the way I currently have it, it's a straight select - but the table doesn't feel right...? Thanks all
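
    For comparison, the usual normalised shape is a tags table plus a join table, so a bookmark can carry any number of tags instead of ten fixed columns. A minimal sketch (sqlite3 for brevity; all names are illustrative):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE bookmarks (id INTEGER PRIMARY KEY, url TEXT NOT NULL);
            CREATE TABLE tags      (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
            -- one row per (bookmark, tag) pair instead of ten tag columns
            CREATE TABLE bookmark_tags (
                bookmark_id INTEGER REFERENCES bookmarks(id),
                tag_id      INTEGER REFERENCES tags(id),
                PRIMARY KEY (bookmark_id, tag_id)
            );
        """)

        # "all bookmarks tagged 'python'" is then the join the poster is weighing up:
        rows = conn.execute("""
            SELECT b.url
            FROM bookmarks b
            JOIN bookmark_tags bt ON bt.bookmark_id = b.id
            JOIN tags t           ON t.id = bt.tag_id
            WHERE t.name = ?
        """, ("python",)).fetchall()
        print(rows)   # [] until rows are inserted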

    Read the article

  • Extending ClaimsIdentity in MVC3

    - by Steoates
    I've got my claims set up with MVC3 using Azure and everything is going well. What I need to do now is extend the ClaimsIdentity that's in the current thread / HTTP context and add my own information (DOB, address... that sort of stuff), so my question is: where is the best place to do this? Any examples would be great. I presume that when the user is authenticated I'd then have to go to the DB, pull back the relevant record for the user, and add it to the custom ClaimsIdentity object? Cheers, Ste.

    Read the article

  • ActiveRecordStore InvalidAuthenticityToken

    - by Andy
    I have recently been using the cookie store and I want to transition to the active record store. However, I keep getting an InvalidAuthenticityToken. After deleting my cookies I was able to access the page just fine, but I don't want all my users to come to my page, get a huge error, and then have to figure out that I want them to delete their cookies. So I added a delete_cookie filter to my ApplicationController:

        after_filter :delete_cookie

        def delete_cookie
          puts "deleting cookies"
          cookies.to_hash.each_pair do |k, v|
            puts k
            cookies.delete(k)
          end
        end

    It doesn't seem to be working correctly, though: I still see my cookie after visiting any page. I feel like there really should be a better solution, but I can't seem to find any so far. Any hints?

    Read the article

  • Update through Linq-to-SQL DataContext not working.

    - by Puneet Dudeja
    I have created a small test for updating a table using a Linq-to-SQL DataContext, as follows:

        using (pessimistic_exampleDataContext db = new pessimistic_exampleDataContext())
        {
            var query = db.test1s.First(t => t.a == 3);
            if (query.b == "a")
                query.b = "b";
            else
                query.b = "a";
            db.SubmitChanges();
        }

    But after executing this code in the Main method of a console application, when I select records from the table, the record is not updated. I have stepped through the code and it is not even throwing any exception. What can the problem be?

    Read the article

  • Database design: How should I add information which can apply to several tables?

    - by Stefan
    I am constructing a database system using MySQL; it will be an application of about 20 tables. The system contains information on farmers: we work with organic certification and need to record a lot of info for that. In my system there are related parent-child tables for farmers, producing years and fields/areas - a simple representation of the real world in which farmers farm crops on their fields. I now need to add several status flags at each of these levels: a farmer can be certified, or his field can be, or the specific year can be; each of these flags has several states and can occur a number of times. The obvious solution would be to add a child table to every one of these tables and define the states there. What I wonder is whether there is an easier way to do this, to avoid ending up with too many tables. Where/how would it be best practice to keep that data?
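
    One commonly used alternative to a status child table per level is a single shared status table keyed by entity type and id. A rough sketch of that shape (all names are illustrative, not the poster's schema), with the usual trade-off noted in the comments:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            -- one row per flag occurrence, whichever level it applies to;
            -- the trade-off is that entity_id cannot carry a real foreign key,
            -- which is why many designs keep one status table per parent instead
            CREATE TABLE status_flags (
                id          INTEGER PRIMARY KEY,
                entity_type TEXT NOT NULL CHECK (entity_type IN ('farmer','field','year')),
                entity_id   INTEGER NOT NULL,   -- id in farmers/fields/years
                flag        TEXT NOT NULL,      -- e.g. 'certified'
                state       TEXT NOT NULL,      -- e.g. 'pending', 'granted', 'revoked'
                noted_on    TEXT                -- when the state was recorded
            );
        """)

        conn.executemany(
            "INSERT INTO status_flags (entity_type, entity_id, flag, state) "
            "VALUES (?, ?, ?, ?)",
            [("farmer", 7, "certified", "granted"),
             ("field", 12, "certified", "pending")])
        print(conn.execute(
            "SELECT entity_type, entity_id, state FROM status_flags").fetchall())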

    Read the article

  • php search and replace

    - by Dave
    I am trying to merge database fields into a document (RTF) using PHP, i.e. if I have a document that starts:

        Dear Sir,
        Customer Name: [customer_name], Date of order: [order_date]

    then after retrieving the appropriate database record I can use a simple search and replace to insert each database field in the right place. So far so good. I would, however, like to have a little more control over the data before it is replaced. For example, I may wish to title-case it, or convert a delimited string into a list with carriage returns. I would therefore like to be able to add extra formatting commands to the field to be replaced, e.g.:

        Dear Sir,
        Customer Name: [customer_name, TC], Date of order: [order_date, Y/M/D]

    There may be more than one formatting command per field. Is there a way that I can search for these strings? The format of the strings is not set in stone, so if I have to change the format then I can. Any suggestions appreciated.
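
    The question is about PHP, but the placeholder format is the interesting part; here is a rough sketch in Python of parsing [field, CMD1, CMD2] tokens with one regex and applying each command in turn (the command names are the poster's, the dispatch table is made up). In PHP the same idea maps onto preg_replace_callback.

        import re

        TOKEN = re.compile(r"\[(\w+)((?:\s*,\s*[\w/]+)*)\]")

        def title_case(value):
            return value.title()

        # illustrative dispatch table; 'Y/M/D' would get its own date formatter
        FORMATTERS = {"TC": title_case}

        def merge(template, record):
            def replace(match):
                field = match.group(1)
                commands = [c.strip() for c in match.group(2).split(",") if c.strip()]
                value = str(record.get(field, ""))
                for command in commands:
                    value = FORMATTERS.get(command, lambda v: v)(value)
                return value
            return TOKEN.sub(replace, template)

        template = "Customer Name: [customer_name, TC], Date of order: [order_date]"
        print(merge(template, {"customer_name": "john smith",
                               "order_date": "2010-04-10"}))
        # Customer Name: John Smith, Date of order: 2010-04-10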

    Read the article

  • Does SQL Server 2005 pass error message numbers back to the ASP.NET application?

    - by Duke
    I'd like to get the message number and severity level information from SQL Server upon execution of an erroneous query. For example, when a user attempts to delete a row referenced by another record and the cascade relationship is "no action", I'd like the application to be able to check for error message 547 ("The DELETE statement conflicted with the REFERENCE constraint...") and return a user-friendly, localized message to the user. When running such a query directly on SQL Server, the following message is printed:

        Msg 547, Level 16, State 0, Line 1
        <Error message...>

    In an ASP.NET app, is this information available in an event handler parameter or elsewhere? Also, I don't suppose anyone knows where I can find a definitive reference of SQL Server message numbers?

    Read the article

  • How can the last command's wall time be put in the Bash prompt?

    - by Mr Fooz
    Is there a way to embed the last command's elapsed wall time in a Bash prompt? I'm hoping for something that would look like this:

        [last: 0s][/my/dir]$ sleep 10
        [last: 10s][/my/dir]$

    Background: I often run long data-crunching jobs and it's useful to know how long they've taken so I can estimate how long it will take for future jobs. For very regular tasks, I go ahead and record this information rigorously using appropriate logging techniques. For less-formal tasks, I'll just prepend the command with time. It would be nice to automatically "time" every single interactive command and have the timing information printed in a few characters rather than 3 lines.

    Read the article

  • How to stream a WAV file?

    - by jonasb
    I'm writing an app where I record audio and upload the audio file over the web. In order to speed up the upload I want to start uploading before I've finished recording. The file I'm creating is a WAV file. My plan was to use multiple data chunks. So instead of the normal encoding (RIFF, fmt , data) I’m using (RIFF, fmt , data, data, ..., data). The first issue is that the RIFF header wants the total length of the whole file, but that is of course not known when streaming the audio (I’m now using an arbitrary number). The other problem is that I'm not sure if it's valid since Audacity doesn't recognise the file, and Windows Media Player opens the file but plays only a very small part. I've been reading WAV specs but haven’t found an answer. Any suggestions?
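
    For what it's worth, a minimal sketch of the header bookkeeping involved: the RIFF and data sizes are written as placeholders first and patched in place once the total length is known (standard single-data-chunk layout, 16-bit PCM assumed):

        import struct

        def write_wav_header(f, sample_rate=44100, channels=1, bits=16, data_size=0):
            """Write the canonical 44-byte WAV header; sizes may be placeholders."""
            byte_rate = sample_rate * channels * bits // 8
            block_align = channels * bits // 8
            f.write(b"RIFF")
            f.write(struct.pack("<I", 36 + data_size))      # RIFF chunk size (patched later)
            f.write(b"WAVE")
            f.write(b"fmt ")
            f.write(struct.pack("<IHHIIHH", 16, 1, channels, sample_rate,
                                byte_rate, block_align, bits))
            f.write(b"data")
            f.write(struct.pack("<I", data_size))           # data chunk size (patched later)

        def patch_sizes(path, data_size):
            """Once recording has finished, rewrite the two size fields in place."""
            with open(path, "r+b") as f:
                f.seek(4)
                f.write(struct.pack("<I", 36 + data_size))  # RIFF size = 36 + data bytes
                f.seek(40)
                f.write(struct.pack("<I", data_size))

    Many readers assume a single data chunk and stop after it, which is consistent with the behaviour described above; sticking to one data chunk and patching the sizes once the length is known tends to be the safer route.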

    Read the article

  • Oracle (PL/SQL): Is UPDATE RETURNING concurrent?

    - by Jaap
    I'm using a table with a counter to ensure unique ids on a child element. I know it is usually better to use a sequence, but I can't use one because I have a lot of counters (a customer can create a couple of buckets and each of them needs its own counter; they have to start with 1 - it's a requirement, my customer needs "human readable" keys). I'm creating records (let's call them items) that have a prikey (bucket_id, num = counter). I need to guarantee that the bucket_id / num combination is unique (so using a sequence as the prikey won't fix my problem). The creation of rows doesn't happen in PL/SQL, so I need to claim the number (by the way: it's not against the requirements to have gaps). My solution was:

        UPDATE bucket
           SET counter = counter + 1
         WHERE id = param_id
        RETURNING counter INTO num_forprikey;

    PL/SQL returns var_num_forprikey so the item record can be created. Question: will I always get a unique num_forprikey even if users concurrently ask for new items in a bucket?

    Read the article

  • how to update database using datagridview?

    - by Sikret Miseon
    How do we update tables using a DataGridView, assuming the DataGridView is editable at runtime? Any kind of help is appreciated.

        Dim con As New OleDb.OleDbConnection
        Dim dbProvider As String
        Dim dbSource As String
        Dim ds As New DataSet
        Dim da As OleDb.OleDbDataAdapter
        Dim sql As String

        Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            dbProvider = "PROVIDER=Microsoft.ACE.OLEDB.12.0;"
            dbSource = "Data Source = C:\record.accdb"
            con.ConnectionString = dbProvider & dbSource
            con.Open()
            sql = "SELECT * FROM StudentRecords"
            da = New OleDb.OleDbDataAdapter(sql, con)
            da.Fill(ds, "AddressBook")
            MsgBox("Database is now open")
            con.Close()
            MsgBox("Database is now Closed")
            DataGridView1.DataSource = ds
            DataGridView1.DataMember = "AddressBook"
        End Sub

    Read the article

  • how to optimize an oracle query that has to_char in where clause for date

    - by panorama12
    I have a table that contains about 49403459 records. I want to query the table on a date range, say 04/10/2010 to 04/10/2010. However, the dates are stored in the table in the format 10-APR-10 10.15.06.000000 AM (a timestamp). As a result, when I do:

        SELECT bunch, of, stuff, create_date
          FROM myTable
         WHERE TO_CHAR(create_date, 'MM/DD/YYYY') >= '04/10/2010'
           AND TO_CHAR(create_date, 'MM/DD/YYYY') <= '04/10/2010'

    I get 529 rows, but in 255.59 seconds! Which is, I guess, because I am doing TO_CHAR on EACH record. However, when I do:

        SELECT bunch, of, stuff, create_date
          FROM myTable
         WHERE create_date >= to_date('04/10/2010', 'MM/DD/YYYY')
           AND create_date <= to_date('04/10/2010', 'MM/DD/YYYY')

    then I get 0 results in 0.14 seconds. How can I make this query fast and still get valid (529) results? At this point I cannot change indexes. Right now I think the index is created on the create_date column.
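
    The usual fix is to keep create_date bare (so the index stays usable) and compare against a half-open range: to_date('04/10/2010','MM/DD/YYYY') is midnight, so ">= midnight AND <= the same midnight" only matches rows stamped exactly at 00:00:00, which would explain the 0 rows. A rough sketch of the bind-variable version (shown via Python with an assumed cx_Oracle driver and made-up connection details):

        import datetime
        import cx_Oracle  # assumed driver; any DB-API driver binds dates the same way

        conn = cx_Oracle.connect("app_user/secret@dbhost/orcl")   # hypothetical DSN
        cur = conn.cursor()

        day = datetime.date(2010, 4, 10)
        cur.execute("""
            SELECT create_date
              FROM myTable
             WHERE create_date >= :start_day     -- index on create_date stays usable
               AND create_date <  :next_day      -- half-open range covers the whole day
        """, {"start_day": day, "next_day": day + datetime.timedelta(days=1)})

        rows = cur.fetchall()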

    Read the article

  • Helping managers and customers understand SOA

    - by David
    I frequently hear Service-Oriented Architecture (SOA) being tossed around as a buzzword among non-technical customers or program managers with little concern for, or understanding of, what it actually entails (example: "Can I buy a SOA?"). There's also a lot of misinformation about SOA (example: "Only web apps can use SOA") and a general lack of understanding of its capabilities (example: "SOA can make all of your data work together"). What are some key facts that you, as someone who understands the technical side of SOA, use to educate program managers on the appropriate use and understanding of SOA? What's the best way to set the record straight with non-technical folks?

    Read the article

  • mysql LAST_INSERT_ID() used with multiple records INSERT statement

    - by bogdan
    Hello, if I insert multiple records with a loop that executes single-record inserts, the last insert id returned is, as expected, the last one... but if I do a multiple-record insert statement:

        INSERT INTO people (name, age) VALUES ('William', 25), ('Bart', 15), ('Mary', 12);

    let's say the three above are the first records inserted in the table... after the insert statement I expected the last insert id to return 3, but it returned 1... the first insert id for the statement in question... So can someone please confirm whether this is the normal behavior of LAST_INSERT_ID() in the context of multiple-record INSERT statements, so I can base my code on it? Thanks :)
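
    For what it's worth, that matches the documented MySQL behaviour: for a multiple-row INSERT, LAST_INSERT_ID() returns the id generated for the first row of the batch. A small sketch of how client code typically works with that (Python with an assumed pymysql driver; the consecutive-id arithmetic assumes the batch was allocated a contiguous block, which holds for ordinary inserts under the default auto-increment settings):

        import pymysql

        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               database="mydb")   # hypothetical credentials
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO people (name, age) VALUES (%s,%s), (%s,%s), (%s,%s)",
                ("William", 25, "Bart", 15, "Mary", 12))
            first_id = cur.lastrowid              # id of the FIRST row in the batch
            ids = list(range(first_id, first_id + cur.rowcount))
            print(ids)                            # e.g. [1, 2, 3] for the example above
        conn.commit()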

    Read the article

  • Rails: large amount of data in a single insert, ActiveRecord gave out

    - by Nik
    So I have, I think, around 36,000 records (just to be safe), a number I wouldn't think was too large for a modern SQL database like MySQL. Each record has just two attributes. So I collected them into one single insert statement:

        sql = "INSERT INTO tasks (attrib_a, attrib_b) VALUES (c1,d1),(c2,d2),(c3,d3)...(c36000,d36000);"
        ActiveRecord::Base.connection.execute sql

    and got:

        from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
        from C:/Ruby/lib/ruby/gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute_without_analyzer
        from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
        from C:/Ruby/lib/ruby/1.8/benchmark.rb:308:in `realtime'
        from c:/r/projects/vendor/plugins/rails-footnotes/lib/rails-footnotes/notes/queries_note.rb:130:in `execute'
        from (irb):53
        from C:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/vendor/tzinfo-0.3.12/tzinfo/time_or_datetime.rb:242

    I don't know if the above info is enough; please do ask for anything that I didn't provide here. So, any idea what this is about? THANK YOU!!!!
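
    Not the poster's Rails code, but the usual workaround for an over-long VALUES list is to send it in batches; a rough Python sketch of the chunking idea (the 1,000-rows-per-statement figure is arbitrary):

        def chunks(rows, size=1000):
            """Yield successive slices of at most `size` rows."""
            for start in range(0, len(rows), size):
                yield rows[start:start + size]

        def insert_in_batches(cursor, rows):
            # one multi-row INSERT per chunk keeps each statement well under
            # server-side limits such as MySQL's max_allowed_packet
            for batch in chunks(rows):
                cursor.executemany(
                    "INSERT INTO tasks (attrib_a, attrib_b) VALUES (%s, %s)", batch)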

    Read the article

  • insert a date in mysql database

    - by kawtousse
    I use a jQuery datepicker, then I read its value in my servlet like this:

        String dateimput = request.getParameter("datepicker"); //1

    then parse it like this:

        System.out.println("datepicker:" + dateimput);
        DateFormat df = new SimpleDateFormat("MM/dd/yyyy");
        java.util.Date dt = null;
        try {
            dt = df.parse(dateimput);
            System.out.println("date imput parssé1 est:" + dt);
            System.out.println("date imput parsée2 est:" + df.format(dt));
        } catch (ParseException e) {
            e.printStackTrace();
        }

    and build the insert query like this:

        String query = "Insert into dailytimesheet(trackingDate,activity,projectCode) values (" + df.format(dt) + ", \"" + activity + "\" ,\"" + projet + "\")";

    Everything passes successfully up to this point, but when I check the inserted record I find the date 01/01/0001 00:00:00. I've tried to fix it but it is still a mess for me.
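
    The servlet is Java, but the general point - bind the date as a real parameter instead of splicing an unquoted MM/dd/yyyy string into the SQL text - looks much the same in any database API. A rough sketch of the parameterised form (Python with an assumed pymysql driver; credentials and the activity/project values are made up, the table and columns are from the post):

        import datetime
        import pymysql

        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               database="timesheets")   # hypothetical credentials
        with conn.cursor() as cur:
            tracking_date = datetime.datetime.strptime("04/10/2010", "%m/%d/%Y").date()
            cur.execute(
                "INSERT INTO dailytimesheet (trackingDate, activity, projectCode) "
                "VALUES (%s, %s, %s)",
                (tracking_date, "coding", "PRJ-1"))      # values bound, not spliced in
        conn.commit()

    In the Java servlet itself, the analogous tool is a PreparedStatement with setDate, which avoids the quoting and date-format issues in the first place.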

    Read the article
