Search Results

Search found 29702 results on 1189 pages for 'select insert'.


  • How to implement Ajax in Ruby on Rails via jQuery

    - by swaroopsm
    How do I pass a few of my form field values to a controller using Ajax/jQuery? For example, in PHP/jQuery I do something like this:

        $("#test-btn").click(function(){
            var name = $("#name").val();
            var age  = $("#age").val();
            $.post("insert.php", {name: name, age: age}, function(data){
                $("#respone").html(data).hide().fadeIn(500);
            });
        });

        // insert.php
        <?php
            // insert values into the database!
        ?>

    How do I achieve similar functionality in Rails using Ajax/jQuery?

    Read the article

  • Why can't I use a template table in a dynamic query in SQL Server 2005?

    - by StuffHappens
    Hello! I have the following T-SQL code, which generates an error:

        DECLARE @table TABLE
        (
            ID1 int,
            ID2 int
        )
        INSERT INTO @table VALUES (1, 1);
        INSERT INTO @table VALUES (2, 2);
        INSERT INTO @table VALUES (3, 3);

        DECLARE @field varchar(50);
        SET @field = 'ID1'

        DECLARE @query varchar(MAX);
        SET @query = 'SELECT * FROM @table WHERE ' + @field + ' = 1'

        EXEC (@query)

    The error is: Must declare the table variable "@table". What's wrong with the query, and how can I fix it?
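
    A likely cause is scope: EXEC runs the string as a separate batch, where the table variable declared outside it is not visible. A minimal sketch of one workaround, using a session-scoped temp table instead of the table variable (the #t name and sp_executesql usage are illustrative, not from the original question):

        CREATE TABLE #t (ID1 int, ID2 int);
        INSERT INTO #t VALUES (1, 1);
        INSERT INTO #t VALUES (2, 2);
        INSERT INTO #t VALUES (3, 3);

        DECLARE @field sysname;
        SET @field = 'ID1';

        DECLARE @query nvarchar(MAX);
        -- temp tables are visible to dynamic SQL run in the same session;
        -- QUOTENAME guards the injected column name
        SET @query = N'SELECT * FROM #t WHERE ' + QUOTENAME(@field) + N' = 1';
        EXEC sp_executesql @query;

        DROP TABLE #t;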

    Read the article

  • SQL Temp Tables & Replication

    - by Refracted Paladin
    I have had an issue with our replication process and would like to salvage some data. I have a process in place where I will connect to each subscriber before flagging it for reinitialization, and I will run the script below to pull any data they may have entered during the "dark time". I am pretty sure this will work in a vanilla setup. What I am unsure of is whether the global temporary table will persist through DB replication. To be clear, I am not trying to replicate the temp table; I just want to make sure it will still exist in the local DB after the replication so I can run the INSERT from it. Thoughts?

        USE MemberCenteredPlan

        -- Select data from tblPlan
        SELECT *
        INTO ##MyPlan
        FROM tblPlan
        WHERE PlanID = 407869

        ---------------------------
        -- Run Replication Process
        ---------------------------

        -- Insert plan back into DB
        INSERT INTO tblPlan
        SELECT *
        FROM ##MyPlan
        WHERE PlanID = 407869

        -- Drop global temp table
        DROP TABLE ##MyPlan

        ---------------------------
        -- Run Replication Process
        ---------------------------
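
    A global temp table only lives as long as the session that created it (and any sessions still referencing it), so if the connection might not stay open across the replication step, a safer variant of the same idea is a small permanent staging table. A sketch under that assumption (the staging table name is made up):

        -- a permanent table survives disconnects and reinitialization
        SELECT *
        INTO dbo.MyPlan_Staging
        FROM tblPlan
        WHERE PlanID = 407869;

        -- run replication / reinitialization here

        INSERT INTO tblPlan
        SELECT * FROM dbo.MyPlan_Staging
        WHERE PlanID = 407869;

        DROP TABLE dbo.MyPlan_Staging;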

    Read the article

  • MySQL ON DUPLICATE KEY UPDATE issue

    - by user644347
    Hi, could someone look at this and tell me where I am going wrong? I have an SQL statement that, when I echo it using PHP, prints this to the screen:

        INSERT INTO 'moviedb'.'genre' SET 'GenreID' = '18' , 'GenreName' = 'Drama'
            ON DUPLICATE KEY UPDATE 'GenreName' = 'Drama' WHERE 'GenreID' = '18'
        INSERT INTO 'moviedb'.'genre' SET 'GenreID' = '16' , 'GenreName' = 'Animation'
            ON DUPLICATE KEY UPDATE 'GenreName' = 'Animation' WHERE 'GenreID' = '16'

    And here is the statement:

        $sql = "INSERT INTO 'moviedb'.'genre' SET 'GenreID' = '{$genresID[$i]}' , 'GenreName' = '{$genreName[$i]}'
                ON DUPLICATE KEY UPDATE 'GenreName' = '{$genreName[$i]}' WHERE 'GenreID' = '{$genresID[$i]}'";

    This is the error I receive:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version
        for the right syntax to use near ''moviedb'.'genre' SET 'GenreID' = '18' , 'GenreName' = 'Drama'
        ON DUPLICATE KEY ' at line 1

    Any help would be greatly appreciated; thanks in advance.
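
    Two syntax issues likely explain the error: identifiers (database, table, and column names) are quoted with single quotes instead of backticks, so MySQL reads them as string literals, and ON DUPLICATE KEY UPDATE does not accept a WHERE clause (the duplicate key itself determines which row gets updated). A hedged sketch of the corrected statement, keeping the names from the question:

        INSERT INTO `moviedb`.`genre`
        SET `GenreID` = '18', `GenreName` = 'Drama'
        ON DUPLICATE KEY UPDATE `GenreName` = 'Drama';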

    Read the article

  • Hibernate Auto-Increment not working

    - by dharga
    I have a column in my DB that is set with IDENTITY(1,1), and I can't get Hibernate annotations to work for it; I get errors when I try to create a new record. In my entity I have the following:

        @GeneratedValue(strategy=GenerationType.IDENTITY, generator="native")
        @Column(name="SeqNo", unique=true, nullable=false)
        BigDecimal seqNo;

    But when I try to add a new record I get the following error:

        Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF.

    I don't want to set IDENTITY_INSERT to ON because I want the identity column in the DB to manage the values. The SQL that is run is the following, where you can clearly see the insert:

        insert into dbo.MemberSelectedOptions
            (OptionStatusCd, EffectiveDate, TermDate, SelectionStatusDate, SysLstUpdtUserId,
             SysLstTrxDtm, SourceApplication, GroupId, MemberId, OptionId, SeqNo)
        values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)

    What am I missing?

    Read the article

  • Can you modify SQL DB schema in a transaction to know if all changes were applied?

    - by Chris F
    As part of my (new) database version control methodology, I'm writing a "change script" and I want it to insert a new row into the SchemaChangeLog table if the script executes successfully, or to reverse its changes if any single change in the script fails. Is it possible to make schema changes in a transaction and, only if it gets committed, then do the INSERT? For example (pseudo-code, I'm not too good with SQL):

        SET XACT_ABORT ON
        BEGIN TRANSACTION

        PRINT 'Add Col2 to Table1'
        IF NOT EXISTS (SELECT * FROM sys.columns WHERE name = 'Col2' AND object_id = OBJECT_ID('Table1'))
        BEGIN
            ALTER TABLE [dbo].[Table1] ADD Col2 int NULL
        END

        -- maybe COMMIT here?
        INSERT INTO SchemaChangeLog VALUES(...)

        COMMIT TRANSACTION
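
    In SQL Server, most DDL is transactional, so the ALTER and the log INSERT can share one transaction and either both commit or both roll back; wrapping them in TRY/CATCH makes the failure path explicit. A minimal sketch of that pattern, with the table and column names taken from the question and the SchemaChangeLog columns purely illustrative:

        SET XACT_ABORT ON;
        BEGIN TRY
            BEGIN TRANSACTION;

            IF NOT EXISTS (SELECT * FROM sys.columns
                           WHERE name = 'Col2' AND object_id = OBJECT_ID('dbo.Table1'))
            BEGIN
                ALTER TABLE dbo.Table1 ADD Col2 int NULL;
            END

            -- illustrative log columns; only runs if the DDL above succeeded
            INSERT INTO SchemaChangeLog (ScriptName, AppliedOn)
            VALUES ('add-col2-to-table1', GETDATE());

            COMMIT TRANSACTION;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
            DECLARE @msg nvarchar(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);   -- re-raise so the caller sees the failure
        END CATCH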

    Read the article

  • How to get regex split from other var

    - by Dean
    Hi,

        $dbLink = mysql_connect('localhost', 'root', 't');
        mysql_select_db('pc_spec', $dbLink);

        $html = file_get_contents('http://localhost/pc_spec/scrape.php?q=amd+955');
        echo $html;
        $html = strip_tags($html);

        $price = ereg("\$.{6}", $html);

        $html  = mysql_real_escape_string($html);
        $price = mysql_real_escape_string($price);

        $insert = mysql_query("INSERT INTO parts(part, price) values('$html','$price')")
            or var_dump(mysql_error());

    How can I get $price to match \$.{6} and insert this value (e.g. $111.11) into a database and remove it from $html? Do I need to use explode? Thanks

    Read the article

  • Balanced Search Tree Query, Asymptotic Analysis

    - by AGeek
    Hi, the situation is as follows: we have n numbers and we have to print them in sorted order. We have access to a balanced dictionary data structure which supports the operations search, insert, delete, minimum, and maximum, each in O(log n) time. We want to retrieve the numbers in sorted order in O(n log n) time using only insert and in-order traversal. The answer to this is:

        Sort()
            initialize(t)
            while (not EOF)
                read(x)
                insert(x, t);
            Traverse(t);

    Now the query is: if we read the elements in "n" time and then traverse the elements in "log n" (in-order traversal) time, then the total time for this algorithm is (n + log n), according to me. Please explain the time calculation for this algorithm. How does it sort the list in O(n log n) time? Thanks.

    Read the article

  • Copy Data from One SQL Server Table to Another in the Same Database Without Specifying Columns

    - by Scott
    In SQL Server you can INSERT all of the data from one table into another using the following statement:

        INSERT INTO TABLE1
        SELECT * FROM TABLE2

    When running this on a table with an identity column, even though we have run the command SET IDENTITY_INSERT TABLE1 ON, we get the error:

        An explicit value for the identity column in table can only be specified when a column list is used and IDENTITY_INSERT is ON

    This implies that we would have to list all of the columns within the INSERT in order for this to work properly. In the situation that we are in, we do not have access to the column names, only the list of the tables we need to copy over. Is there any way to get around this limitation?
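
    One common workaround is to read the column names from the catalog views at run time and build the statement dynamically, since the names don't have to be known up front. A rough sketch, assuming both tables have identical layouts (the @src/@dst variable names are illustrative):

        DECLARE @src sysname, @dst sysname, @cols nvarchar(MAX), @sql nvarchar(MAX);
        SET @src = 'TABLE2';
        SET @dst = 'TABLE1';

        -- build a comma-separated, quoted column list from the source table's metadata
        SELECT @cols = STUFF((
            SELECT ', ' + QUOTENAME(name)
            FROM sys.columns
            WHERE object_id = OBJECT_ID(@src)
            ORDER BY column_id
            FOR XML PATH('')), 1, 2, '');

        SET @sql = N'SET IDENTITY_INSERT ' + QUOTENAME(@dst) + N' ON; '
                 + N'INSERT INTO ' + QUOTENAME(@dst) + N' (' + @cols + N') '
                 + N'SELECT ' + @cols + N' FROM ' + QUOTENAME(@src) + N'; '
                 + N'SET IDENTITY_INSERT ' + QUOTENAME(@dst) + N' OFF;';
        EXEC sp_executesql @sql;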

    Read the article

  • Is putting N in front of strings in scripts considered a "best practice"?

    - by jcollum
    Let's say I have a table that has a varchar field. If I do an insert like this:

        INSERT MyTable SELECT N'the string goes here'

    is there any fundamental difference between that and:

        INSERT MyTable SELECT 'the string goes here'

    My understanding was that you'd only have a problem if the string contained a Unicode character and the target column wasn't Unicode. Other than that, SQL deals with it just fine and converts the string with the N'' into a varchar field (basically ignores the N). I was under the impression that N in front of strings was a good practice, but I'm unable to find any discussion of it that I'd consider definitive. Title may need improvement, feel free.
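
    The difference tends to show up going the other way: when the target column is nvarchar and the literal lacks the N prefix, the string is first interpreted as varchar in the database's default code page, so characters outside that code page are lost before the insert. A small illustrative test (the table is made up for the demonstration):

        CREATE TABLE #Demo (v nvarchar(20));
        INSERT #Demo SELECT 'Домой';   -- non-Unicode literal: may arrive as '?????' depending on collation
        INSERT #Demo SELECT N'Домой';  -- Unicode literal: stored intact
        SELECT * FROM #Demo;
        DROP TABLE #Demo;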

    Read the article

  • Why does Cache.Add return an object that represents the cached item?

    - by Pure.Krome
    From MSDN, about the differences between adding and inserting an item in the ASP.NET Cache: "Note: The Add and Insert methods have the same signature, but there are subtle differences between them. First, calling the Add method returns an object that represents the cached item, while calling Insert does not. Second, their behavior is different if you call these methods and add an item to the Cache that is already stored there. The Insert method replaces the item, while the Add method fails." [emphasis mine] The second part is easy; no question about that. But for the first part, why would it want to return an object that represents the cached item? If I'm trying to Add an item to the cache, I already have/know what that item is. I don't get it. What is the reasoning behind this?

    Read the article

  • How is fseek() implemented in the filesystem?

    - by pajton
    This is not a pure programming question; however, it impacts the performance of programs using fseek(), hence it is important to know how it works. (A little disclaimer so that it doesn't get closed.) I am wondering how efficient it is to insert data in the middle of a file. Suppose I have a file with 1 MB of data and I then insert something at the 512 KB offset. How efficient would that be compared to appending my data at the end of the file? Just to make the example complete, let's say I want to insert 16 KB of data. I understand the answer varies depending on the filesystem; however, I assume that the techniques used in common filesystems are quite similar, and I just want to get the right notion of it.

    Read the article

  • Does std::multiset guarantee insertion order?

    - by Naveen
    I have a std::multiset which stores elements of class A, and I have provided my own implementation of operator< for this class. My question is: if I insert two equivalent objects into this multiset, is their order guaranteed? For example, first I insert an object a1 into the set and then I insert an equivalent object a2. Can I expect a1 to come before a2 when I iterate through the set? If not, is there any way to achieve this using multiset?

    Read the article

  • PHP: Simulate multiple MySQL connections to same database

    - by Varun
    An insert query is constantly getting logged in my MySQL slow query log. I want to see how much time the INSERT query takes with 100 simultaneous insert operations (to the same table). So I need to simulate the following: 500 different simultaneous connections from PHP to the same database on a MySQL server, all of which are inserting a row (simultaneously) into the same table. Do I need to use a load testing tool, or can I simply write a PHP script to do this? Any thoughts?

    Read the article

  • Attributes passed to .build() don't show up in the query

    - by Sebastian
    Hi there, guys! Hope you're all enjoying your holidays. I've run into a pretty funny problem when trying to insert rows into a really, really simple database table. The basic idea is simple: the user selects one or more users in a multi-select, and they are supposed to be added to a group. This piece of code should insert a row into the user_group_relationships table:

        @group = Group.find(params[:group_id])
        params[:newMember][:users].each do |uid|
          # For debugging purposes.
          puts 'Uid:' + uid
          @rel = @group.user_group_relationships.build(:user_id => uid.to_i)
          @rel.save
        end

    The user id always gets inserted as NULL even though it is clearly there. You can see the uid in this example is 5, so it should work:

        Uid:5
        ...
        SQL (0.3ms) INSERT INTO "user_group_relationships" ("created_at", "group_id", "updated_at", "user_id")
        VALUES ('2010-12-27 14:03:24.331303', 2, '2010-12-27 14:03:24.331303', NULL)

    Any ideas why this fails?

    Read the article

  • running a stored procedure inside a sql trigger

    - by Ying
    Hi all, the business logic in my application requires me to insert a row in table X when a row is inserted in table Y. Furthermore, it should only do it between specific times (which I have to query another table for). I have considered running a script every 5 minutes or so to check this, but I stumbled upon triggers and figured this might be a better way to do it. But I find the syntax for procedures a little bewildering, and I keep getting an error I have no idea how to fix. Here is where I start:

        CREATE TRIGGER reservation_auto_reply AFTER INSERT ON reservation
        FOR EACH ROW BEGIN
            IF NEW.sent_type = 1 /* In-App */ THEN
                INSERT INTO `messagehistory` (`trip`, `fk`, `sent_time`, `status`, `message_type`, `message`)
                VALUES (NEW.trip, NEW.psk, 'NOW()', 'submitted', 4, 'This is an automated reply to reservation');
        END;

    I get an error in the VALUES part of the statement but I'm not sure where. I still have to query the other table for the information I need, but I can't even get past this part. Any help is appreciated, including links to many examples. Thanks
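
    A few details commonly trip up this syntax: the IF block needs a matching END IF, the compound BEGIN ... END body needs a custom DELIMITER when run from the mysql client, and 'NOW()' in quotes inserts the literal string rather than the current time. A hedged sketch of a corrected trigger, keeping the table and column names from the question:

        DELIMITER $$
        CREATE TRIGGER reservation_auto_reply AFTER INSERT ON reservation
        FOR EACH ROW
        BEGIN
            IF NEW.sent_type = 1 /* In-App */ THEN
                INSERT INTO messagehistory (trip, fk, sent_time, status, message_type, message)
                VALUES (NEW.trip, NEW.psk, NOW(), 'submitted', 4,
                        'This is an automated reply to reservation');
            END IF;
        END$$
        DELIMITER ;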

    Read the article

  • Fastest way to insert a very large number of records into a table in SQL

    - by Irchi
    The problem is that we have a huge number of records (more than a million) to be inserted into a single table from a Java application. The records are created by the Java code; it's not a move from another table, so INSERT/SELECT won't help. Currently, my bottleneck is the INSERT statements. I'm using PreparedStatement to speed up the process, but I can't get more than 50 records per second on a normal server. The table is not complicated at all, and there are no indexes defined on it. The process takes too long, and the time it takes will cause problems. What can I do to get the maximum speed (INSERTs per second) possible? Database: MS SQL 2008. Application: Java-based, using the Microsoft JDBC driver.
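
    One common way to get past per-row INSERT overhead is to have the application write the rows to a flat file and then load them in a single set-based operation with BULK INSERT (or the bcp utility), alongside batching on the JDBC side. A rough sketch of the SQL half, with a made-up file path and table name:

        BULK INSERT dbo.TargetTable
        FROM 'C:\load\rows.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            TABLOCK,                 -- allows minimal logging under the right recovery model
            BATCHSIZE       = 100000
        );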

    Read the article

  • loop in simplexml load file

    - by shantanuo
    I am using the following code to parse XML data into a MySQL table, and it works as expected:

        <?php
        $sxe = simplexml_load_file("$myfile");
        foreach ($sxe->AUTHAD as $sales) {
            $authad = "insert into test.authad values ('" . mysql_real_escape_string($sales->LOCALDATE) . "','"
                    . mysql_real_escape_string($sales->LOCALTIME) . "','"
                    . mysql_real_escape_string($sales->TLOGID) . "','" ...
        ?>

    The problem is that when I get a new XML file with a different format, I cannot reuse the INSERT INTO statement above and have to change the parameters manually. For example:

        $authadv_new = "insert into test.authad values ('" . mysql_real_escape_string($sales->NEW1) . "','"
                     . mysql_real_escape_string($sales->NEW2) . "','"
                     . mysql_real_escape_string($sales->NEW3) . "','" ...

    Is there any way to automate this? I mean, the PHP code should be able to anticipate the parameters and generate the mysql_real_escape_string($sales->NEW1) to NEW10 values using a loop.

    Read the article

  • Most efficient way to maintain a 'set' in SQL Server?

    - by SEVEN YEAR LIBERAL ARTS DEGREE
    I have ~2 million rows or so of data, each row with an artificial PK and two ID fields (so: PK, ID1, ID2). I have a unique constraint (and index) on ID1+ID2. I get two sorts of updates, both with a distinct ID1 per update:

        1. 100-1000 rows of all-new data (ID1 is new)
        2. 100-1000 rows of largely, but not necessarily completely, overlapping data (ID1 already exists, maybe new ID1+ID2 pairs)

    What's the most efficient way to maintain this 'set'? Here are the options as I see them:

        1. Delete all the rows with ID1, insert all the new rows (yikes)
        2. Query all the existing rows from the set of new data ID1+ID2, only insert the new rows
        3. Insert all the new rows, ignore inserts that trigger unique constraint violations

    Any thoughts?
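
    A fourth option, if the server is SQL Server 2008 or later, is a single set-based upsert with MERGE, which avoids both the delete/reinsert churn and relying on constraint violations. A rough sketch, with made-up target and staging table names:

        MERGE dbo.TargetSet AS t
        USING #IncomingRows AS s
            ON t.ID1 = s.ID1 AND t.ID2 = s.ID2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (ID1, ID2) VALUES (s.ID1, s.ID2);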

    Read the article

  • Package soap_api with several layer tag hierarchy

    - by Celta
    Hello, I've seen in this topic (http://stackoverflow.com/questions/37586/consuming-web-services-from-oracle-pl-sql) that it's possible to modify the soap_api package from Oracle (http://www.oracle-base.com/articles/9i/ConsumingWebServices9i.php) so that it "extended the SOAP_API package to allow the client code to arbitrarily insert an extra tag. So you can insert a sub-level, continue to build the request, and remember to insert a closing tag". I'm interested in this application. Do you know if this package exists? Can somebody send me the code? Thanks.

    Read the article

  • Inserting a date and time value into MySQL from PHP

    - by DomingoSL
    Hello, I want to send to a MySQL database a date and time in a format compatible with the MySQL DATETIME format, which is 0000-00-00 00:00:00. I'm using this code, which takes the date and time from a PHP command:

        $insert = "INSERT INTO sms (ref, texto, fecha) VALUES ('" . $_POST['usuario'] . "', '" . $_POST['sms'] . "', '"
                . date(DATE_ATOM, mktime(0, 0, 0, 7, 1, 2000)) . "')";
        $add_member = mysql_query($insert);

    I really want to do this with a MySQL command instead, I mean, using the date and time from the MySQL server. Thanks
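
    If the goal is to use the database server's clock rather than PHP's, MySQL's NOW() function can be placed directly in the VALUES list. A minimal sketch with the column names from the question and made-up values for the other columns:

        INSERT INTO sms (ref, texto, fecha)
        VALUES ('some_user', 'some message', NOW());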

    Read the article

  • How can I treat a sequence value like a generated key?

    - by Jeff Knecht
    Here is my situation and my constraints: I am using Java 5, JDBC, and DB2 9.5. My database table contains a BIGINT value which represents the primary key. For various reasons that are too complicated to go into here, the way I insert records into the table is by executing an insert against a VIEW; an INSTEAD OF trigger retrieves the NEXT_VAL from a SEQUENCE and performs the INSERT into the target table. I can change the triggers, but I cannot change the underlying table or the general approach of inserting through the view. I want to retrieve the sequence value from JDBC as if it were a generated key. Question: how can I get access to the value pulled from the SEQUENCE? Is there some message I can fire within DB2 to float this sequence value back to the JDBC driver?

    Read the article

  • problem with sql statement using vb.net

    - by newBie
    Hi, I have a problem. I have this statement:

        Dim infoID As Integer = objCommand1.ExecuteScalar()
        Dim strSQL2 As String = "insert into feedBackHotel (infoID, feedBackView) values(" + infoID + ",'" + FeedBack + "')"
        Dim objCommand2 As New SqlCommand(strSQL2, conn)
        objCommand2.ExecuteNonQuery()

    The problem here is that strSQL2 can capture the infoID from the previous statement, but it doesn't insert into the database. It pops up this error: "Conversion from string "insert into feedBackHotel (infoI" to type 'Double' is not valid." Both related tables use the same data type (int), but for feedBackHotel's infoID I allow it to be null, because if I make it not null, it shows another error. I'm using VB.NET; can anyone help?

    Read the article

  • SqlAlchemy hangs after adding record in MS SQL

    - by Patrick
    I'm running SQLAlchemy on Jython and trying to connect to an MS SQL database using jTDS with Windows authentication. I can query and delete just fine, but when I try to insert new values it hangs when I commit:

        print 'before add'
        session.add(newVal)
        print 'after add'
        session.commit()
        print 'after commit'

    I see the first two print statements but not the last. My CPU maxes out and I can't even query the table directly using MS SQL Management Studio. When I kill the Jython Java process I can query again, but the new values haven't been added. Strangely enough, I can insert values directly using an SQL command:

        insert_sql = "INSERT INTO my_table (my_value) VALUES ('test_value')"
        session.execute(insert_sql)
        session.commit()

    Any ideas what I'm doing wrong?

    Read the article

  • Best Practice for loading non-existent data

    - by Aizotu
    I'm trying to build a table in MS SQL 2008 loaded with roughly 50,000 rows of data. Right now I'm doing something like this:

        Create Table MyCustomData
        (
            ColumnKey Int Null,
            Column1 NVarChar(100) Null,
            Column2 NVarChar(100) Null
            Primary Key Clustered
            (
                ColumnKey ASC
            )
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                  ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

        CREATE INDEX IDX_COLUMN1 ON MyCustomData([COLUMN1])
        CREATE INDEX IDX_COLUMN2 ON MyCustomData([COLUMN2])

        DECLARE @MyCount Int
        SET @MyCount = 0

        WHILE @MyCount < 50000
        BEGIN
            INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
            Select @MyCount + 1, 'Custom Data 1', 'Custom Data 2'
            Set @MyCount = @MyCount + 1
        END

    My problem is that this is crazy slow. I thought at one point I could create a SELECT statement to build my custom data and use that as the data source for my INSERT INTO statement, i.e. something like:

        INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
        From (Select Top 50000 Row_Count(), 'Custom Data 1', 'Custom Data 2')

    I know this doesn't work, but it's the only thing I can show that seems to provide an example of what I'm after. Any suggestions would be greatly appreciated.
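
    One set-based way to generate the 50,000 rows in a single statement is to number the rows of an existing large rowset (a cross join of catalog views is a common trick) instead of looping. A rough sketch against the same table:

        INSERT INTO MyCustomData (ColumnKey, Column1, Column2)
        SELECT TOP (50000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)),  -- sequential key values
               'Custom Data 1',
               'Custom Data 2'
        FROM sys.all_columns a
        CROSS JOIN sys.all_columns b;   -- guarantees far more than 50,000 source rows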

    Read the article
