Search Results

Search found 26283 results on 1052 pages for 'temporary table'.


  • performing auditing in java with sql server DB - before and/or after do not get audited

    - by Domingos
    When auditing, sometimes the before value does not get audited, other times the after value does not get audited, and other times neither value gets audited at all. After researching, I found out that only values from a specific codes table get audited. The code was: compareCodesTableInteger(audit, int, int, objectBefore, objectAfter, stringDescription, stringCodesTable); I then changed it to: compareCodesTableInteger(audit, int, int, objectBefore, objectAfter, stringDescription, booleanCheck ? stringCodesTableIfTrue : stringCodesTableIfFalse); Description: if objectBefore AND objectAfter are both from stringCodesTableIfTrue OR both from stringCodesTableIfFalse, auditing takes place as expected. The problem is: most of the time, objectBefore is from stringCodesTableIfTrue and objectAfter is from stringCodesTableIfFalse, or vice versa. In this scenario auditing fails. How do I work around this? Please assist.

    Read the article

  • Hibernate Auto-Increment not working

    - by dharga
    I have a column in my DB that is set with Identity(1,1) and I can't get Hibernate annotations to work for it. I get errors when I try to create a new record. In my entity I have the following: @GeneratedValue(strategy=GenerationType.IDENTITY, generator="native") @Column(name="SeqNo", unique=true, nullable=false) BigDecimal seqNo; But when I try to add a new record I get the following error: Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF. I don't want to set IDENTITY_INSERT to ON because I want the identity column in the db to manage the values. The SQL that is run is the following, where you can clearly see that the insert includes the identity column: insert into dbo.MemberSelectedOptions (OptionStatusCd, EffectiveDate, TermDate, SelectionStatusDate, SysLstUpdtUserId, SysLstTrxDtm, SourceApplication, GroupId, MemberId, OptionId, SeqNo) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) What am I missing?
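
    For reference, a minimal T-SQL sketch of the SQL Server rule the error refers to (the trimmed table here is illustrative, not the author's full schema): an INSERT must leave an IDENTITY column out of its column list unless IDENTITY_INSERT is ON, so whatever SQL Hibernate generates has to omit SeqNo.

      -- Hypothetical, trimmed-down version of the table
      CREATE TABLE MemberSelectedOptions (
          SeqNo          INT IDENTITY(1,1) PRIMARY KEY,
          OptionStatusCd VARCHAR(10) NOT NULL
      );

      -- Works: the identity column is omitted and SQL Server assigns SeqNo itself
      INSERT INTO MemberSelectedOptions (OptionStatusCd) VALUES ('A');

      -- Fails with the error quoted above: an explicit value is supplied for SeqNo
      INSERT INTO MemberSelectedOptions (SeqNo, OptionStatusCd) VALUES (1, 'A');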

    Read the article

  • mysql ON DUPLICATE KEY UPDATE

    - by julio
    Hi-- I'm stuck on a MySQL query using ON DUPLICATE KEY UPDATE. I'm getting the error: MySQL Error: 1062 - Duplicate entry 'hr2461809-3' for key 'fname' The table looks like this: id int(10) NOT NULL default '0', picid int(10) unsigned NOT NULL default '0', fname varchar(255) NOT NULL default '', type varchar(5) NOT NULL default '.jpg', path varchar(255) NOT NULL default '', PRIMARY KEY (id), UNIQUE KEY fname (fname), KEY picid (propid) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; And the query that's breaking is this: INSERT INTO images SET picid=732, fname='hr2461809-3', path='pictures/' ON DUPLICATE KEY UPDATE picid=732, fname='hr2461809-3', path='pictures/' I'm using a very similar query elsewhere in the app with no issues, and I'm not sure why this one breaks. I expected that when the UNIQUE KEY on fname gets violated, it would simply update the row where the violation occurred? Thanks for any help.
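
    One plausible explanation, offered as an assumption since the existing rows aren't shown: the table has two unique keys (the PRIMARY KEY on id and the UNIQUE KEY on fname), and because the insert never supplies id and the column has no AUTO_INCREMENT, every such insert targets id = 0. ON DUPLICATE KEY UPDATE can then match the existing id = 0 row via the primary key, and the UPDATE clause tries to give that row an fname already owned by a different row, which raises error 1062. A minimal sketch of that collision:

      CREATE TABLE images_demo (               -- hypothetical cut-down copy of the schema
        id    INT NOT NULL DEFAULT 0,
        fname VARCHAR(255) NOT NULL DEFAULT '',
        PRIMARY KEY (id),
        UNIQUE KEY fname (fname)
      ) ENGINE=MyISAM;

      INSERT INTO images_demo (id, fname) VALUES (1, 'hr2461809-3');   -- row that owns the fname
      INSERT INTO images_demo SET fname = 'something-else'
        ON DUPLICATE KEY UPDATE fname = 'something-else';              -- creates the id = 0 row
      INSERT INTO images_demo SET fname = 'hr2461809-3'
        ON DUPLICATE KEY UPDATE fname = 'hr2461809-3';
      -- The last statement matches the id = 0 row on the PRIMARY KEY, and updating its
      -- fname collides with the row where id = 1: error 1062 for key 'fname'.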

    Read the article

  • How can I clean up this SELECT query?

    - by Cruachan
    I'm running PHP 5 and MySQL 5 on a dedicated server (Ubuntu Server 8.10) with full root access. I'm cleaning up some LAMP code I've inherited and I have a large number of SQL selects with this type of construct: SELECT ... FROM table WHERE LCASE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE( strSomeField, ' ', '-'), ',', ''), '/', '-'), '&', ''), '+', '') ) = $somevalue Ignoring the fact that the database should never have been constructed to require such a select in the first place, and that the $somevalue field will need to be parameterised to plug the gaping security hole, what is my best option for turning the WHERE condition into something less offensive? If I was using MSSQL or Oracle I'd simply put together a user-defined function, but my experience with MySQL is more limited and I've not constructed a UDF with it before, although I'm happy coding C. Update: For all those who've already raised their eyebrows at this in the original code, $somevalue is actually something like $_GET['product'] (there are a few variations on the theme). In this case the select is pulling the product back from the database by product name, after stripping out characters so that it matches what could previously have been passed as a URI parameter.
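
    A minimal sketch of one option (the function name and the stored-function route are assumptions, not the author's code): MySQL 5 supports stored functions written in plain SQL, so the REPLACE chain can be defined once and the query reduced to a single call.

      DELIMITER //
      CREATE FUNCTION slugify(s VARCHAR(255))
      RETURNS VARCHAR(255) DETERMINISTIC
      BEGIN
        -- same transformation as the inline expression above, kept in one place
        RETURN LCASE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(
               s, ' ', '-'), ',', ''), '/', '-'), '&', ''), '+', ''));
      END//
      DELIMITER ;

      -- The select then becomes (with a bound parameter instead of an interpolated value):
      -- SELECT ... FROM table WHERE slugify(strSomeField) = ?

    Note this is a stored SQL function rather than a compiled C UDF, so nothing needs to be built or installed; the trade-off is that wrapping the column in a function still prevents any index on strSomeField from being used.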

    Read the article

  • Sort database entries via a dropdown list

    - by Lin
    Hello! I'm curious if anyone could possibly help me, because I can't find anything exactly related to this anywhere, and it's driving me nuts. I'd like to have a dropdown list on a page that will give the visitor the option to sort all entries by year. I have entries from, for example, 2001, 2005, 2009 and 2010. The years should be displayed in the dropdown, so the visitor can easily select all entries dated 2001 if they want. The year for each entry is located in the one database table I have. In other words, I simply want the kind of "sort by" dropdown that you can see on pretty much any shopping site nowadays, but with set years. Thanks in advance for any replies!
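
    A sketch of the two queries this usually involves (table and column names are assumptions, since the schema isn't shown): one to populate the dropdown with the years that actually exist, and one to filter by the year the visitor picks.

      -- years for the dropdown
      SELECT DISTINCT entry_year FROM entries ORDER BY entry_year;

      -- entries for the selected year (bind the value rather than interpolating it)
      SELECT * FROM entries WHERE entry_year = ? ORDER BY entry_year;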

    Read the article

  • CakePHP hasAndBelongsToMany (HABTM) Delete Joining Record

    - by Jason McCreary
    I have a HABTM relationship between Users and Locations. Both Models have the appropriate $hasAndBelongsToMany variable set. When managing User Locations, I want to delete the association between the User and Location, but not the Location. Clearly this Location could belong to other users. I would expect the following code to delete just the join table record given the HABTM associations, but it deletes both records. $this->Weather->deleteAll(array('Weather.id' => $this->data['weather_ids']), false); However, I am new to CakePHP, so I am sure I am missing something. I have tried setting cascade to false and changing the Model order with User, User-Weather, Weather-User. No luck. Thanks in advance for any help.

    Read the article

  • SQL Query Question ROW_CONCAT

    - by DaveC
    Hello guys, I have been stuck on this problem for quite a while... I hope someone out there can give me a hand. The following table is in my database:

    Product_ID | Color  | Type
    -----------+--------+--------
    1          | Red    | Leather
    1          | Silver | Metal
    1          | Blue   | Leather
    2          | Orange | Metal
    2          | Purple | Metal

    I am trying to get the following output:

    Product_ID | Type    | Color
    -----------+---------+----------------
    1          | Leather | Red, Blue
    1          | Metal   | Silver
    2          | Metal   | Orange, Purple

    I know it has to do with some kind of double GROUP BY and a GROUP_CONCAT... I have been looking at this for an hour without figuring it out. Any help is much appreciated!!!
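
    A minimal sketch of the query this usually calls for in MySQL (the table name is an assumption): group by both Product_ID and Type, and concatenate the colors within each group.

      SELECT Product_ID,
             Type,
             GROUP_CONCAT(Color SEPARATOR ', ') AS Color
      FROM   products
      GROUP  BY Product_ID, Type
      ORDER  BY Product_ID, Type;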

    Read the article

  • Python: Time a code segment for testing performance (with timeit)

    - by Mestika
    Hi, I have a Python script which works just as it should, but I need to record the time of the execution. I've googled and found that I should use timeit, but I can't seem to get it to work. My Python script looks like this: import sys import getopt import timeit import random import os import re import ibm_db import time from string import maketrans myfile = open("results_update.txt", "a") for r in range(100): rannumber = random.randint(0, 100) update = "update TABLE set val = %i where MyCount >= '2010' and MyCount < '2012' and number = '250'" % rannumber #print rannumber conn = ibm_db.pconnect("dsn=myDB","usrname","secretPWD") for r in range(5): print "Run %s\n" % r query_stmt = ibm_db.prepare(conn, update) ibm_db.execute(query_stmt) myfile.close() ibm_db.close(conn) What I need is the time it takes to execute the query, written to the file "results_update.txt". The purpose is to test an update statement for my database with different indexes and tuning mechanisms. Sincerely, Mestika

    Read the article

  • Dynamic upsert in postgresql

    - by Daniel
    I have this upsert function that allows me to modify the fill_rate column of a row. CREATE FUNCTION upsert_fillrate_alarming(integer, boolean) RETURNS VOID AS ' DECLARE num ALIAS FOR $1; dat ALIAS FOR $2; BEGIN LOOP -- First try to update. UPDATE alarming SET fill_rate = dat WHERE equipid = num; IF FOUND THEN RETURN; END IF; -- Since it is not there, we try to insert the key. -- Notice that if there were a concurrent key insertion we would get an error. BEGIN INSERT INTO alarming (equipid, fill_rate) VALUES (num, dat); RETURN; EXCEPTION WHEN unique_violation THEN -- Loop and try the update again END; END LOOP; END; ' LANGUAGE 'plpgsql'; Is it possible to modify this function to take a column argument as well? Extra bonus points if there is a way to modify the function to take a column and a table.
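
    A sketch of one way to generalise it (untested against the author's schema; the function name is made up, and EXECUTE ... USING assumes PostgreSQL 8.4 or later): build the two statements dynamically with quote_ident so the column, and optionally the table, are passed in as text.

      CREATE FUNCTION upsert_flag(tbl text, col text, num integer, dat boolean) RETURNS void AS $$
      BEGIN
        LOOP
          EXECUTE 'UPDATE ' || quote_ident(tbl)
               || ' SET ' || quote_ident(col) || ' = $1 WHERE equipid = $2'
            USING dat, num;
          IF FOUND THEN RETURN; END IF;
          BEGIN
            EXECUTE 'INSERT INTO ' || quote_ident(tbl)
                 || ' (equipid, ' || quote_ident(col) || ') VALUES ($2, $1)'
              USING dat, num;
            RETURN;
          EXCEPTION WHEN unique_violation THEN
            -- another session inserted the key first; loop and retry the UPDATE
          END;
        END LOOP;
      END;
      $$ LANGUAGE plpgsql;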

    Read the article

  • How to generate .json file with PHP?

    - by Srinivas Tamada
    CREATE TABLE Posts ( id INT PRIMARY KEY AUTO_INCREMENT, title VARCHAR(200), url VARCHAR(200) ) json.php code: <?php $sql=mysql_query("select * from Posts limit 20"); echo '{"posts": ['; while($row=mysql_fetch_array($sql)) { $title=$row['title']; $url=$row['url']; echo ' { "title":"'.$title.'", "url":"'.$url.'" },'; } echo ']}'; ?> I have to generate a results.json file.

    Read the article

  • Sql Server performance

    - by Jose
    I know that I can't get a specific answer to my question, but I would like to know if I can find the tools to get to my answer. OK, we have a SQL Server 2008 database that for the last 4 days has had moments where it becomes unresponsive for specific queries for 5-20 minutes. For example, the following queries, run simultaneously in different query windows, have the following results: SELECT * FROM Assignment -- hangs indefinitely; SELECT * FROM Invoice -- works fine. Many of the tables have non-clustered indexes to help speed up SELECTs. Here's what I know: 1) The same query will either hang indefinitely or run normally. 2) In Activity Monitor, in the processes tab, there are normally around 80-100 processes running. I think what's happening is: 1) A user updates a table. 2) This causes one or more indexes to get updated. 3) Another user issues a select while the index is updating. Is there a way I can figure out why, at a specific moment in time, SQL Server is being unresponsive for a specific query?
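
    A sketch of one way to look at this while a query is hanging (standard SQL Server 2008 DMVs, run from a separate query window): it shows what each blocked request is waiting on and which session is blocking it, which usually distinguishes lock blocking from other causes.

      SELECT r.session_id, r.status, r.command, r.wait_type, r.wait_time,
             r.blocking_session_id, t.text AS running_sql
      FROM sys.dm_exec_requests AS r
      CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
      WHERE r.blocking_session_id <> 0      -- requests blocked by another session
         OR r.wait_time > 5000;             -- or waiting more than 5 seconds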

    Read the article

  • Performing multiple fetches in Core Data within the same view

    - by yesimarobot
    I have my Core Data store set up and everything is working. Once my initial fetch is performed, I need to perform several fetches based on calculations using the data from my first fetch. The examples provided by Apple are great and helped me get everything going, but I'm struggling with executing successive fetches. Any suggestions or tutorial links are appreciated. The table view loads data from the Core Data store. When a user taps a row it pushes a detail view. The detail view loads details from Core Data. [THE ABOVE STEPS ARE ALL WORKING] I perform several calculations on the data fetched in the detail view. I then need to perform several other fetches based on the results of my calculations.

    Read the article

  • CSS - Overlapping divs

    - by Jens Törnell
    I have no code to start with. I want to add 2 divs overlapping each other and then use the new CSS3 rotate function. The effect I want to create is shown on this page. Requirements: I don't want to use images; I don't mind using CSS3; it should be easy to align the whole thing in the center (which makes it harder to use position: absolute;); there is going to be content below the boxed content (which also makes it harder to use position: absolute;); if it's possible without too much position: absolute;, so much the better; and I prefer table-free solutions. Have fun!

    Read the article

  • Can't store Data URI to database without stripping + characters

    - by citizencane
    I am trying to grab a reference to images with src's in URI scheme. An example would be the images on google.com/news. if I alert(escape(saveObj.image)); I get something like below: data%3Aimage/jpeg%3Bbase64%2C/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCABQAFADASIAAhEBAxEB/8QAHAAAAQUBAQEAAAAAAAAAAAAABgIDBAUHAQAI/8QAPhAAAgECBAMFBgIGCwEAAAAAAQIDBBEABRIhEzFhBkFRcYEUIjKRobFCwRUjJFKC0QcWJSZiY3Jzg7Lw4f/EABoBAAIDAQEAAAAAAAAAAAAAAAMEAAIFBgH/xAAmEQABBAEEAQMFAAAAAAAAAAABAAIDESEEEjFBBRMisVFhcZGh/9oADAMBAAIRAxEAPwAr7L5pD2gyY5JXEtLGAFY/EU2sR1U2+nXF/pZFKuffViGPW5ximQUEz1cNdPNKms6g8TlWBufDcHyxsdLUmqoYqhiWZ1BYtsSe+/ I pass that from the js file and am using django to get that into a mysql table of type utf8_unicode_ci using modelform.save, but when i examine what's in the database, I see: data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCABQAFADASIAAhEBAxEB/8QAHAAAAQUBAQEAAAAAAAAAAAAABgIDBAUHAQAI/8QAPhAAAgECBAMFBgIGCwEAAAAAAQIDBBEABRIhEzFhBkFRcYEUIjKRobFCwRUjJFKC0QcWJSZiY3Jzg7Lw4f/EABoBAAIDAQEAAAAAAAAAAAAAAAMEAAIFBgH/xAAmEQABBAEEAQMFAAAAAAAAAAABAAIDESEEEjFBBRMisVFhcZGh/9oADAMBAAIRAxEAPwAr7L5pD2gyY5JXEtLGAFY/EU2sR1U2 nXF/pZFKuffViGPW5ximQUEz1cNdPNKms6g8TlWBufDcHyxsdLUmqoYqhiWZ1BYtsSe The key difference is that in my database all of the '+' characters from the original have been stripped and replaced with spaces. Any ideas? I'm going blind trying to figure this out! :P

    Read the article

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250MB, and when converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.

    Read the article

  • Entity Framework 4 - Delete Object

    - by GibboK
    I have 3 tables in my database: CmsMasterPages, CmsMasterPagesAdvSlots (a pure junction table) and CmsAdvSlots. Here is a picture of my EDM. I need to find all CmsAdvSlot objects connected with a CmsMasterPage (that part is working in the code posted below), and DELETE the result (the CmsAdvSlots) from the database. My problem is that I am not able to DELETE these objects once I have found them. Error: The object cannot be deleted because it was not found in the ObjectStateManager. int findMasterPageId = Convert.ToInt32(uxMasterPagesListSelector.SelectedValue); CmsMasterPage myMasterPage = context.CmsMasterPages.FirstOrDefault(x => x.MasterPageId == findMasterPageId); var resultAdvSlots = myMasterPage.CmsAdvSlots; // It is working until here foreach (var toDeleteAdv in resultAdvSlots) { context.DeleteObject(myMasterPage.CmsAdvSlots.Any()); // ERROR HERE!! context.SaveChanges(); } Any idea how to solve it? Thanks for your time! :-)

    Read the article

  • Oracle Triggers Query..

    - by AGeek
    Let's consider a table STUD with a row-level trigger implemented on INSERT. My scenario goes like this: whenever a row is inserted, the trigger is fired and should access a script file that is placed on the hard disk, and ultimately should print the result. So, is this possible? And if yes, it should work dynamically, i.e. if we change the content of the script file, Oracle should reflect those changes as well. I have tried doing this for Java using external procedures, but I wasn't that satisfied with the result. Kindly give your point of view for this kind of scenario and the ways it can be implemented.
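
    A minimal sketch of the trigger side only (run_script and the path are placeholders; the procedure that actually reads and runs the script file, for example a Java stored procedure or an external procedure, still has to be written separately):

      CREATE OR REPLACE TRIGGER stud_after_insert
      AFTER INSERT ON STUD
      FOR EACH ROW
      BEGIN
        -- delegate to a separately defined procedure that wraps the external call;
        -- because the script file is re-read on every call, edits to it take effect immediately
        run_script('/path/to/script');
      END;
      /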

    Read the article

  • Oracle: delete suddenly taking a long time

    - by Damo
    Hi, we have a feed process which runs every day of the year. As part of that we delete every row from a table (approx 1 million rows) every day, repopulate it using 5 different stored procedures and then commit the transaction. This is the only commit statement that we call. All of a sudden the delete has started taking about 2 hours to complete. The delete is also very simple (delete from T_PROFILE_WORK). This has worked perfectly well for the past year, but in the past week I have noticed this issue. Any help on this is greatly appreciated. Thanks, Damien
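
    Two things worth checking, sketched below as suggestions rather than a diagnosis: what the session is actually waiting on while the delete runs, and whether TRUNCATE is an option for the daily wipe (bearing in mind that TRUNCATE is DDL and commits, so it cannot sit inside the single end-of-feed transaction described above).

      -- while the slow delete is running, check its wait event from another session
      -- (:delete_sid is a placeholder for the SID of the session doing the delete)
      SELECT sid, event, wait_class, seconds_in_wait
      FROM   v$session
      WHERE  sid = :delete_sid;

      -- if the wipe never needs to be rolled back, this avoids generating undo for ~1M rows
      TRUNCATE TABLE T_PROFILE_WORK;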

    Read the article

  • Query filter design for string field

    - by Midhat
    A field in my table can have arbitrary strings. On the UI there is a dropdown with options like All, Value1 and Value2, and the results are filtered by the selected option value. So far this is easy, and adding new filters to the UI is not a problem and needs no changes in my stored procedure. Now I want to have an "Others" option here as well, which should return rows whose column value is neither Value1 nor Value2. Apparently this will require a "not in" operator in my query, which will make maintenance difficult, as the list of values is likely to change. Any suggestions or design tips?
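
    One commonly suggested shape for this, sketched with assumed names: keep the dropdown values in a small lookup table, so "Others" is expressed against that table instead of a hard-coded NOT IN list, and a new value only requires an insert.

      CREATE TABLE filter_values (filter_value VARCHAR(100) PRIMARY KEY);

      -- "Others": rows whose field is not one of the known dropdown values
      SELECT t.*
      FROM   my_table AS t
      WHERE  NOT EXISTS (SELECT 1
                         FROM   filter_values AS f
                         WHERE  f.filter_value = t.my_field);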

    Read the article

  • SQL Server JOIN with optional NULL values

    - by Paul McLoughlin
    Imagine that we have two tables as follows: Trades ( TradeRef INT NOT NULL, TradeStatus INT NOT NULL, Broker INT NOT NULL, Country VARCHAR(3) NOT NULL ) CTMBroker ( Broker INT NOT NULL, Country VARCHAR(3) NULL ) (These have been simplified for the purpose of this example.) Now, if we wish to join these two tables on the Broker column and, where a country exists in the CTMBroker row, also on the Country column, we have the following two choices: SELECT T.TradeRef,T.TradeStatus FROM Trades AS T JOIN CTMBroker AS B ON B.Broker=T.Broker AND ISNULL(B.Country, T.Country) = T.Country or SELECT T.TradeRef,T.TradeStatus FROM Trades AS T JOIN CTMBroker AS B ON B.Broker=T.Broker AND (B.COUNTRY=T.Country OR B.Country IS NULL) These are both logically equivalent; however, in this specific circumstance for our database (SQL Server 2008, SP1), two different execution plans are produced for these two queries, with the second version significantly outperforming the first in terms of both time and logical reads. My question really is as follows: as a general rule, would (2) be preferred to (1), or does this just happen to be exploiting some particular idiosyncrasy of the optimiser in 2008 SP1 (that could therefore change with future versions of SQL Server)?

    Read the article

  • How do you encode an apostrophe so that it's searchable in mysql?

    - by Yegor
    I don't think that was the clearest question, but an example should make it a little clearer. I have a table filled with movie names, some of which contain apostrophes. I have a search box which is used to find movies. If I perform searches via mov_title = '$search_keywords' it all works, but this method will not yield any results for partial searches, so I have to use this: mov_title LIKE '%$search_keywords%' This method works fine for titles that are A-Za-z0-9, but if a title has an apostrophe, it's not able to find the movie, even if I do an exact match. Before the titles are stored in the DB, I put them through this: $search_keywords = htmlspecialchars(mysql_escape_string($_GET["search_keywords"])); So in the DB, there is a backslash before every single apostrophe. The only way to match a movie title with an apostrophe is to physically put a backslash in front of the apostrophe in the search box. This seems so trivial, and I'm sure the solution is painfully obvious, but I'm just not seeing it.

    Read the article

  • What is the best way to reject messages with the same body in AMQ queue?

    - by archer
    I have a single AMQ queue that receives simple messages with string bodies. Suppose I'm sending CLSIDs as message bodies. The CLSIDs may not be unique, but I'd like to reject all messages with non-unique bodies and keep only a single instance of such messages in the queue. Is there any simple way to do it? Currently I'm using a workaround: messages from the queue are consumed by a processor that tries to insert the bodies into a simple DB table with a UNIQUE constraint applied to the message_body field. If the processor inserts the message successfully, it's assigned to exchange.out.body and sent to the other queue. If a ConstraintViolationException is thrown, nothing is resent to the other queue. I would like to know whether AMQ supports something similar out of the box.

    Read the article

  • making a combined sum of two columns

    - by bsandrabr
    I have a table (apples) containing:

    cid | date_am | date_pm
    ----+---------+--------
     1  |    1    |    1
     2  |    2    |    1
     3  |    1    |    3
     1  |    1    |    2

    I asked a question earlier (badly) about how I would rank the customers in order of the number of ones (1) they had. The solution was (based on one column): SELECT cid, sum( date_pm ) AS No_of_ones FROM apples WHERE date_am =1 GROUP BY cid ORDER BY no_of_ones DESC This works great for one column, but how would I do the same for the sum of the two columns? i.e. SELECT cid, sum( date_pm ) AS No_of_ones FROM apples WHERE date_am =1 added to SELECT cid, sum( date_am ) AS No_of_ones FROM apples WHERE date_pm =1 GROUP BY cid ORDER BY no_of_ones (added). Hope I've managed to make that clear enough for you to help - thanks
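
    A sketch of one way to express the combined ranking (treating "added" as counting a 1 in either column, which is an assumption about the intent):

      SELECT cid,
             SUM(CASE WHEN date_am = 1 THEN 1 ELSE 0 END) +
             SUM(CASE WHEN date_pm = 1 THEN 1 ELSE 0 END) AS no_of_ones
      FROM   apples
      GROUP  BY cid
      ORDER  BY no_of_ones DESC;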

    Read the article

  • Help me write a nicer SQL query in Rails

    - by Sainath Mallidi
    Hi, I am trying to write an SQL query to update some of the attributes that are regularly pulled from a source. The output will be a text file with the following fields: author, title, date, popularity. I have two tables to update: one is the author information and the other is a popularity table, and the Author ActiveRecord object has one popularity. Currently I'm doing it like this: arr.each { |x| x = x.split(" "); results = Author.find_by_sql("SELECT authors.id FROM authors, priorities WHERE authors.id=popularity.authors_id AND authors.author = x[0]"); results[0].popularity.update_attribute("popularity", x[3]) } I need two tables because the popularity keeps changing, and I need only the top 1000 popular ones, but I still need to keep the previously popular ones also. Is there any nicer way to do this, instead of one query for every new object? Thanks.
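
    One commonly used alternative, sketched in SQL with assumed table and column names (and assuming MySQL for the multi-table UPDATE syntax): bulk-load the parsed feed into a staging table, then update all the popularity rows in one statement instead of one query per line of the file.

      CREATE TEMPORARY TABLE popularity_feed (author VARCHAR(255), popularity INT);

      -- ...bulk-load the parsed text file into popularity_feed (e.g. LOAD DATA INFILE)...

      UPDATE popularities AS p
      JOIN   authors         AS a ON a.id = p.author_id
      JOIN   popularity_feed AS f ON f.author = a.author
      SET    p.popularity = f.popularity;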

    Read the article

  • Creating a stored procedure in SQL Server 2008 that will do a "facebook search"

    - by dig
    Hello, I'm trying to implement a Facebook-style search in my system (auto-suggest while typing). I've managed to code all the AJAX stuff, but I'm not sure how to query the database. I've created a table called People which contains the fields ID, FirstName, LastName, MiddleName and Email. I've also created a full-text index on all those fields. I want to create a stored procedure that takes the text entered in the query box as a parameter and returns the suggestions. For example, when I type the query "Joh Do" in the textbox, it should translate to the query: select * from People where contains(*, '"Joh*"') and contains(*, '"Do*"') Is there a way to do that in a stored procedure? P.S. I've tried the syntax select * from People where contains(*,'"Joh*" and "Do*"') but it didn't return the expected results, probably because it needs to search for the words in different fields. Is there a way to fix that? Thanks.
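
    A sketch of one way to build that statement inside a procedure (procedure and variable names are made up, and the string handling assumes single-space-separated words with no embedded quotes): emit one CONTAINS(*, ...) per typed word, joined with AND, so each word can be satisfied by a different column (which, per the observation above, a single CONTAINS with AND was not doing), and run the result with sp_executesql.

      CREATE PROCEDURE SearchPeople @SearchText NVARCHAR(200)
      AS
      BEGIN
          DECLARE @Sql NVARCHAR(MAX);
          -- 'Joh Do' becomes: ... WHERE CONTAINS(*, '"Joh*"') AND CONTAINS(*, '"Do*"')
          SET @Sql = N'SELECT ID, FirstName, LastName, MiddleName, Email FROM People WHERE CONTAINS(*, ''"'
                    + REPLACE(@SearchText, N' ', N'*"'') AND CONTAINS(*, ''"')
                    + N'*"'')';
          EXEC sp_executesql @Sql;
      END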

    Read the article
