Search Results

Search found 31931 results on 1278 pages for 'sql statement'.


  • Debugging (displaying) SQL command sent to the db by SQLAlchemy

    - by morpheous
    I have an ORM class called Person, which wraps around a person table. After setting up the connection to the db etc., I run the following statement: people = session.query(Person).all() The person table does not contain any data (as yet), so when I print the variable people, I get an empty list. I then renamed the table referred to in my ORM class to people_foo (which does not exist) and ran the script again. I was surprised that no exception was thrown when attempting to access a table that does not exist. I therefore have the following two questions: How may I set up SQLAlchemy so that it propagates db errors back to the script? How may I view (i.e. print) the SQL that is being sent to the db engine? If it helps, I am using PostgreSQL as the db.

    Read the article

  • How to use PHP variables (arrays) in MySQL select statements?

    - by davidconn
    Hi everybody, how do you use a PHP variable (array) inside a MySQL SELECT statement? I am designing an auction site and am currently working on a page that lets people view a list of all the current bids for an item. I want to display 3 columns: amountbid - the amount each bidder has bid for the item (held in tblbid); bidderid - the id of each bidder who has bid (found in tbluser); total_positivity_feedback - how many users have left positive feedback for the bidder (calculated from tblfeedback). To find the 'amount' and 'bidderid' columns I pass the essayid URL parameter from the previous page, which works fine. Despite this, I can't display the total_positivity_feedback column for the various users who have made each bid. My MySQL query looks like this: select tblbid.bidderid, tblbid.amount, (select count(tblfeedback.feedbackid) from tblfeedback WHERE tblfeedback.writerid = "ARRAY VARIABLE GOES HERE") AS total_positivity_feedback FROM tblbid WHERE tblbid.essayid = $essayid_bids I assume that the only way to accomplish this is to make the variable contain the bidderids of those people who bid for that particular essay? I can't seem to work out how to do this, though. Many thanks for your help.
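    One way to sidestep the PHP array entirely is to let MySQL compute the per-bidder count with a join. This is only a sketch: it reuses the table and column names from the question, assumes tblfeedback.writerid holds the id of the bidder the positive feedback is about, and uses :essayid as a placeholder for the bound URL parameter.

      -- Sketch: per-bidder feedback count via a join instead of a per-row array lookup.
      SELECT b.bidderid,
             b.amount            AS amountbid,
             COUNT(f.feedbackid) AS total_positivity_feedback
      FROM tblbid AS b
      LEFT JOIN tblfeedback AS f ON f.writerid = b.bidderid
      WHERE b.essayid = :essayid
      GROUP BY b.bidderid, b.amount;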

    Read the article

  • Stairway to Transaction Log Management in SQL Server, Level 6: Managing the Log in BULK_LOGGED Recovery Model

    A DBA may consider switching a database to the BULK_LOGGED recovery model in the short term during, for example, bulk load operations. When a database is operating in the BULK_LOGGED model, these and a few other operations, such as index rebuilds, can be minimally logged and will therefore use much less space in the log.
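    As a rough illustration of the pattern the level describes (the database name and backup path below are placeholders, not taken from the article):

      -- Switch to BULK_LOGGED around the bulk operation, then return to FULL and
      -- take a log backup so point-in-time recovery is re-established.
      ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

      -- ... run the bulk load / index rebuild here ...

      ALTER DATABASE SalesDB SET RECOVERY FULL;
      BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';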

    Read the article

  • Seeking assistance with Escaping Data for MySQL queries

    - by JM4
    Please don't send me a link to php.net referencing mysql_real_escape_string as the only response. I have read through that page and, while I understand the general concepts, I am having some trouble based on how my INSERT statement is currently built. Today I am using the following: $sql = "INSERT INTO tablename VALUES ('', '$_SESSION['Member1FirstName'], '$_SESSION['Member1LastName'], '$_SESSION['Member1ID'], '$_SESSION['Member2FirstName'], '$_SESSION['Member2LastName'], '$_SESSION['Member2ID'] .... and the list goes on for 20+ members, with some other values entered. It seems most people in the examples already have all their data stored in an array. On my site, I accept form inputs, the form's action="" is set to self, PHP validation takes place, and if validation passes the data is stored in SESSION variables on page 2 and the user is redirected to the next page in the process (page 3) (approximately 8-10 pages in the whole process). Thanks!
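    For what it's worth, the escaping question largely disappears when the values travel as bound parameters instead of being spliced into the string; in PHP that would normally mean mysqli or PDO prepared statements, but the idea can be sketched in MySQL's own prepared-statement syntax (the column names below are illustrative, not the poster's real schema):

      -- Server-side prepared statement: the INSERT text and the values stay
      -- separate, so nothing user-supplied is pasted into the SQL string.
      PREPARE ins FROM
        'INSERT INTO tablename (Member1FirstName, Member1LastName) VALUES (?, ?)';
      SET @fn = 'Jane';
      SET @ln = 'Doe';
      EXECUTE ins USING @fn, @ln;
      DEALLOCATE PREPARE ins;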

    Read the article

  • Multiple column Union Query without duplicates

    - by Adam Halegua
    I'm trying to write a Union Query with multiple columns from two different tables (duh), but for some reason the second column of the second Select statement isn't showing up in the output. I don't know if that paints the picture properly, but here is my code: Select empno, job From EMP Where job = 'MANAGER' Union Select empno, empstate From EMPADDRESS Where empstate = 'NY' Order By empno The output looks like: EMPNO JOB 4600 NY 5300 MANAGER 5300 NY 7566 MANAGER 7698 MANAGER 7782 MANAGER 7782 NY 7934 NY 9873 NY Instead of 5300 and 7782 appearing twice, I thought empstate would appear next to job in the output. For all other empnos I thought the values in the fields would be (null). Am I not understanding Unions correctly, or is this how they are supposed to work? Thanks for any help in advance.
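    For what the poster expected to see (one row per empno, with job filled only for managers and empstate filled only for NY rows), a join rather than a union is the usual shape. A sketch, assuming EMPADDRESS also carries an empno column to join on:

      -- UNION stacks rows; a JOIN lines the columns up side by side.
      SELECT e.empno,
             CASE WHEN e.job = 'MANAGER' THEN e.job END AS job,
             a.empstate
      FROM EMP e
      LEFT JOIN EMPADDRESS a
             ON a.empno = e.empno AND a.empstate = 'NY'
      ORDER BY e.empno;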

    Read the article

  • Determining child count of path

    - by sqlnewbie
    I have a table with a 'path' column and I would like to update the table's 'child_count' column so that I get the following output. path | child_count --------+------------- | 5 /a | 3 /a/a | 0 /a/b | 1 /a/b/c | 0 /b | 0 My present solution - which is way too inefficient - uses a stored procedure as follows: CREATE FUNCTION child_count() RETURNS VOID AS $$ DECLARE parent VARCHAR; BEGIN FOR parent IN SELECT path FROM my_table LOOP DECLARE tokens VARCHAR[] := REGEXP_SPLIT_TO_ARRAY(parent, '/'); str VARCHAR := ''; BEGIN FOR i IN 2..ARRAY_LENGTH(tokens, 1) LOOP UPDATE my_table SET child_count = child_count + 1 WHERE path = str; str := str || '/' || tokens[i]; END LOOP; END; END LOOP; END; $$ LANGUAGE plpgsql; Does anyone know of a single UPDATE statement that does the same thing?
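    A correlated-subquery UPDATE can produce the same counts in one statement. A PostgreSQL sketch, assuming every descendant's path starts with its ancestor's path followed by '/' (as in the sample data, where child_count counts all descendants):

      -- One statement: count every row whose path sits underneath this row's path.
      UPDATE my_table AS p
      SET child_count = (
          SELECT COUNT(*)
          FROM my_table AS c
          WHERE c.path LIKE p.path || '/%'
      );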

    Read the article

  • Count number of results in a View

    - by Jukebox
    I need to count how many people belong in pre-defined groups (this is easy to do in SQL using the SELECT COUNT statement). My Views query runs fine and displays the actual data in my table, but I simply need to know how many results it found. However, there doesn't seem to be a COUNT option in Views. I am guessing I am going to have to use some sort of Views hook and then stick the result in the table. Here's a quick example of what I'm trying to achieve: My Table ---------------------- Group A | 20 people Group B | 63 people and so on. (I've tried using the Views_Calc module, but I get errors because it is not quite stable yet.) Does anybody know of an easy way to count results in Views?
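    For context, the figure the poster is after is an ordinary grouped count at the SQL level; whether Views can be persuaded to emit it is the real (module) question. The table and column names below are purely illustrative:

      -- Illustrative only: members per pre-defined group.
      SELECT group_name, COUNT(*) AS people
      FROM group_membership
      GROUP BY group_name;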

    Read the article

  • For LinqToSQL is 0 true or is 1 (for type Bit)?

    - by Vaccano
    I have a column of type Bit (called BBoolVal in this example). I have a LinqToSQL statement like this: var query = List<MyClass> myList = _ctx.DBList .Where(x => x.AGuidID == paramID) .Where(x => x.BBoolVal == false); When I look at the SQL, it ends up like this (I added the spacing and changed the names): SELECT [t0].[Id], [t0].[AGuidID], [t0].[OtherIDID], [t0].[TimeColumn], [t0].[BBoolVal], [t0].[MoreID] FROM [dbo].[MyTable] AS [t0] WHERE (NOT ([t0].[BBoolVal] = 1)) AND ([t0].[AGuidID] = @p0) Because my x.BBoolVal == false translates to [BBoolVal] == 1, I gather that false = 1 (and thus true = 0). I am asking because this seems a bit backwards to me. I am fine with accepting it; I just want to be sure.

    Read the article

  • SQL to LINQ translation problem

    - by ognjenb
    I have a problem converting this SQL statement to LINQ: SELECT a.Id, a.Name, a.ArtiklNumber, a.Notes, a.Weight, l.StartDate AS LastStartDate, l.LocationNameId, loc.Name AS CarrentLocation, a.Reserved FROM Accessories a LEFT OUTER JOIN Location l LEFT JOIN LocationName loc ON l.LocationNameId = loc.Id ON a.Id = (SELECT AccessoriesId FROM Location WHERE AccessoriesId = a.Id HAVING MAX(StartDate) = StartDate ) This is part of my translated code: testEntities6 accessoriesEntities = new testEntities6(); var max_StartDate = (from msd in accessoriesEntities.location from d in accessoriesEntities.device where msd.DeviceId == d.Id select msd.StartDate).Min(); var accessories_query = from accs in accessoriesEntities.accessories join l in accessoriesEntities.location on accs.Id equals l.AccessoriesId join loc in accessoriesEntities.locationname on l.LocationNameId equals loc.Id select new AccessoriesModel { //Accessories Id = accs.Id, Name = accs.Name, ArtiklNumber = accs.ArtiklNumber, Notes = accs.Notes, Weight = accs.Weight, Reserved = accs.Reserved, //Location LocationNameId = l.LocationNameId, StartDate = max_StartDate,//l.StartDate, //Locationname Loc_name = loc.Name };

    Read the article

  • SP: Random Records, Fave Record, Plus Known Record, NO repetition.

    - by Munklefish
    Hi, thanks to help from a lot of you guys I've been given the following code, which works great. However, I've realised I missed an important bit of info out of the question and so have reposted here (with updated code) to clarify. The following code gets 5 random records from a table, plus a further single record based on the user's favourite as identified in a second table: CREATE PROCEDURE web.getRandomCharities ( @tmp_ID bigint --members ID ) AS BEGIN WITH q AS ( SELECT TOP 5 * FROM TBL_CHARITIES WHERE cha_Active = 'TRUE' AND cha_Key != '1' ORDER BY NEWID() ) SELECT * FROM q UNION ALL SELECT TOP 1 * FROM ( SELECT * FROM TBL_CHARITIES WHERE TBL_CHARITIES.cha_Key IN ( SELECT members_Favourite FROM TBL_MEMBERS WHERE members_Id = @tmp_ID ) EXCEPT SELECT * FROM q ) tc END However, I realised I also need to include the record where cha_Key = '1' if it isn't the same as the record returned in the second SELECT statement in the code shown above. Hope that makes sense? Thanks!
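    One way to bolt that requirement on is a third branch, UNION ALL'd onto the existing query, that returns the cha_Key = '1' row only when it is not the member's favourite (the random five already exclude it). A sketch against the tables named in the question:

      -- Extra branch: include charity '1' unless it is the member's favourite
      -- (in which case the second SELECT has already returned it).
      SELECT *
      FROM TBL_CHARITIES
      WHERE cha_Key = '1'
        AND '1' NOT IN (SELECT members_Favourite
                        FROM TBL_MEMBERS
                        WHERE members_Id = @tmp_ID);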

    Read the article

  • DataSet.GetChanges - Save the updated record in a different table than the source one

    - by John Dev
    I'm doing operations on a DataSet containing data from a SQL table named Test_1 and then getting the updated records using the DataSet.GetChanges(DataRowState.Modified) function. Then I try to save the DataSet containing the updated records to a different table than the source one (the table is named Test and has the same structure as Test_1) using the following statement: sqlDataAdapter.Update(changesDataSet,"Test"); I'm getting the following error: Update unable to find TableMapping['Test'] or DataTable 'Test'. I'm new to ADO.NET and don't even know if it's something possible. Any advice is welcome. Just to provide a bit of context: ETL jobs import data into temp tables with the same structure as the original but with a _jobid suffix. Right after, a rule engine does validation before updating the original table.

    Read the article

  • Getting Results from a Web SQL database

    - by andrew8088
    I'm playing around with the new Web SQL databases. Is there a way to return results from a SELECT statement? Here's my example: function getTasks (list) { db.transaction(function (tx) { list = list || 'inbox'; tx.executeSql("SELECT * FROM tasklist WHERE list = ?", [list], function (tx, results) { var retObj = [], i, len = results.rows.length; for ( i = 0; i < len; i++ ) { retObj[i] = results.rows.item(i); } return retObj; }); }); } The getTasks function is returning before the success callback does; is there a way to get the results out of the executeSql method, or do I have to do all the processing within the callback?

    Read the article

  • Why am I getting a MySQL error?

    - by John Hoffman
    Here is my query. Its intention is to allow access to properties of the animals that constitute a match of two animals. The match table contains columns for animal1ID and animal2ID to store which animals constitute the match. SELECT id, (SELECT * FROM animals WHERE animals.id=matches.animal1ID) AS animal1, (SELECT * FROM users WHERE animals.id=matches.animalID) AS animal2 FROM matches WHERE id=5 However, MySQL returns this error: Operand should contain 1 column(s). Why? Is there an alternative way to do this, perhaps with a JOIN statement?
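    The error arises because a subquery used in a column position may return only a single column, while SELECT * returns several. The usual reshaping is to join the animals table once per side of the match; a sketch, assuming the second subquery was meant to reference matches.animal2ID and that animals has a name column:

      -- Join animals twice, once per side of the match; pick explicit columns.
      SELECT m.id,
             a1.name AS animal1_name,
             a2.name AS animal2_name
      FROM matches m
      JOIN animals a1 ON a1.id = m.animal1ID
      JOIN animals a2 ON a2.id = m.animal2ID
      WHERE m.id = 5;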

    Read the article

  • Query "where clause" fails when calling a function

    - by guest1
    Hi all, I have a function in Access VBA that takes four parameters. The fourth parameter is a "where clause" that I use in an SQL statement inside the function. The function fails when I include the fourth parameter (the where clause); when I remove it, the function works fine. I am not sure if there is anything wrong with the syntax of the fourth parameter. Please help. Here is the function as called in the query: FunctionA('Table1','Field1',0.3,'Field2=#' & [Field2] & '# and Value3="' & [Value3] & '"') AS Duration_Field

    Read the article

  • SQLite delete the last 25% of records in a database.

    - by Steven smethurst
    I am using a SQLite database to store values from a data logger. The data logger will eventually fill up all the available hard drive space on the computer. I'm looking for a way to remove the oldest 25% of the logs from the database once it reaches a certain limit. I am using the following code: $ret = Query( 'SELECT id as last FROM data ORDER BY id desc LIMIT 1 ;' ); $last_id = $ret[0]['last'] ; $ret = Query( 'SELECT count( * ) as total FROM data' ); $start_id = $last_id - $ret[0]['total'] * 0.75 ; Query( 'DELETE FROM data WHERE id < '. round( $start_id, 0 ) ); A journal file gets created next to the database and fills up the remaining space on the drive until the script fails. How can I stop this journal file from being created? And is there any way to combine all three SQL queries into one statement?
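    The arithmetic can be folded into a single DELETE, and SQLite's journal behaviour is controlled by a pragma; a sketch (note that turning the rollback journal off trades away crash safety):

      -- Avoid the on-disk rollback journal (OFF or MEMORY); this reduces the
      -- database's ability to recover from a crash mid-transaction.
      PRAGMA journal_mode = OFF;

      -- One statement instead of three: delete everything more than 75% of the
      -- row count below the newest id.
      DELETE FROM data
      WHERE id < (SELECT MAX(id) - CAST(COUNT(*) * 0.75 AS INTEGER) FROM data);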

    Read the article

  • filestream restore very slow to DR server

    - by Jim
    We are backing up a database containing 100GB of filestream data. The backup takes under two hours to write. Restoring the same database to the DR environment is taking over forty hours. We don't have the same problem with other (non-filestream) databases, including some that are much larger. How do we get to the bottom of this problem? The database is in full recovery mode, and we are doing a full backup. Thanks.

    Read the article

  • MS Access Premiere Products Exercise

    - by rynwtts
    I am working with Microsoft Access, Premiere Products Exercises, for a college course. I can't seem to get past a specific question. We are working with DBDL and E-R diagrams. The question is: Indicate the changes you need to make to the design of the Premiere Products database to support the following situation. A customer is not necessarily represented by a single sales rep but can be represented by several sales reps. When a customer places an order, the sales rep who gets the commission on the order must be one of the collection of sales reps who represent the customer. In the database as it stands, each customer is represented by one sales rep, which yields a one-to-one relationship. I need to enable a customer to have several sales reps, and make it so that only those sales reps are eligible for commission on each order.
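    In relational terms the usual change is to move the rep assignment out of the customer table and into a junction table, and to have the order reference that pair. A rough DDL sketch with illustrative names and types (the real Premiere Products columns may differ):

      -- One row per (customer, rep) pairing, so a customer can have several reps.
      CREATE TABLE CustomerRep (
          CustomerNum CHAR(3) NOT NULL,
          RepNum      CHAR(2) NOT NULL,
          PRIMARY KEY (CustomerNum, RepNum),
          FOREIGN KEY (CustomerNum) REFERENCES Customer (CustomerNum),
          FOREIGN KEY (RepNum)      REFERENCES Rep (RepNum)
      );

    The order table would then carry RepNum alongside CustomerNum, with a foreign key to CustomerRep, so the rep earning the commission is guaranteed to be one of the customer's reps.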

    Read the article

  • DB2 insert performance - How to measure

    - by svrist
    [From Stack Overflow] I'm trying to find a way to speed up my inserts to DB2 9.7.1 (Ubuntu Linux). I'm watching vmstat and trying to gather some statistics via the db2 get snapshot commands, but I'm not able to figure out which numbers I'm looking for to see where the trouble is. I've read lots of material like http://www.eggheadcafe.com/software/aspnet/35692526/question-multiple-row-in.aspx and http://www.ibm.com/developerworks/data/library/tips/dm-0403wilkins/, and tricks like ALTER TABLE lalala APPEND ON work somewhat (the difference between a dd if=/dev/zero and an insert is still a factor of 10), but I would like to be able to find the counters or other performance indicators that actually show why it makes sense to use those tricks. For example: What is the metric called that shows me that buffer page allocation (the FSCR stuff) is the problem? Where do I see that the insert time is hampered by clustered indexes? I find db2top very useful, but I'm still searching for a more direct "this is your bottleneck" view.

    Read the article

  • MemCached on Windows x64

    - by Django Reinhardt
    This question has previously been asked, but that was a year ago and I wanted to know if there had been any developments since then. Basically we'd like to use a MemCached Server on a Windows Server 2008 R2 machine... which is only x64, obviously. I haven't found any details on a Win64 version of MemCached, but there is still the solution from the previous thread (which I haven't tried yet) to use a bit of software called MemCacheD Manager running MemCached 1.2.6. However, the current version of MCd is 1.4.4 and I was wondering if there had been any improvements since then.

    Read the article

  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi all, we use the built-in log shipping in SQL Server to log-ship to our DR site, but once a month we do a DR test, which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought no problem, I will script it, but I have run into trouble: it always complains that the final log shipment is too early to apply, even though I don't export the final log until putting the database into norecovery mode. Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd-party software (Red Gate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR and run another to get me back with no data loss. My scripts are very simplistic at the moment, but here they are: 2 servers, primary PARIS, secondary PARIST. StartAgentJobAndWait is a script written by someone else (ta) that just checks the jobs have finished, or quits if they never end. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names. From PARIS: /* Disable backup job */ exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0 exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0 exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0 exec PARIST.master.dbo.DRStage2 ParisT DRStage2 DECLARE @RetValue varchar (10) EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2 , 2 SELECT ReturnValue=@RetValue if @RetValue = 1 begin print 'The Copy Task completed Succesffuly' END ELSE print 'The Copy task failed, This may or may not be a problem, check restore state of database' SELECT @RetValue = 0 EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2 , 2 SELECT ReturnValue=@RetValue if @RetValue = 1 begin print 'The Restore Task completed Succesffuly' END ELSE print 'The Copy task failed, This may or may not be a problem, check restore state of database' exec PARIS.master.dbo.DRStage3 /* Do the last logship and move it to Trumpington */ BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping' EXEC PARIST.master.dbo.DRTransferFinish AS BEGIN restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery

    Read the article

  • Spreadsheet RDBMS

    - by John Nilsson
    I'm looking for software (or a set of software) that will let me combine spreadsheet and database workflows: data entry in a spreadsheet to enable simple entry from the clipboard, analysis based on joins, unions and aggregates, and pivot/data pilot summaries. So far I've only found either spreadsheets OR db applications, but no good combination. OO Base with Calc for tables doesn't support aggregates, for example; Google Spreadsheet + Visualization API doesn't support unions or joins; Zoho DB doesn't let me paste from the clipboard. Any hints on software that could be used? Basically I'm trying to do some analysis of my personal bank transactions. Problem 1, ETL: the data has to be moved from my bank to a database. My current solution is to manually copy and paste the data from my internet bank into one spreadsheet per account. Pains: not very scriptable; lots of scrolling to reach the point to paste; have to apply sorting and formatting to the pasted data each time. Problem 2, analysis: I then want to aggregate the different accounts in one sweep to track transfers per type of transfer over all accounts. The actual aggregation is still unsolved because I can't find a UNION equivalent in the spreadsheets I've tried.
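    Once each account's pasted rows land in a table of their own, the cross-account analysis described here is a plain union-plus-aggregate; the names below are illustrative only:

      -- Aggregate transfers per category across all accounts in one sweep.
      SELECT category, SUM(amount) AS total
      FROM (
          SELECT category, amount FROM account_checking
          UNION ALL
          SELECT category, amount FROM account_savings
      ) AS all_accounts
      GROUP BY category;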

    Read the article

  • Many partitions on a single filegroup? Does it make sense?

    - by river0
    Hi, I'm designing a data warehouse solution and I'm a newbie in disk configuration issues, so let me explain. Our storage is spread over 6 storage enclosures, each having 5 RAID-1 disk arrays, with 2 LUNs defined per disk array, which makes a total of 48 LUNs (this follows the Microsoft Fast Track recommendations for data warehouse architectures). I would like to partition my data; on other projects I have worked on, we always followed a 1 partition - 1 filegroup rule. The Microsoft Fast Track recommendations advise creating a filegroup and then, for that filegroup, a data file per LUN... but I intend to have week-level partitioning... if I apply that rule I think I'll get too many files and a complex layout. I'm thinking of just creating one filegroup (with the 48 LUN data files), but still creating the partitions, since I want to keep some of the benefits of partitions, like partition switching... Is this scenario not recommended? What would you suggest?
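    For reference, mapping every partition onto one filegroup is expressed with ALL TO in the partition scheme; a weekly sketch with illustrative boundary dates:

      -- Weekly partitions that all live on a single filegroup. Partition switching
      -- still works, since it only requires source and target to share a filegroup.
      CREATE PARTITION FUNCTION pfWeekly (date)
      AS RANGE RIGHT FOR VALUES ('2010-01-04', '2010-01-11', '2010-01-18');

      CREATE PARTITION SCHEME psWeekly
      AS PARTITION pfWeekly
      ALL TO ([PRIMARY]);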

    Read the article

  • How to find out where (or if) MySQL 5 logs are stored on a WHM/cPanel machine

    - by moi
    I have a WHM/cPanel reseller hosting account on a virtual private server (Linux), and I have root access to the machine via SSH. I am trying to locate a file that contains information that will help me determine which users have accessed which db and from which hosts. I would imagine this kind of data is stored in a log file somewhere. The MySQL page says: The general query log - Established client connections and statements received from clients. See: http://dev.mysql.com/doc/refman/5.0/en/server-logs.html It also says: By default, all log files are created in the mysqld data directory. So, I am NOT asking where the general query logs are stored (because I expect I will get answers saying "it depends"). Please help me work out: "How can I go about finding out where the MySQL general query log is stored on a Linux machine?" A couple of things I've already tried: I looked at /etc/my.cnf; it was a tiny file that only contained the following: [mysqld] skip-bdb skip-innodb set-variable = max_connections=500 safe-show-database I have also looked in /var/lib/mysql/ but could not see any log-like file names in that directory. Any clues on this would be most welcome.
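    One low-effort check is to ask the running server itself. On MySQL 5.1 and later the general query log is governed by server variables (on 5.0 it is only enabled via the --log option or my.cnf), for example:

      -- Where (and whether) the general query log would be written.
      SHOW VARIABLES LIKE 'general_log%';  -- general_log ON/OFF and general_log_file path
      SHOW VARIABLES LIKE 'log_output';    -- FILE, TABLE, or NONE
      SHOW VARIABLES LIKE 'datadir';       -- default directory for server log files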

    Read the article
