Search Results

Search found 29193 results on 1168 pages for 'sql merge'.

  • SQL Stored Queries - use result of query as boolean based on existence of records

    - by Christian Mann
    Just getting into SQL stored queries right now... anyway, here's my database schema (simplified for your convenience):

        member  (id INT PK)
        board   (id INT PK)
        officer (id INT PK)

    If you're into OOP, Officer inherits Board, which inherits Member. In other words, if someone is listed in the officer table, s/he is also listed in the board table and the member table. I want to find out the highest privilege level someone has. So far my SP looks like this:

        DELIMITER //
        CREATE PROCEDURE GetAuthLevel(IN targetID MEDIUMINT)
        BEGIN
          IF SELECT `id` FROM `member` WHERE `id` = targetID;
          THEN
            IF SELECT `id` FROM `board` WHERE `id` = targetID;
            THEN
              IF SELECT `id` FROM `officer` WHERE `id` = targetID;
              THEN RETURN 3; /*officer*/
              ELSE RETURN 2; /*board member*/
            ELSE RETURN 1; /*general member*/
          ELSE RETURN 0; /*not a member*/
        END //
        DELIMITER ;

    The exact text of the error is:

        #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'SELECT id FROM member WHERE id = targetID; THEN IF SEL' at line 4

    I suspect the issue is in the conditions of the IF blocks. What I want is to return true if the result set contains at least one row -- i.e. the id was found in the table. Do any of you see anything to fix here, or should I reconsider my database design into this?

        person (id INT PK, level SMALLINT)
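
    A minimal sketch of one way to express those existence checks, assuming MySQL: IF EXISTS (SELECT ...) is the usual idiom for "the result set has at least one row", and a stored FUNCTION (rather than a PROCEDURE) is what permits RETURN. Untested; the names simply mirror the question:

        DELIMITER //
        CREATE FUNCTION GetAuthLevel(targetID MEDIUMINT) RETURNS TINYINT
        READS SQL DATA
        BEGIN
          -- Check the most privileged table first, then fall through.
          IF EXISTS (SELECT 1 FROM `officer` WHERE `id` = targetID) THEN
            RETURN 3; -- officer
          ELSEIF EXISTS (SELECT 1 FROM `board` WHERE `id` = targetID) THEN
            RETURN 2; -- board member
          ELSEIF EXISTS (SELECT 1 FROM `member` WHERE `id` = targetID) THEN
            RETURN 1; -- general member
          END IF;
          RETURN 0; -- not a member
        END //
        DELIMITER ;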

  • Merge Microsoft Word documents with TortoiseSVN

    - by Álvaro G. Vicario
    TortoiseSVN ships a nice VBA script that lets you merge Microsoft Word documents using Word's built-in change-tracking functionality. This way, when I merge changes from a branch into the trunk I can resolve the conflicts in Word documents. However, the feature is not as useful as it could be, because it doesn't track revision changes; it just compares the two documents as a whole. So when I merge a revision where one paragraph was added to the document, I'm not offered just that paragraph to review. Instead, I have to review all the differences between the source and target documents (including stuff like TOC bookmark names). Is this an inherent limitation I cannot override? Or is it due to the fact that my Word version is pretty old? (I'm using Word 2002.) Also, if you know about a magic tool or plugin... ;-)

  • PHP PDO SQL Server Select statement not replacing question marks

    - by Metropolis
    A while ago I wrote a database class which uses PDO to connect to SQL Server databases and also to MySQL databases. It has always replaced the question marks fine when used with MySQL databases, but for the SQL Server database I had to create a workaround which replaces the question marks manually. Here is the code for that:

        if ($this->getPDODriver() == 'odbc' && !empty($values_a) && substr_count($query_s, "?") > 0) {
            $query_s = preg_replace(array_fill(0, substr_count($query_s, "?"), '/\?/'), $values_a, $query_s, 1);
            $values_a = NULL;
        }

    Now, I understand that this completely defeats the purpose of the question marks and PDO, but it has been working fine for me. What I would like to do now, though, is find out why the question marks are not getting replaced in the first place, and remove this workaround. If I have a select statement like the following:

        SELECT * FROM database WHERE value = ?

    That is what the query looks like when I go to prepare it, but when I display the query results, I get a blank array. Just remember, this class works fine with MySQL, and it works fine with the workaround above. So I know it has something to do with the question marks.

  • SQL Server Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that has (currently) 35 million rows. I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions:

    1. Does a full-text index work well for proper names?
    2. If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.)
    3. Is there some other system (like Lucene.net) that would be better?

    Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm using SQL Server 2008 currently.

    EDIT: I want to add that I'm very interested in solutions that will deal with things like commonly misspelled names, e.g. 'smythe' vs. 'smith', as well as first names, e.g. 'tomas' vs. 'thomas'.

    Query plan:

        |--Parallelism(Gather Streams)
           |--Nested Loops(Inner Join, OUTER REFERENCES:([testdb].[dbo].[Test].[Id], [Expr1004]) OPTIMIZED WITH UNORDERED PREFETCH)
              |--Hash Match(Inner Join, HASH:([testdb].[dbo].[Test].[Id])=([testdb].[dbo].[Test].[Id]))
              |  |--Bitmap(HASH:([testdb].[dbo].[Test].[Id]), DEFINE:([Bitmap1003]))
              |  |  |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id]))
              |  |     |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_LastName]), SEEK:([testdb].[dbo].[Test].[LastName] >= 'WHITDþ' AND [testdb].[dbo].[Test].[LastName] < 'WHITF'), WHERE:([testdb].[dbo].[Test].[LastName] like 'WHITE%') ORDERED FORWARD)
              |  |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id]))
              |     |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_FirstName]), SEEK:([testdb].[dbo].[Test].[FirstName] >= 'THOMARþ' AND [testdb].[dbo].[Test].[FirstName] < 'THOMAT'), WHERE:([testdb].[dbo].[Test].[FirstName] like 'THOMAS%' AND PROBE([Bitmap1003],[testdb].[dbo].[Test].[Id],N'[IN ROW]')) ORDERED FORWARD)
              |--Clustered Index Seek(OBJECT:([testdb].[dbo].[Test].[PK__TEST__3214EC073B95D2F1]), SEEK:([testdb].[dbo].[Test].[Id]=[testdb].[dbo].[Test].[Id]) LOOKUP ORDERED FORWARD)

    SQL for the above:

        SELECT *
        FROM testdb.dbo.Test
        WHERE LastName LIKE 'WHITE%' AND FirstName LIKE 'THOMAS%'

    Based on advice from Mitch, I created an index like this:

        CREATE INDEX IX_Test_Name_DOB
        ON Test (LastName ASC, FirstName ASC, BirthDate ASC)
        INCLUDE (and here I list the other columns)

    My searches are now incredibly fast for my typical search (last, first, and birth date).
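
    For comparison, a minimal sketch of what the full-text and fuzzy options can look like on SQL Server 2008, assuming a full-text catalog already exists and reusing the table and key names from the plan above:

        -- Full-text index over both name columns (one-time setup).
        CREATE FULLTEXT INDEX ON dbo.Test (FirstName, LastName)
            KEY INDEX PK__TEST__3214EC073B95D2F1;

        -- Prefix search via CONTAINS; FREETEXT would match inflectional forms.
        SELECT Id, FirstName, LastName
        FROM dbo.Test
        WHERE CONTAINS(LastName, '"WHITE*"')
          AND CONTAINS(FirstName, '"THOMAS*"');

        -- For misspellings ('smythe' vs. 'smith') full-text won't help;
        -- SOUNDEX/DIFFERENCE is the built-in option (4 = strongest match).
        SELECT Id, FirstName, LastName
        FROM dbo.Test
        WHERE DIFFERENCE(LastName, 'SMITH') = 4;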

  • Blob in Java/Hibernate/SQL Server 2005

    - by Ramy
    Hi, I'm trying to insert an HTML blob into our SQL Server 2005 database. I've been using the data type [text] for the field the blob will eventually live in. I've also put a @Lob annotation on the field in the domain model. The problem comes in when the HTML blob I'm attempting to store is larger than 65536 characters. It seems that is the character limit for a text data type when using the @Lob annotation. Ideally I'd like to keep the whole blob intact rather than chunk it up into multiple rows in the database. I appreciate any help or insight that might be provided. Thanks! _Ramy

    Allow me to clarify the annotation:

        @Lob
        @Column(length = Integer.MAX_VALUE) // per an answer on stackoverflow
        private String htmlBlob;

    On the database side (SQL Server 2005):

        CREATE TABLE dbo.IndustrySectorTearSheetBlob(
            ...
            htmlBlob text NULL
            ...
        )

    Still seeing truncation after 65536 characters...

    EDIT: I've printed out the contents of all possible strings (only 10 right now) that would be inserted into the database. Each string seems to contain all characters, judging by the fact that the closing html tag is present at the end of the string...
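
    One avenue worth noting, as a sketch rather than a confirmed fix: on SQL Server 2005 the text type is deprecated, and varchar(max) is the usual replacement for large character data (up to 2 GB); some driver/dialect paths also handle it more gracefully than text. Assuming the column can be converted in place:

        -- Sketch: switch the deprecated text column to varchar(max).
        ALTER TABLE dbo.IndustrySectorTearSheetBlob
            ALTER COLUMN htmlBlob varchar(max) NULL;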

  • Are there size limitations to the .NET Assembly format?

    - by McKAMEY
    We ran into an interesting issue that I've not experienced before. We have a large-scale production ASP.NET 3.5 SP1 Web App Project in Visual Studio 2008 SP1 which gets compiled and deployed using a Web Deployment Project. Everything has worked fine for the last year, until after a check-in yesterday the app started critically failing with BadImageFormatException. The check-in in question doesn't change anything particularly special and the errors are coming from areas of the app not even changed. Using Reflector we inspected the offending methods to find that there were garbage strings in the code (which .NET Reflector humorously interpreted as Chinese characters). We have consistently reproduced this on several machines so it does not appear to be hardware related. Further inspection showed that those garbage strings did not exist in the assemblies used as inputs to aspnet_merge.exe during deployment.

    The aspnet_merge.exe / Web Deployment Project "Output Assemblies" options are:

    - Merge all outputs to a single assembly
    - Merge each individual folder output to its own assembly
    - Merge all pages and control outputs to a single assembly
    - Create a separate assembly for each page and control output

    In the web deployment project properties, if we set the merge options to the first option ("Merge all outputs to a single assembly") we experience the issue, yet all of the other options work perfectly! My question: does anyone know why this is happening? Is there a size limit to aspnet_merge.exe's capabilities (the resulting merged DLL is around 19.3 MB)? Are there any other known issues with merging the output of WAPs? I would love it if any Assembly format / aspnet_merge.exe gurus know about any such limitations like this. Seems to me like a 25MB assembly, while big, isn't outrageous.

  • Transitive SQL query on same table

    - by MiKu
    Hey. Consider the following table and data:

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        ------------ | ------------- | ----- | ----- | ------ | -------------- | -------------- | -------
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represent the log of an execution flow of some data across servers. E.g. some data flowed from 'others-server1' to a bunch of 'my-servers' and finally to the destined 'others-server2'. Questions:

    1. I need to give this log in representable form to a client who doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info: in_timestamp (of 'others-server1' to 'my-server1'), out_timestamp (of 'my-server3' to 'others-server2'), name, status. I want to write SQL for the same! Can someone help? NOTE: there might not be 3 'my-servers' all the time. It differs from situation to situation; e.g. there might be 4 'my-servers' involved for, say, data2!

    2. Are there any alternatives to plain SQL? I mean stored procs, etc.?

    3. Optimizations? (The records are huge in number! As of now, around 5 million a day, and we are supposed to show records that are up to a week old.)

    In advance, THANKS FOR THE HELP! :)
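
    A sketch of one way to collapse the chain, assuming the table is named movements_log (hypothetical), each item is identified by name, and each item makes a single pass through the infrastructure: self-join the entry hop (whose in_server is not one of ours) to the exit hop (whose out_server is not one of ours). The number of intermediate 'my-servers' then no longer matters:

        SELECT e_in.in_timestamp,
               e_out.out_timestamp,
               e_in.name,
               e_out.status
        FROM movements_log e_in
        JOIN movements_log e_out ON e_out.name = e_in.name
        WHERE e_in.in_server   NOT LIKE 'my-%'
          AND e_out.out_server NOT LIKE 'my-%';

    With the volumes described, indexes on (name, in_server) and (name, out_server) would be the first thing to try, so both sides of the join resolve without full scans.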

  • SQL Express under IIS 7.5

    - by fampinheiro
    I'm developing a web service that accesses a SQL Express database. It works very well in the Visual Studio host, but when I deploy it to IIS 7.5 I get this exception. Please help me.

    Stack trace:

        System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Failed to generate a user instance of SQL Server due to failure in retrieving the user's local application data path. Please make sure the user has a local user profile on the computer. The connection will be closed.
           at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
           at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
           at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
           at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject)
           at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(String host, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
           at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
           at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
           at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
           at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
           at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
           at System.Data.SqlClient.SqlConnection.Open()
           at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
           --- End of inner exception stack trace ---
           at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
           at System.Data.EntityClient.EntityConnection.Open()
           at System.Data.Objects.ObjectContext.EnsureConnection()
           at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption)
           at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator()
           at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
           at WSCinema.CinemaService.Movie() in D:\Documents\My Dropbox\Projects\sd.v0910\trab3\code\WSCinema\CinemaService.asmx.cs:line 46

  • SQL GUID vs Integer

    - by Dal
    Hi. I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and they were a lot easier to work with, in my opinion. For example, say you had two related tables, Product and ProductType: I could easily cross-check the 'ProductTypeID' column of both tables for a particular row to quickly map the data in my head, because it's easy to hold a number (2, 4, 45, etc.) in your head, as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from me wanting to understand how the tables are related; sadly there is no database diagram :(

    A lot of people say that GUIDs are better because you can define the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it -- this also allows you to know provisionally what the ID will be... but I've seen that it is possible to retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs... Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice by some professional, there must be some good reasons why it's used.
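
    A side-by-side sketch of the two styles, with hypothetical table names, for reference. NEWSEQUENTIALID() (SQL Server 2005+) is the usual compromise when GUID keys are generated server-side, because sequential values avoid the page splits that random NEWID() values cause in a clustered index:

        -- Integer identity: compact keys, assigned by the server on insert.
        CREATE TABLE ProductInt (
            ID int IDENTITY(1,1) PRIMARY KEY
        );

        -- GUID key: can also be generated client-side before the insert,
        -- which is the main argument made for GUIDs above.
        CREATE TABLE ProductGuid (
            ID uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY
        );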

  • SQL Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this:

        Client 1: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client 2: Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client
        Client N: ...

    Import Data and Process Data call jobs, and Post Process is an Execute SQL task. If Import Data or Process Data fails, it goes to the next client's Import Data. Both Import Data and Process Data are jobs that contain SSIS packages using the built-in SQL logging provider. My expectation with the configuration as it stands is:

        Client 1 Import Data runs: Failure -> Client 2 Import Data | Success -> Process Data
        Process Data runs:         Failure -> Client 2 Import Data | Success -> Post Process
        Post Process runs:         Completion (Success or Failure) -> Next Client Import Data

    This isn't what I'm seeing in my logs, though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded, aka finished! The logs seem to indicate otherwise, though... I really need these tasks to be consecutive, not concurrent. Is this possible? Thanks!

  • How do I git reset --hard HEAD on Mercurial?

    - by obvio171
    I'm a Git user trying to use Mercurial. Here's what happened: I did an hg backout on a changeset I wanted to revert. That created a new head, so hg instructed me to merge (back to "default", I assume). After the merge, it told me I still had to commit. Then I noticed something I did wrong when resolving a conflict in the merge, and decided I wanted to have everything as it was before the hg backout; that is, I want this uncommitted merge to go away. In Git this uncommitted stuff would be in the index and I'd just do a git reset --hard HEAD to wipe it out but, from what I've read, the index doesn't exist in Mercurial. So how do I back out of this?

  • Is there a 3-way merge tool that “understands” common refactoring?

    - by Ian Ringrose
    When a simple refactoring like “rename field” has been done on one branch, it can be very hard to merge the changes into the other branches. (Extract method is much harder, as the merge tools don't seem to match the unchanged blocks well.) Now, in my dreams, I am thinking of a tool that can record (or work out) which well-defined refactoring operations have been done on one branch and then “replay” them on the other branch, rather than trying to merge every line the refactoring has affected. See also "Is there an intelligent 3rd merge tool that understands VB.NET" for the other half of my pain! Also, has anyone tried something like MolhadoRef (blog article about MolhadoRef and refactoring-aware SCM)? This is, in theory, refactoring-aware source control.

  • Selecting a good SQL Server 2008 spatial index with large polygons

    - by andynormancx
    I'm having some fun trying to pick a decent SQL Server 2008 spatial index setup for a data set I am dealing with. The dataset is polygons representing contours over the whole globe. There are 106,000 rows in the table; the polygons are stored in a geometry field. The issue I have is that many of the polygons cover a large portion of the globe. This seems to make it very hard to get a spatial index that will eliminate many rows in the primary filter. For example, look at the following query:

        SELECT "ID", "CODE", "geom".STAsBinary() as "geom"
        FROM "dbo"."ContA"
        WHERE "geom".Filter(
            geometry::STGeomFromText('POLYGON ((-142.03193662573682 59.53396984952896,
                -142.03193662573682 59.88928136451884, -141.32743833481925 59.88928136451884,
                -141.32743833481925 59.53396984952896, -142.03193662573682 59.53396984952896))', 4326)
        ) = 1

    This queries an area which intersects with only two of the polygons in the table. No matter what combination of spatial index settings I choose, that Filter() always returns around 60,000 rows. Replacing Filter() with STIntersects() of course returns just the two polygons I want, but of course takes much longer (Filter() is 6 seconds, STIntersects() is 12 seconds). Can anyone give me any hints on whether there is a spatial index setup that is likely to improve on 60,000 rows, or is my dataset just not a good match for SQL Server's spatial indexing?
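
    For reference, a sketch of the kind of index definition involved; the grid densities are knobs to experiment with rather than a recommendation, and the bounding box assumes the data is in lon/lat degrees:

        CREATE SPATIAL INDEX SIdx_ContA_geom
        ON dbo.ContA (geom)
        USING GEOMETRY_GRID
        WITH (
            BOUNDING_BOX = (XMIN = -180, YMIN = -90, XMAX = 180, YMAX = 90),
            GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH, LEVEL_3 = MEDIUM, LEVEL_4 = LOW),
            CELLS_PER_OBJECT = 64
        );

    The catch the question itself points at: an object that covers most of the globe touches cells at every grid level, so no grid setting makes the primary filter selective for it; denser grids mostly help small objects.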

  • SQL command to get the field of a maximum value without making two selects

    - by António Capelo
    I'm starting to learn SQL and I'm working on this exercise: I have a "books" table which holds the info on every book (including price and genre ID). I need to get the name of the genre which has the highest average price. I suppose that I first need to group the prices by genre and then retrieve the name of the highest. I know that I can get the GENRE vs. COST results with the following:

        select b.genre, round(avg(b.price),2) as cost
        from books b
        group by b.genre;

    My question is: to get the genre with the highest AVG price from that result, do I have to write this?

        select aux.genre
        from ( select b.genre, round(avg(b.price),2) as cost
               from books b
               group by b.genre ) aux
        where aux.cost = (select max(aux.cost)
                          from ( select b.genre, round(avg(b.price),2) as cost
                                 from books l
                                 group by b.genre ) aux);

    Is it bad practice, or isn't there another way? I get the correct result, but I'm not comfortable with creating the same selection twice. I'm not using PL/SQL, so I can't use variables or anything like that. Any help will be appreciated. Thanks in advance!
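
    A sketch of a single-pass alternative, assuming a dialect that supports LIMIT (MySQL, PostgreSQL, SQLite); on Oracle the same idea is expressed with ROWNUM or FETCH FIRST:

        -- Sort the grouped averages and keep only the top row.
        SELECT b.genre, ROUND(AVG(b.price), 2) AS cost
        FROM books b
        GROUP BY b.genre
        ORDER BY cost DESC
        LIMIT 1;

    Ties are the one caveat: if two genres share the highest average, this returns only one of them, whereas the MAX() comparison returns both.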

  • Tool for generating flat files from SQL objects dynamically

    - by Fabio Gouw
    Hello! I'm looking for a tool or component that generates flat files from a SQL Server query result (from a stored procedure or a SELECT * over a table or view). This will be a batch process which runs every day, and every day a new file is created. I could use SQL Server Integration Services (DTS), but I have a mandatory requirement: the layout of the file needs to be dynamic. If a new column is added to my query result, the file must have this new column too, without my having to modify the SSIS package. If a column is removed, then the flat file no longer has it. I've tried to do this with SSIS, but when I create a new package I need to specify the number of columns. Another requirement is configuring the output format depending on the data type of the column: if it's a datetime, the format needs to be YYYY-MM-DD; if it's a float, then I need to use 2 decimal digits; and so on. Does anyone know a tool that does this job? Thanks
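
    If no off-the-shelf tool turns up, one workable direction is generating the export query itself from the catalog, so the column list and per-type formats are always current. A rough sketch, assuming SQL Server and a hypothetical table name (the CASE would grow one branch per type you care about); the single-column result can then be written to a file with bcp or a small script task:

        DECLARE @cols nvarchar(max), @sql nvarchar(max);

        -- Build "expr1 + ',' + expr2 + ..." from the table's current columns.
        -- (A real version would also wrap nullable columns in ISNULL().)
        SELECT @cols = COALESCE(@cols + ' + '','' + ', '')
             + CASE DATA_TYPE
                 WHEN 'datetime' THEN 'CONVERT(varchar(10), ' + QUOTENAME(COLUMN_NAME) + ', 120)'
                 WHEN 'float'    THEN 'LTRIM(STR(' + QUOTENAME(COLUMN_NAME) + ', 20, 2))'
                 ELSE 'CONVERT(varchar(max), ' + QUOTENAME(COLUMN_NAME) + ')'
               END
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'MyExportTable'
        ORDER BY ORDINAL_POSITION;

        SET @sql = 'SELECT ' + @cols + ' AS line FROM dbo.MyExportTable';
        EXEC sp_executesql @sql;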

  • SQL Server Compact timed out waiting for a lock

    - by jankhana
    Hi all, I have an application in which I use SQL Compact 3.5 with VS2008. I'm running multiple threads in my application which contact the Compact database and access its rows. Each thread selects and deletes rows, i.e. it selects 5 rows, gives them to the application, and deletes those rows from the table. It works great with a single thread, but if 3 or more threads are running I very often get the timeout error! I have increased the timeout property in the connection string, but it didn't give me the expected result. The error log is as follows:

        SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce: default lock timeout property. [ Session id = 5, Thread id = 4204, Process id = 4808, Table name = XXX, Conflict type = x lock (s blocks), Resource = TAB ]

    The query that I use to retrieve is as follows:

        select Top(5) * from TableName order by id;
        delete from TableName where id in (select top(5) id from TableName order by id);

    Is there any way to avoid this timeout exception? I run the above queries as a transaction in VS2008, one using SqlCeCommand and the other using SqlCeDataAdapter. Any idea?
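
    A sketch of one way to shrink the contention window, assuming the rows really are consumed in id order: since the batch has already been read, the delete can key off the largest id returned instead of re-running the TOP(5) subquery (which takes its own locks). @lastId here is a hypothetical parameter the application sets from the fifth row it received:

        -- First statement of the transaction: fetch the batch.
        SELECT TOP(5) * FROM TableName ORDER BY id;

        -- Second statement: delete by range, using the max id already in hand.
        DELETE FROM TableName WHERE id <= @lastId;

    The error text itself also names the other knob: raising "ssce:default lock timeout" in the connection string buys the threads more patience, but it treats the symptom rather than the lock contention.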

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., or storing them on the file system? One issue we have is that we have multiple load-balanced servers that are required to have the same data at all times. As of now we have SQL replication taking care of that, but file replication seems to be a little tougher. Another consideration is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or dynamically creating the requested resolution on demand. Our concerns are with the following:

    - Data integrity
    - Data replication
    - Multiple resolutions
    - Speed of database vs. file system
    - Overhead load of database vs. file system
    - Data management and backup

    Does anyone have a similar situation or any input on what would be recommended? Thanks in advance for the help!

  • Improving SQL Code

    - by jeremib
    I'm using Pervasive SQL. I have the following UNION of multiple SQL statements. Is there a way to clean this up, especially the pay date and loc no values that are repeated in each statement? Is there a way to pull those out so there is only one place where the two values need to change?

        ( SELECT '23400' as Gl_Number, y.Plan as Description, 0 as Hours,
                 ROUND(SUM(Ee_Curr),2) as Debit, 0 as Credit
          FROM "PR_YLOC" y
          LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = 1041 AND s.Pay_Date = '2010-04-02'
            AND y.Code IN (100, 105, 110) AND y.Type = 3
          GROUP BY y.Plan )
        UNION
        ( SELECT '72000' as Gl_Number, y.Plan, 0, ROUND(SUM(Er_Curr),2), 0
          FROM "PR_YLOC" y
          LEFT JOIN PR_SUMM s ON (s.Summ_No = y.Summ_No)
          WHERE y.Loc_No = 1041 AND s.Pay_Date = '2010-04-02'
            AND y.Code IN (100, 105, 110) AND y.Type = 3
          GROUP BY y.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 100
          GROUP BY c.Plan )
        UNION
        ( SELECT '24800', c.Plan, 0, 0, ROUND(SUM(Ee_Amt),2)
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 115
          GROUP BY c.Plan )
        UNION
        ( SELECT '24150', c.Plan, 0, 0, ROUND(SUM(Ee_Amt),2)
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 241
          GROUP BY c.Plan )
        UNION
        ( SELECT '24150', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 239
          GROUP BY c.Plan )
        UNION
        ( SELECT '24120', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 230
          GROUP BY c.Plan )
        UNION
        ( SELECT '24100', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 225
          GROUP BY c.Plan )
        UNION
        ( SELECT '23800', c.Plan, 0, ROUND(SUM(Ee_Amt),2), 0
          FROM "PR_CDED" c
          WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041 AND Code = 245
          GROUP BY c.Plan )
        UNION
        ( select m.Def_Dept as Gl_Number, t.Short_Desc,
                 (SELECT SUM(Hours) FROM pr_earn en
                  WHERE en.Loc_No = e.Loc_No AND en.Emp_No = e.Emp_No
                    AND en.Pay_Date = e.Pay_Date AND en.Pay_Code = e.Pay_Code) as Hours,
                 (SELECT SUM(Pay_Amt) FROM pr_earn en
                  WHERE en.Loc_No = e.Loc_No AND en.Emp_No = e.Emp_No
                    AND en.Pay_Date = e.Pay_Date AND en.Pay_Code = e.Pay_Code) as Debit,
                 0
          from pr_earn e
          left join pr_mast m on (e.Loc_No = m.Loc_No and e.Emp_No = m.Emp_No)
          left join pr_ptype t ON (t.Code = e.Pay_Code)
          where e.loc_no = 1041 and e.pay_date = '2010-04-02'
          group by m.Def_Dept, t.Short_Desc )

    Thanks
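
    A sketch of one consolidation, assuming Pervasive's dialect accepts CASE here: the seven PR_CDED branches differ only in Code, GL number, and debit-vs-credit side, so they can collapse into one grouped pass, with the pay date and location written once. Grouping by (Code, Plan) keeps one row per branch as before, though UNION's implicit duplicate elimination is lost:

        SELECT CASE c.Code WHEN 100 THEN '24800' WHEN 115 THEN '24800'
                           WHEN 241 THEN '24150' WHEN 239 THEN '24150'
                           WHEN 230 THEN '24120' WHEN 225 THEN '24100'
                           WHEN 245 THEN '23800' END AS Gl_Number,
               c.Plan,
               0 AS Hours,
               -- Codes 115 and 241 were credits in the original; the rest debits.
               CASE WHEN c.Code IN (115, 241) THEN 0
                    ELSE ROUND(SUM(Ee_Amt), 2) END AS Debit,
               CASE WHEN c.Code IN (115, 241) THEN ROUND(SUM(Ee_Amt), 2)
                    ELSE 0 END AS Credit
        FROM "PR_CDED" c
        WHERE Pay_Date = '2010-04-02' AND Loc_No = 1041
          AND c.Code IN (100, 115, 225, 230, 239, 241, 245)
        GROUP BY c.Code, c.Plan

    The date and location literals then appear in three places instead of ten; turning them into parameters of a stored procedure would reduce that to one.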

  • Mercurial Messing Up csproj Files?

    - by alphadogg
    I am using Hg to manage and merge code with three other developers involved in a VS2008 project. We do have an .hgignore file that ignores a fair number of files not necessary to track, such as *.pdb, *.obj, etc. However, we do track .csproj files. Periodically, it would seem that files go missing after a merge. We would get build issues, and have to relocate files which were in the project folders, but not in the csproj file. Eventually, I noted during a merge conflict that sometimes Hg seems to merge incorrectly. Here's a screenshot below. The actual conflict that requires manual intervention is lower in the file. But in this section, hg incorrectly replaces DirectoryTasks.cs with a new, different file called ReportTasks.cs, when in fact, both should be added. How do people manage to avoid this?

  • svn reintegrate a branch with externals fails

    - by dnndeveloper
    Using svn 1.6.6 with TortoiseSVN 1.6.6, here is what I am doing:

    1. Apply external properties to a folder in the trunk (both single-file and folder externals; the externals are binary files)
    2. Create a branch from the trunk and update the entire project
    3. Modify a file on the branch and commit the changes, then update the entire project
    4. Merge - "Reintegrate a branch": when I get to the last screen I click "test merge" and get this error:

        Error: Cannot reintegrate into mixed-revision working copy; try updating first

    I update the entire project and still get the same error. Other observations: if I "Merge a range of revisions", everything works fine. If I remove the externals, everything works fine using either "Merge a range of revisions" or "Reintegrate a branch". Anyone else having this issue?

  • DataGridView not displaying a row after it is created

    - by joslinm
    Hi, I'm using Visual Studio 2010 and I just created a database using SQL Server CE. Within it, I made a table CSLDataTable, and that automatically created a CSLDataSet & CSLDataTableTableAdapter. Three variables were automatically created in my MainWindow.cs class:

        cSLDataSet
        cSLDataTableTableAdapter
        cSLDataTableBindingSource

    I have added a DataGridView to my form called dataGridView, with data source cSLDataTableBindingSource. In my MainWindow(), I tried adding a row as a test:

        public MainWindow()
        {
            InitializeComponent();
            CSLDataSet.CSLDataTableRow row = cSLDataSet.CSLDataTable.NewCSLDataTableRow();
            row.File_ = "file";
            row.Artist = "artist11";
            row.Album = "album";
            row.Save_Structure = "save";
            row.Sent = false;
            row.Error = true;
            row.Release_Format = "release";
            row.Bit_Rate = "bitrate..";
            row.Year = "year";
            row.Physical_Format = "format";
            row.Bit_Format = "bitformat";
            row.File_Path = "File!!path";
            row.Site_Origin = "what";
            cSLDataSet.CSLDataTable.Rows.Add(row);
            cSLDataSet.AcceptChanges();
            cSLDataTableTableAdapter.Fill(cSLDataSet.CSLDataTable);
            cSLDataTableTableAdapter.Update(cSLDataSet);
            dataGridView.Refresh();
            dataGridView.Update();
        }

    Regarding the DataSet methods I tried calling, I had been trying to find a "correct" way to interact with the adapter, dataset, and datatable to successfully show the row, but to no avail. I'm rather new to using a SQL Server CE database, and I've read a lot of the MSDN pages and thought I was on the right track, but I've had no luck. The DataGridView shows the headers correctly, but that new row does not show up.

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted, which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic, in that any time I wish to access the list I will want the entire list rather than just a piece of it -- so it seems silly to issue a database query to gather together pieces of the list.

    AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient because it means that I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time.

    ... just a little more info to let you know where I'm coming from. As soon as I had begun understanding SQL and databases in general, I was turned on to LINQ to SQL, so now I'm a little spoiled because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks all! John
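
    For reference, a sketch of the conventional relational shape for an ordered list of unique elements (hypothetical names): an explicit position column carries the ordering, the composite primary key makes the sort a simple index walk rather than real work, and the UNIQUE constraint enforces element uniqueness.

        CREATE TABLE list_item (
            list_id  int NOT NULL,
            position int NOT NULL,
            item_id  int NOT NULL,
            PRIMARY KEY (list_id, position),   -- one index serves the ORDER BY
            UNIQUE (list_id, item_id)          -- elements are unique per list
        );

        -- The whole list in one query, already in order:
        SELECT item_id FROM list_item WHERE list_id = 42 ORDER BY position;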

  • Count rows against a SQL Server (2005) table?

    - by David.Chu.ca
    I have a simple question about two options for getting a row count from SQL Server (2005). I am using VS 2005. The two options to get the count are:

        SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt;

    With this, I get a list of ids from the above call into a cache, and then get the count from List.Count. The other option is:

        SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt;

    The above call gets the count directly. The issue is that I had several cases of exceptions with the second method: timeouts. What I found is that Table1 is too big, with millions of rows. When I used the first option, it seems OK. I am confused by the fact that COUNT() takes more time than getting all the rows (is that true?). I'm not sure whether the aggregation with COUNT() causes SQL Server to create a temporary table or cache on the server side, resulting in slow performance when the table is too big. What is the best way to get the count?
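
    A sketch of one plausible explanation and fix, assuming dt is the only filter: COUNT(*) is not inherently slower than streaming ids, but without an index on dt both queries scan the table, and the COUNT returns nothing until the scan finishes (so it is the one that trips the command timeout). With a narrow index on dt, the count becomes a small range scan:

        -- One-time: index the filter column.
        CREATE INDEX IX_Table1_dt ON Table1 (dt);

        -- The count then reads only the index range, not the wide base rows.
        SELECT COUNT(*) FROM Table1 WHERE dt >= @startDt AND dt < @endDt;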

  • Optimal two-variable linear regression SQL statement (censoring outliers)

    - by Dave Jarvis
    Problem: I am looking to apply the y = mx + b equation (where m is SLOPE and b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code below. The values from the (MySQL) query are:

        SLOPE     =  0.0276653965651912
        INTERCEPT = -57.2338357550468

    SQL code:

        SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE,
               ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) /
               (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT
        FROM (
          SELECT D.AMOUNT, Y.YEAR
          FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D
          WHERE
            -- For a specific city ...
            C.ID = 8590 AND
            -- Find all the stations within a 15 unit radius ...
            SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND
            -- Gather all known years for that station ...
            S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND
            -- The data before 1900 is shaky; insufficient after 2009.
            Y.YEAR BETWEEN 1900 AND 2009 AND
            -- Filtered by all known months ...
            M.YEAR_REF_ID = Y.ID AND
            -- Whittled down by category ...
            M.CATEGORY_ID = '001' AND
            -- Into the valid daily climate data.
            M.ID = D.MONTH_REF_ID AND
            D.DAILY_FLAG_ID <> 'M'
          GROUP BY Y.YEAR
          ORDER BY Y.YEAR
        ) t

    Data: the data is visualized here (with five outliers highlighted).

    Questions:

    1. How do I return the y value against all rows without repeating the same query to collect and collate the data? That is, how do I "reuse" the list of t values?
    2. How would you change the query to eliminate outliers (at an 85% confidence interval)?
    3. The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)?

        (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228
        (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406

    I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Thank you!
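
    On question 1, a sketch of one way to reuse the aggregated rows, assuming MySQL and assuming the inner per-year query above has been wrapped in a view named yearly_amount(year, amount) so it is written only once: materialize it into a temporary table, compute slope and intercept into session variables, then join back for the fitted values.

        CREATE TEMPORARY TABLE t_year AS
            SELECT year, amount FROM yearly_amount;

        SELECT ((SUM(year) * SUM(amount)) - (COUNT(1) * SUM(year * amount))) /
               (POW(SUM(year), 2) - COUNT(1) * SUM(POW(year, 2))),
               ((SUM(year) * SUM(year * amount)) - (SUM(amount) * SUM(POW(year, 2)))) /
               (POW(SUM(year), 2) - COUNT(1) * SUM(POW(year, 2)))
        INTO @slope, @intercept
        FROM t_year;

        -- Fitted y for every row, without re-running the aggregation.
        SELECT year, amount, @slope * year + @intercept AS fitted
        FROM t_year;

    The same temporary table also gives a crude pass at question 2: delete the rows whose |amount - fitted| residual is largest (or beyond k standard deviations of the residuals), then recompute the two coefficients.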

  • Calculate differences between rows while grouping with SQL

    - by Guido
    I have a PostgreSQL table containing movements of different items (models) between warehouses. For example, the following record means that 5 units of model 1 have been sent from warehouse 1 to 2:

        source target model units
        ------ ------ ----- -----
             1      2     1     5

    I am trying to build a SQL query to obtain the difference between units sent and received, grouped by model. Again with an example:

        source target model units
        ------ ------ ----- -----
             1      2     1     5    -- 5 sent from 1 to 2
             1      2     2     1
             2      1     1     2    -- 2 sent from 2 to 1
             2      1     1     1    -- 1 more sent from 2 to 1

    The result should be:

        source target model diff
        ------ ------ ----- ----
             1      2     1    2    -- 5 sent minus 3 received
             1      2     2    1

    I wonder if this is possible with a single SQL query. Here is the table creation script and some data, just in case anyone wants to try it:

        CREATE TEMP TABLE movements (
            source INTEGER,
            target INTEGER,
            model  INTEGER,
            units  INTEGER
        );

        insert into movements values (1,2,1,5);
        insert into movements values (1,2,2,1);
        insert into movements values (2,1,1,2);
        insert into movements values (2,1,1,1);
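
    A sketch of one single-query approach in PostgreSQL: normalize every movement to an undirected warehouse pair with LEAST/GREATEST, count units as positive when they flow in the pair's canonical direction and negative otherwise, then sum per pair and model:

        SELECT LEAST(source, target)    AS source,
               GREATEST(source, target) AS target,
               model,
               SUM(CASE WHEN source < target THEN units ELSE -units END) AS diff
        FROM movements
        GROUP BY LEAST(source, target), GREATEST(source, target), model
        HAVING SUM(CASE WHEN source < target THEN units ELSE -units END) <> 0;

    Against the sample data this yields (1, 2, 1, 2) and (1, 2, 2, 1). One caveat: a negative sum means the net flow runs target-to-source, while the row's source/target columns stay in canonical (lower, higher) order rather than flipping.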
