Search Results

Search found 22897 results on 916 pages for 'query processing'.

  • Microsoft SQL Server 2008 - 99% fragmentation on non-clustered, non-unique index

    - by user550441
    I have a table with several indexes (defined below). One of the indexes (IX_external_guid_3) shows 99% fragmentation regardless of how often I rebuild or reorganize it. Does anyone have any idea what might cause this, or the best way to fix it? We are using Entity Framework 4.0 to query this table; the EF queries on the other indexed fields are about 10x faster on average than on the external_guid_3 field, while an ADO.NET query is roughly the same speed on both (though 2x slower than the EF query on the indexed fields).

    Table:
      id (PK, int, not null)
      guid (uniqueidentifier, null, rowguid)
      external_guid_1 (uniqueidentifier, not null)
      external_guid_2 (uniqueidentifier, null)
      state (varchar(32), null)
      value (varchar(max), null)
      infoset (XML(.), null) -- usually 2-4K
      created_time (datetime, null)
      updated_time (datetime, null)
      external_guid_3 (uniqueidentifier, not null)
      FK_id (FK, int, null)
      locking_guid (uniqueidentifier, null)
      locked_time (datetime, null)
      external_guid_4 (uniqueidentifier, null)
      corrected_time (datetime, null)
      is_add (bit, not null)
      score (int, null)
      row_version (timestamp, null)

    Indexes:
      PK_table (Clustered)
      IX_created_time (Non-Unique, Non-Clustered)
      IX_external_guid_1 (Non-Unique, Non-Clustered)
      IX_guid (Non-Unique, Non-Clustered)
      IX_external_guid_3 (Non-Unique, Non-Clustered)
      IX_state (Non-Unique, Non-Clustered)
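
    For checking how fragmented the index really is after a rebuild, sys.dm_db_index_physical_stats is the usual starting point. A minimal sketch, assuming SQL Server 2008 and placeholder database/table names:

      SELECT i.name, ips.index_type_desc, ips.avg_fragmentation_in_percent, ips.page_count
      FROM sys.dm_db_index_physical_stats(DB_ID(N'MyDb'), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips  -- placeholder names
      JOIN sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

    A very small index (only a few pages) can report high fragmentation no matter how often it is rebuilt, so the page_count column is worth checking alongside the percentage.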

    Read the article

  • sql server swap data between rows problem

    - by AmRoSH
    I asked before about a query to swap data between rows in the same table, and I got this query:

      ALTER PROCEDURE [dbo].[VehicleReservationsSwap]
          -- Add the parameters for the stored procedure here
          (@FirstVehicleID int, @secondVehicleID int, @WhereClause nvarchar(2000))
      AS
      BEGIN
          Create Table #Temp
          (
              VehicleID int
              ,VehicleType nvarchar(100)
              ,JoinId int
          )
          DECLARE @SQL varchar(8000)
          SET @SQL = 'Insert into #Temp (VehicleID,VehicleType,JoinId)
                      SELECT VehicleID, VehicleType,
                             CASE WHEN VehicleID = ' + Cast(@FirstVehicleID as varchar(10)) + ' then ' + Cast(@secondVehicleID as varchar(10)) +
                             ' ELSE ' + Cast(@FirstVehicleID as varchar(10)) + ' END AS JoinId
                      FROM Reservations
                      WHERE VehicleID in ( ' + Cast(@FirstVehicleID as varchar(10)) + ' , ' + Cast(@secondVehicleID as varchar(10)) + ' )' + @WhereClause
          EXEC(@SQL)
          --swap values
          UPDATE y
          SET y.VehicleID = #Temp.VehicleID
              ,y.VehicleType = #Temp.VehicleType
          FROM Reservations y
          INNER JOIN #Temp ON y.VehicleID = #Temp.JoinId
          WHERE y.VehicleID in (@FirstVehicleID, @secondVehicleID)
          Drop Table #Temp
      END

    This procedure takes two parameters and swaps all rows returned for each parameter. The problem is that it only swaps if each parameter (foreign key) has matching rows; I need the swap to work even when one of them has no value. I hope someone can help me with that. Thanks,
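
    If the vehicle's type can be looked up from a vehicles table instead of from the other vehicle's reservations, the swap no longer depends on both vehicles having reservation rows. A sketch of that idea, assuming a hypothetical Vehicles(VehicleID, VehicleType) table and leaving out the dynamic @WhereClause:

      UPDATE r
      SET r.VehicleID   = v.VehicleID,
          r.VehicleType = v.VehicleType
      FROM Reservations r
      JOIN Vehicles v                       -- hypothetical lookup table
        ON v.VehicleID = CASE WHEN r.VehicleID = @FirstVehicleID
                              THEN @secondVehicleID
                              ELSE @FirstVehicleID END
      WHERE r.VehicleID IN (@FirstVehicleID, @secondVehicleID);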

    Read the article

  • Is an ORM redundant with a NoSQL API?

    - by Earlz
    Hello, with MongoDB (and, I assume, other NoSQL database APIs worth their salt) the ways of querying the database are much simpler than SQL. There are no tedious SQL queries to generate and such. For instance, take this from mongodb-csharp:

      using MongoDB.Driver;
      Mongo db = new Mongo();
      db.Connect(); //Connect to localhost on the default port.
      Document query = new Document();
      query["field1"] = 10;
      Document result = db["tests"]["reads"].FindOne(query);
      db.Disconnect();

    How could an ORM even simplify that? Is an ORM or other "database abstraction device" required on top of a decent NoSQL API?

    Read the article

  • count(*) - is it really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I currently get the row count of each table using a select count(*) from table query and display the number of rows available in each table on the tabs. With each page postback, 5 count(*) queries are executed (4 to get the counts and 1 for pagination) plus 1 query to get the report. My question is: is a count(*) query really so expensive that I should keep the row counts (at least the ones displayed on the tabs) in the page's view state instead of querying each time? How expensive is it?
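
    If an approximate figure is acceptable, SQL Server can return the row count from metadata without scanning the table at all. A sketch, assuming SQL Server and a placeholder table name:

      SELECT SUM(ps.row_count) AS approximate_rows
      FROM sys.dm_db_partition_stats AS ps
      WHERE ps.object_id = OBJECT_ID(N'dbo.MyTable')  -- placeholder table name
        AND ps.index_id IN (0, 1);                    -- heap or clustered index only

    For an exact count, SELECT COUNT(*) still has to scan an index, so its cost grows with the size of the narrowest index on the table.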

    Read the article

  • How to detect pending system shutdown on Linux?

    - by Rajorshi
    Hi, I am working on an app where I need to detect system shutdown. However, I have not found any reliable way to get a notification of this event. I know that on shutdown my app will receive a SIGTERM signal followed by a SIGKILL. What I want to know is whether there is some way to query if a SIGTERM is part of a shutdown sequence. Does anyone know if there is a way to query that programmatically (C API)? As far as I know, the system does not provide any other method to query for an impending shutdown; if it does, that would solve my problem as well. I have also been trying out runlevels, but changes in runlevels seem to be instantaneous and come without any prior warning.

    Read the article

  • Loading form values from one IFrame to another

    - by Roland
    What I want to achieve is the following: a search is made from one IFrame (the form is loaded into this frame via the src attribute of the iframe), and the search query is then passed to another IFrame that redirects to a URL with the query, e.g. www.test.com/index.php?query=test. Is this possible? Currently my code looks like this:

      <iframe src="abc.php" name="iframe1">
      </iframe>
      <iframe name="iframe2">
      <?php var_dump($_GET); ?>
      </iframe>

    abc.php contains the following:

      <form method="get" action="#" target="iframe2">
        <input type="text" name="searchtype" id="searchtype" />
        <input type="submit" value="submit">
      </form>

    Read the article

  • DB2: Won't allow parameterizing 'fetch first X rows only'

    - by Guy Roth
    Although Oracle allows you to parameterize the number of rows that a query can fetch by adding to the query:

      select ... from ... where ... and rownum <= @MaximumRecords

    I can't add a similar condition to the equivalent query running on DB2. It is allowed to add:

      select ... from ... where ... fetch first 500 rows only

    (where the number of rows is fixed) but not:

      select ... from ... where ... fetch first :1 rows only    (:1 == @MaximumRecords)

    Is anyone aware of a solution or workaround for this problem?
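
    One workaround that is sometimes used is to move the limit into an ordinary predicate with ROW_NUMBER(), which can then be bound like any other parameter. A sketch, with a hypothetical table and ordering column:

      SELECT id, created_time
      FROM (
          SELECT id, created_time,
                 ROW_NUMBER() OVER (ORDER BY created_time) AS rn
          FROM orders                      -- hypothetical table
      ) AS numbered
      WHERE rn <= ?                        -- bind @MaximumRecords here
      ORDER BY rn;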

    Read the article

  • Adding values from different tables

    - by damdeok
    Friends, I have these tables:

      Contestant table:
        Winner
        Peter

      Group table:
        Id  Name    Score  Union
        1   Bryan   3      77
        2   Mary    1      20
        3   Peter   5      77
        4   Joseph  2      25
        5   John    6      77

    I want to give an additional score of 5 to Peter in the Group table, so I came up with this query:

      UPDATE Group
      SET Score = Score + 5
      FROM Contestant, Group
      WHERE Contestant.Winner = Group.Name

    Now I also want to give an additional score of 5 to everyone with the same Union as Peter, which is 77. How can I integrate this into my existing query as a single query?
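
    One way to fold both rules into a single statement is to drive the update from the winner's Union value. A sketch, assuming every row sharing the winner's Union (including the winner) should get the +5, and using brackets because Group and Union are reserved words:

      UPDATE g
      SET g.Score = g.Score + 5
      FROM [Group] AS g
      WHERE g.[Union] IN (SELECT g2.[Union]
                          FROM [Group] AS g2
                          JOIN Contestant AS c ON c.Winner = g2.Name);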

    Read the article

  • how to get an external variable value in a dtsx package

    - by Rishabh
    Hi, I am executing a .dtsx package from C# and it executes fine. If I pass a variable value from the C# code, how can I get it in the .dtsx package for my OLE DB source query? Here is my C# code:

      string file = @"D:\CYNCZFuzzy\CYNCZFuzzy\Contact.dtsx";
      package = app.LoadPackage(file, null);
      Variables vars = package.Variables;
      vars["User::parentContactID"].Value = 1028203;
      pkgResults = package.Execute();
      string result = pkgResults.ToString();

    I need this 1028203 value in my OLE DB source query, which is:

      select cr.MasterContactID as ParentContactID, c.ID, C.FirstName, C.MiddleName, c.LastName, c.ID as FieldID
      from Contact c
      inner join ContactRelation cr on cr.SlaveContactID = c.ID
      where RelationshipID = 1
      AND cr.MasterContactID = ?

    What should I write in place of the ? to get the 1028203 value from the C# side? Thanks in advance...

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (language integrated query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL, XML, etc. Like SQL, LINQ provides a declarative notation of problem solving: you do not describe in detail how a task is to be solved, you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, one would access features this way as well. Then this construct is conceivable:

      var largeFeatures =
          from feature in features
          where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
          select feature;

    or its equivalent as a lambda expression:

      var largeFeatures =
          features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider that manages the corresponding iterator logic. That is easier than you might think at first sight: you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution); only when you actually request entities (foreach, Count(), ToList(), ...) does the processing take place, even though the query was created at a completely different place. Especially when iterating through the entities several times, in the first debugging sessions you rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise way of constructing an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass:

      public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass,
          IQueryFilter queryFilter, RecyclingPolicy policy)
      {
          IFeatureCursor featureCursor =
              featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
          IFeature feature;
          while (null != (feature = featureCursor.NextFeature()))
          {
              yield return feature;
          }
          //this is skipped in unit tests with cursor-mock
          if (Marshal.IsComObject(featureCursor))
          {
              Marshal.ReleaseComObject(featureCursor);
          }
      }

    So you can now easily generate the IEnumerable<IFeature>:

      IEnumerable<IFeature> features =
          _featureClass.GetFeatures(RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a deferred execution in the same context it is not a good idea to iterate over the features again: in that case only the content of the last (recycled) feature is provided and all the features are the same in the second pass. Therefore this expression would be critical:

      largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, and so the cursor has already been moved through the features; the extension method ForEach() then always delivers the same feature. In such situations you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and thus the cursor runs again; that is the magic already mentioned above.

    Perspective

    You can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. Only the method and property for accessing the enumerator have to be programmed, and in the enumerator itself the Reset() method organizes the re-execution of the search. This can be achieved with an appropriate delegate in the constructor:

      new FeatureEnumerator<IFeatureclass>(
          _featureClass,
          featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

      public void Reset()
      {
          _featureCursor = _resetCursor(_t);
      }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept and the reusability of code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests, a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Because a state machine is managed in the background, a lot of overhead is created; the processing costs additional time, about 20 to 100 percent. In addition, working without a recycling cursor quickly becomes a performance gap. However, declarative LINQ code is much more elegant, less error-prone and easier to maintain than manually iterating, comparing and building up a list of results. In my experience the code size shrinks by an average of 75 to 90 percent! So I am happy to wait a few milliseconds longer. As so often, maintainability and performance have to be balanced, and for me maintainability is gaining priority. In times of multi-core processors, the processing time of most business processes is not dominated by code execution anyway, but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.

    Read the article

  • constructing dynamic In Statements with sql

    - by nitroxn
    Suppose we need to check three boolean conditions to perform a select query. Let the three flags be 'A', 'B' and 'C'. If all three flags are set to '1', then the query to be generated is:

      SELECT * FROM Food WHERE Name IN ('Apple', 'Biscuit', 'Chocolate');

    If only the flags 'A' and 'B' are set to '1', with C set to '0', then the following query is generated:

      SELECT * FROM Food WHERE Name IN ('Apple', 'Biscuit');

    What is the best way to do this?
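
    One way to avoid building the IN list as a string at all is to tie each value to its flag directly in the WHERE clause. A sketch, assuming the flags arrive as parameters @A, @B and @C:

      SELECT *
      FROM Food
      WHERE (@A = 1 AND Name = 'Apple')
         OR (@B = 1 AND Name = 'Biscuit')
         OR (@C = 1 AND Name = 'Chocolate');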

    Read the article

  • LINQ - Linq to Sql - Specified cast is not valid - Please Help!

    - by thiag0
    I am trying to do the following...

      Request request = (from r in db.Requests
                         where r.Status == "Processing" && r.Locked == false
                         select r).SingleOrDefault();

    It is throwing the following exception...

      Message: Specified cast is not valid.
      StackTrace:
        at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult)
        at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries)
        at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query)
        at System.Data.Linq.DataQuery`1.System.Linq.IQueryProvider.Execute[S](Expression expression)
        at System.Linq.Queryable.SingleOrDefault[TSource](IQueryable`1 source)
        at GDRequestProcessor.Worker.GetNextRequest()

    The .DBML file schema matches my database table that I am trying to select from, so I have no clue why I am having this problem. Can anyone help me? Thanks in advance!
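
    A "Specified cast is not valid" from LINQ to SQL often means that a column's type in the database no longer matches the type recorded in the .dbml mapping (for example a bit column remapped to something wider). Comparing the live schema with the mapping is a cheap first check; a sketch, assuming the table behind db.Requests is simply named Requests:

      SELECT COLUMN_NAME, DATA_TYPE, IS_NULLABLE
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_NAME = 'Requests';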

    Read the article

  • Create LINQ to entities OrderBy expression on the fly

    - by AyKarsi
    I'm trying to add the orderby expression on the fly, but when the query below is executed I get the following exception:

      System.NotSupportedException: Unable to create a constant value of type 'Closure type'.
      Only primitive types ('such as Int32, String, and Guid') are supported in this context.

    The strange thing is, I am querying exactly those primitive types only.

      string sortBy = HttpContext.Current.Request.QueryString["sidx"];
      ParameterExpression prm = Expression.Parameter(typeof(buskerPosting), "posting");
      Expression orderByProperty = Expression.Property(prm, sortBy);

      // get the paged records
      IQueryable<PostingListItemDto> query =
          (from posting in be.buskerPosting
           where posting.buskerAccount.cmsMember.nodeId == m.Id
           orderby orderByProperty
           //orderby posting.Created
           select new PostingListItemDto { Set = posting })
          .Skip<PostingListItemDto>((page - 1) * pageSize)
          .Take<PostingListItemDto>(pageSize);

    I hope somebody can shed some light on this!

    Read the article

  • Using Min/Max with conditional operator

    - by user638501
    Hello all, I am trying to run a query to find max and min values and then use a conditional operator; however, when I try to run the following query, it gives me the error "misuse of aggregate: min()". My query is:

      SELECT a.prim_id, min(b.new_len*36) as min_new_len, max(b.new_len*36) as max_new_len
      FROM tb_first a, tb_second b
      WHERE a.sec_id = b.sec_id
      AND min_new_len > 1900
      AND max_new_len < 75000
      GROUP BY a.prim_id
      ORDER BY avg(b.new_len*36);

    Any suggestions?
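
    Aggregates cannot be filtered in the WHERE clause; conditions on MIN/MAX belong in HAVING, which is evaluated after grouping. A sketch using the same tables and columns as above:

      SELECT a.prim_id,
             MIN(b.new_len * 36) AS min_new_len,
             MAX(b.new_len * 36) AS max_new_len
      FROM tb_first a
      JOIN tb_second b ON a.sec_id = b.sec_id
      GROUP BY a.prim_id
      HAVING MIN(b.new_len * 36) > 1900
         AND MAX(b.new_len * 36) < 75000
      ORDER BY AVG(b.new_len * 36);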

    Read the article

  • Inline Conditional Statement in Stored Procedure

    - by Jason
    Here is the pseudo-code for my inline query in my code:

      select columnOne
      from myTable
      where columnOne = '#variableOne#'
      if len(variableTwo) gt 0
        and columnTwo = '#variableTwo#'
      end

    I would like to move this into a stored procedure but am having trouble building the query correctly. I assume it would be something like:

      select columnOne
      from myTable
      where columnOne = @variableOne
      CASE WHEN len(@variableTwo) <> 0 THEN
        and columnTwo = @variableTwo
      END

    This is giving me a syntax error. Could someone tell me what I've got wrong? Also, I would like to keep it to only one query and not just have one if statement. Also, I do not want to build the SQL in the stored procedure and run Exec() on it.
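
    A common single-statement pattern for an optional filter is to fold the length check into the boolean logic instead of using CASE as a statement. A sketch with the same column and variable names:

      SELECT columnOne
      FROM myTable
      WHERE columnOne = @variableOne
        AND (LEN(@variableTwo) = 0 OR columnTwo = @variableTwo);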

    Read the article

  • PHP: code comments inside functions prevent it from working

    - by Karem
      $query = $connect->prepare("SELECT firstname, lastname FROM users WHERE id = '$id'");
      $query->execute();
      $row = $query->fetch();
      // $full_name = $row["firstname"] . " ".$row["lastname"];
      $full_name = $row["firstname"] . " ".substr($row["lastname"], 0, 1).".";
      return $full_name;

    If I remove the line that is a comment (//), it returns $full_name; if it is there, it won't work. I also tried commenting with #, but it still won't work (won't return anything) as soon as there is a code comment. Weird issue.

    Read the article

  • select from varchar2 column with numeric value sometimes gives invalid number error

    - by Rene
    I'm trying to understand why, on some systems, I get an "invalid number" error message when I'm trying to select a value from a varchar2 column, while on other systems I don't get the error while doing exactly the same thing. The table is something like this:

      ID  Column_1  Column_2
      1   V         text
      2   D         1
      3   D         2
      4   D         3

    and the query:

      select ID from table where column_1='D' and column_2 = :some_number_value

    :some_number_value is always numeric but can be null. We've fixed the query:

      select ID from table where column_1='D' and column_2 = to_char(:some_number_value)

    The original query runs fine on most systems but on some systems gives an "invalid number" error. The question is why? Why does it work on most systems and not on some?
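
    The likely mechanism: comparing a varchar2 column to a numeric bind makes Oracle implicitly apply TO_NUMBER to the column, and the optimizer is free to evaluate that conversion before the column_1 = 'D' filter; on plans where it does, the conversion hits the non-numeric 'text' row and fails. Besides TO_CHAR on the bind, a CASE-guarded conversion is another common workaround, since CASE guarantees the conversion only runs for 'D' rows. A sketch, with a placeholder table name:

      select ID
      from some_table        -- placeholder; the question does not name the table
      where column_1 = 'D'
        and case when column_1 = 'D' then to_number(column_2) end = :some_number_value;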

    Read the article

  • How do I pass a DBNull value to a parameterized SELECT statement?

    - by Dan
    I have a SQL statement in C# (.NET Framework 4 running against SQL Server 2k8) that looks like this:

      SELECT [Column1] FROM [Table1] WHERE [Column2] = @Column2

    The above query works fine with the following ADO.NET code:

      DbParameter parm = Factory.CreateDbParameter();
      parm.Value = "SomeValue";
      parm.ParameterName = "@Column2";
      //etc...

    This query returns zero rows, though, if I assign DBNull.Value to the DbParameter's Value member, even if there are null values in Column2. If I change the query to accommodate the null test specifically:

      SELECT [Column1] FROM [Table1] WHERE [Column2] IS @Column2

    I get an "Incorrect syntax near '@Column2'" exception at runtime. Is there no way that I can use null or DBNull as a parameter in the WHERE clause of a SELECT statement?
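
    In T-SQL, NULL never compares equal with =, so the usual approach is to make the predicate NULL-tolerant and keep binding DBNull.Value to @Column2 as before. A sketch:

      SELECT [Column1]
      FROM [Table1]
      WHERE [Column2] = @Column2
         OR (@Column2 IS NULL AND [Column2] IS NULL);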

    Read the article

  • How to retrieve Google Blogger feed in ASP.NET medium trust?

    - by ChrisP
    I have an ASP.NET web site hosted at HostMySite.com, and they recently changed the shared accounts to run in medium trust. In my web site I query my Blogger account and get blog posts to display on the site. I am using Google.GData.Client v1.4.0.2. The retrieval works locally (and worked until medium trust was invoked at the ISP). Now I receive the following error:

      [SecurityException: Request for the permission of type 'System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.]
        System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0
        System.Security.CodeAccessPermission.Demand() +58
        System.Net.HttpWebRequest..ctor(Uri uri, ServicePoint servicePoint) +147
        System.Net.HttpRequestCreator.Create(Uri Uri) +26
        System.Net.WebRequest.Create(Uri requestUri, Boolean useUriBase) +216
        System.Net.WebRequest.Create(Uri requestUri) +31
        Google.GData.Client.GDataRequest.EnsureWebRequest() +77
        Google.GData.Client.GDataRequest.Execute() +42
        Google.GData.Client.Service.Query(Uri queryUri, DateTime ifModifiedSince, String etag, Int64& contentLength) +193
        Google.GData.Client.Service.Query(FeedQuery feedQuery) +202

    I've searched the Google documentation and online but have not been able to find out what I need to change. Thanks

    Read the article

  • unsetting application role in classic ASP

    - by user303526
    Hi, I'm trying to unset an application role but have been failing miserably. I was able to get the cookie value after setting the application role (sp_setapprole), but I haven't been able to use that cookie (type varbinary / byte array) in my query to unset it using sp_unsetapprole. If it were any other stored procedure, it wouldn't have been a problem. I was able to use a Command object and create a parameter that takes a data type input of adVarBinary (204) and execute the command, but the query reaches the server as:

      exec sp_executesql N'sp_unsetapprole @P1 ',N'@P1 varbinary(36)',0x01000000CD11697F8F0ED3627BC1DAD25FB9CEB3A2EC5B289C658235E510CD9F29230000

    Since sp_setapprole and sp_unsetapprole have to be run ad hoc, SQL Server fails to run this line. And I'm finding it hard to append a varbinary cookie value to a simple query such as 'sp_unsetapprole ' & varKookie so that it runs "directly" on the server. Any kind of suggestions are welcome. Thanks, Nandagopal
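
    For reference, this is the pure T-SQL round-trip that the ADO call ultimately has to reproduce: the cookie captured by sp_setapprole is handed straight back to sp_unsetapprole, outside sp_executesql. A sketch with placeholder role name and password:

      DECLARE @cookie varbinary(8000);
      EXEC sp_setapprole 'MyAppRole', 'MyPassword',
           @fCreateCookie = true, @cookie = @cookie OUTPUT;
      -- ... work done under the application role ...
      EXEC sp_unsetapprole @cookie;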

    Read the article

  • How to get null when using the head function with an empty list in Cypher?

    - by PeaceMaker
    I have a Cypher query like this:

      START dep=node:cities(city_code = "JGS"), arr=node:cities(city_code = "XMN")
      MATCH dep-[way:BRANCH2BRANCH_AIRWAY*0..1]->()-->arr
      RETURN length(way), transfer.city_code, extract(w in way: w.min_consume_time) AS consumeTime

    The relationship named "way" is an optional one, so the property "consumeTime" will be an empty list when the relationship "way" does not exist. The query result is:

      | 0 | "JGS" | []     |
      | 1 | "SZX" | [3600] |

    When I use the head function with the property "consumeTime", it returns the error "Invalid query: head of empty list". How can I get a result like this?

      | 0 | "JGS" | null |
      | 1 | "SZX" | 3600 |

    Read the article

  • Perl : localtime with print

    - by kiruthika
    Hi all, I have used the following statements to get the current time:

      print "$query executed successfully at ",localtime;
      print "$query executed successfully at ",(localtime);
      print "$query executed successfully at ".(localtime);

    Output:

      executed successfully at 355516731103960
      executed successfully at 355516731103960
      executed successfully at Wed Apr 7 16:55:35 2010

    The first two statements do not print the current time in a date format; only the third statement gives the correct output in a date format. My understanding is that the first one returns a value in scalar context, so it prints numbers. But in the second print I used localtime in list context, so why is it also giving numeric output?

    Read the article

  • Using "CASE" in Where clause to choose various column harm the performance

    - by zivgabo
    I have a query which needs to be dynamic on some of the columns, meaning I get a parameter and, according to its value, I decide which column to filter on in my WHERE clause. I've implemented this request using a CASE expression:

      (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) >= DATEADD(mi, -@TZOffsetInMins, @sTime)
      AND (CASE @isArrivalTime WHEN 1 THEN ArrivalTime ELSE PickedupTime END) < DATEADD(mi, -@TZOffsetInMins, @fTime)

    If @isArrivalTime = 1 then choose the ArrivalTime column, else choose the PickedupTime column. I have a clustered index on ArrivalTime and a nonclustered index on PickedupTime. I've noticed that when I'm using this query (with @isArrivalTime = 1), performance is a lot worse compared to using only ArrivalTime. Maybe the query optimizer can't use or choose the index properly this way? I compared the execution plans and noticed that when I'm using the CASE, 32% of the time is spent on the index scan, but when I didn't use the CASE (just used ArrivalTime) only 3% was spent on that index scan. Does anyone know the reason for this?
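
    Wrapping the columns in CASE makes the predicate non-sargable, so the optimizer falls back to scanning instead of seeking on either index. One common rewrite keeps each column comparison bare, possibly combined with OPTION (RECOMPILE) or two separate statements; a sketch with a placeholder SELECT list and table name, since the question only shows the WHERE clause:

      SELECT ID                      -- placeholder column
      FROM Trips                     -- placeholder table name
      WHERE (@isArrivalTime = 1
             AND ArrivalTime  >= DATEADD(mi, -@TZOffsetInMins, @sTime)
             AND ArrivalTime  <  DATEADD(mi, -@TZOffsetInMins, @fTime))
         OR (@isArrivalTime <> 1
             AND PickedupTime >= DATEADD(mi, -@TZOffsetInMins, @sTime)
             AND PickedupTime <  DATEADD(mi, -@TZOffsetInMins, @fTime))
      OPTION (RECOMPILE);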

    Read the article

  • MySQL: Get only count of result set.

    - by Varun
    I am using MVC with PHP/MySQL. Suppose I have 10 functions with different queries for fetching details from the database, but at other times I may want to get only the count of the rows that the query would return. What is the standard way to handle such a situation? Should I write 10 more functions that duplicate almost the whole query and return only the count? Should I always return the count along with the result set? Or should I pass a flag to indicate that the function should return the count only, and based on the flag dynamically generate the (select part of the) query? Or is there a better way?
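
    With the flag approach, one way to avoid touching the select list at all is to wrap the existing detail query as a derived table and count it. A sketch in MySQL, with a hypothetical detail query inside:

      SELECT COUNT(*) AS total_rows
      FROM (
          SELECT o.id, o.created_at          -- the existing detail query goes here
          FROM orders o                      -- hypothetical table and columns
          WHERE o.status = 'open'
      ) AS detail;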

    Read the article

  • After MS Access Conversion 97 --> 2002 I get 'Enter Parameter Value' when exiting a form.

    - by user270370
    Hi, when I exit a form in my newly converted .mdb, it asks me to Enter Parameter Value. It goes through the values required for a query that is run on a list box on the form (i.e. if I enter a value, it asks for another). The query has not been changed during the conversion, and the values it uses come from text boxes on the same form. There are a few Requeries in the form (run from VB), so I imagine the query is being rerun on exit (although this isn't explicit in the form properties). I'm not quite sure how to go about solving this. Your help would be great. Thanks

    Read the article
