Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 361/1506 | < Previous Page | 357 358 359 360 361 362 363 364 365 366 367 368  | Next Page >

  • PL/SQL Tips (OTN Japan)

    - by Yusuke.Yamamoto
    Posted 2005/05/02. A collection of PL/SQL tips covering the Oracle-supplied packages DBMS_OUTPUT, UTL_FILE, DBMS_SQL, DBMS_PIPE, DBMS_ALERT, DBMS_JOB, DBMS_UTILITY, DBMS_LOCK.SLEEP, DBMS_RANDOM, DBMS_OBFUSCATION_TOOLKIT, DBMS_CRYPTO, UTL_SMTP and UTL_MAIL. http://otndnld.oracle.co.jp/tech/pl_sql/pdf/plsql_tips10g.pdf (for 10.1.0) http://otndnld.oracle.co.jp/tech/db_connect/pdf/PLSQL_TECH.pdf (for 8.0.5)

    Read the article

  • How to build a setup package for a desktop application using SQL CE 3.5 and Entity Framework?

    - by Emad
    I have a WPF desktop application that uses SQL CE (Compact Edition 3.5) with the Entity Framework as its data layer. As it turned out in (http://support.microsoft.com/default.aspx?scid=kb;en-us;958478&sd=rss&spid=2855 ) and (http://social.msdn.microsoft.com/Forums/en-US/sqlce/thread/b6bac277-cf66-4c74-a0b3-e48abedbd161/ ) there is a problem with the Entity Framework and SQL CE, and I had to get the hotfix (basically a new build of SQL CE, called 3.5.1). My problem now is how to build a setup package so that it works even if some users already have SQL CE 3.5 installed on their machines. I have included the DLLs directly in the application, but when SQL CE is installed its DLLs are in the GAC and take precedence over the local ones, and the application crashes. I need to build a setup that would work "even if" the user already has the old buggy version.

    Read the article

  • How to register System.DirectoryServices for use in SQL CLR User Functions?

    - by Saul Dolgin
    I am porting an old 32-bit COM component that was written in VB6 for the purpose of reading and writing to an Active Directory server. The new solution will be in C# and will use SQL CLR user functions. The assembly that I am trying to deploy to SQL Server contains a reference to System.DirectoryServices. The project does compile without any errors but I am unable to deploy the assembly to the SQL Server because of the following error: Error: Assembly 'system.directoryservices, version=2.0.0.0, culture=neutral, publickeytoken=b03f5f7f11d50a3a.' was not found in the SQL catalog. What are the correct steps for registering System.DirectoryServices on SQL Server?
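    One commonly used workaround (not taken from the question) is to catalog System.DirectoryServices in the target database yourself before deploying your own assembly; because it is not on SQL Server's list of approved framework assemblies, it has to be registered with PERMISSION_SET = UNSAFE and the database marked TRUSTWORTHY (or the assembly signed appropriately). Below is a minimal sketch that runs the registration T-SQL from a small C# helper - the database name, connection string and framework path are assumptions for illustration.

        // Hypothetical helper: catalog System.DirectoryServices so that a SQL CLR
        // assembly referencing it can be deployed to the same database.
        using System.Data.SqlClient;

        class RegisterDirectoryServices
        {
            static void Main()
            {
                const string sql = @"
                    ALTER DATABASE [MyClrDb] SET TRUSTWORTHY ON;
                    CREATE ASSEMBLY [System.DirectoryServices]
                    AUTHORIZATION dbo
                    FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll'
                    WITH PERMISSION_SET = UNSAFE;";

                using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=MyClrDb;Integrated Security=True"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();   // after this, deploying the user assembly should resolve the reference
                }
            }
        }

    If you would rather not mark the database TRUSTWORTHY, signing the assembly with an asymmetric key and granting UNSAFE ASSEMBLY to the corresponding login is the usual alternative.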

    Read the article

  • Linq to SQL Azure generating error "Specified cast is not valid."

    - by Rabbi
    B"H I have an application that has been working for months using Linq to SQL connecting to a SQLExpress. I tried migrating it to SQL Azure. I copied the structure and data using the Sync Framework. I viewed the data in SQL Azure using SSMS 2008 R2 and it seams to be exactly what I have in my Sql Server. However when I try to use Linq to SQL against it I get an error "Specified cast is not valid." I seams to be happening any time I get child records. i.e. whenever I fill (the first time I access) an entity set. It seams to be happening after the data returns and when Linq tries to put it into the objects. remember, the application is working perfectly against sqlexpress, even when accessed across the internet or vpn.

    Read the article

  • In SQL Server, can multiple inserts be replaced with a single insert that takes an XML parameter?

    - by Mayo
    So I have an existing ASP.NET solution that uses LINQ-to-SQL to insert data into SQL Server (5 tables, 110k records total). I had read in the past that XML could be passed as a parameter to SQL Server but my google searches turn up results that store the XML directly into a table. I would rather take that XML parameter and insert the nodes as records. Is this possible? How is it done (i.e. how is the XML parameter used to insert records in T-SQL, how should the XML be formatted)? Note: I'm researching other options like SQL bulk copy and I know that SSIS would be a good alternative. I want to know if this XML approach is feasible.
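    This is possible: declare the parameter as the xml data type and shred it with nodes()/value() inside a single INSERT ... SELECT, so one round trip inserts every element. A minimal sketch of the idea from the C# side follows - the table, column and element names are invented for illustration, not taken from the question.

        // Hypothetical sketch: one XML parameter, one INSERT that shreds it into rows.
        using System.Data;
        using System.Data.SqlClient;

        class XmlInsertSketch
        {
            static void Main()
            {
                const string sql = @"
                    INSERT INTO dbo.Customer (CustomerId, Name)
                    SELECT c.value('(Id)[1]',   'INT'),
                           c.value('(Name)[1]', 'NVARCHAR(100)')
                    FROM   @data.nodes('/Customers/Customer') AS T(c);";

                const string xml =
                    "<Customers>" +
                    "<Customer><Id>1</Id><Name>Alice</Name></Customer>" +
                    "<Customer><Id>2</Id><Name>Bob</Name></Customer>" +
                    "</Customers>";

                using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.Add("@data", SqlDbType.Xml).Value = xml;
                    conn.Open();
                    cmd.ExecuteNonQuery();   // every <Customer> node becomes a row in one statement
                }
            }
        }

    For 110k rows across 5 tables, SqlBulkCopy (or SSIS, as you note) will usually still be faster than XML shredding, but the XML-parameter approach is entirely workable.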

    Read the article

  • How can I avoid setting some columns if others haven't changed, when working with Linq To SQL?

    - by Patrick Szalapski
    In LINQ to SQL, I want to avoid setting some columns if others haven't changed? Say I have dim row = (From c in dataContext.Customers Where c.Id = 1234 Select c).Single() row.Name = "Example" ' line 3 dataContext.SubmitChanges() ' line 4 Great, so LINQ to SQL fetches a row, sets the name to "Example" in memory, and generates an update SQL query only when necessary--that is, no SQL will be generated if the customer's name was already "Example". So suppose on line 3, I want to detect if row has changed, and if so, set row.UpdateDate = DateTime.Now. If row has not changed, I don't want to set row.UpdateDate so that no SQL is generated. Is there any good way to do this?
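    One option is to ask the DataContext for its pending change set just before SubmitChanges and stamp UpdateDate only on entities that appear in the Updates list; that way an untouched row still produces no UPDATE statement. A minimal sketch in C# (the same DataContext.GetChangeSet() call is available from VB); the Customer type and its UpdateDate property are assumed from the question.

        // Hypothetical sketch: set UpdateDate only on rows LINQ to SQL already considers modified.
        using System;
        using System.Data.Linq;

        static class AuditStamp
        {
            public static void SubmitWithUpdateDate(DataContext dataContext)
            {
                ChangeSet pending = dataContext.GetChangeSet();
                foreach (object entity in pending.Updates)
                {
                    var customer = entity as Customer;    // assumed entity type from the question
                    if (customer != null)
                    {
                        customer.UpdateDate = DateTime.Now;   // the row is genuinely dirty, so stamp it
                    }
                }
                dataContext.SubmitChanges();
            }
        }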

    Read the article

  • Why does SQL Server 2000 treat SELECT test.* and SELECT t.est.* the same?

    - by Chris Pebble
    I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name: SELECT t.est.* FROM test Instead of: SELECT test.* FROM test And the query still executed perfectly. Even SELECT t.e.st.* FROM test executes without issue. I've tried the same query in SQL Server 2008 where the query fails (error: the column prefix does not match with a table name or alias used in the query). For reasons of pure curiosity I have been trying to figure out how SQL Server 2000 handles the table names in a way that would allow the butter-fingered query to run, but I haven't had much luck so far. Any sql gurus know why SQL Server 2000 ran the query without issue? Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL) and as Jhonny pointed out below it bizarrely even works when you try: SELECT TOP 1000 dbota.ble.* FROM dbo.table

    Read the article

  • SQL Server 2008 Express: cannot create MDF database.

    - by yair
    Hi, I'm working with Visual Studio 2008 Express and SQL Server 2008 Express. I'm trying to create an MDF database (Add New Item...) and I receive the following error message: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - could not open a connection to SQL Server)" Why? Is it possible to create an MDF database with SQL Server 2008 Express? If it is, then what should I do?
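    SQL Server 2008 Express can host .mdf files added as project items, but error 40 usually just means the connection could not reach the instance at all: the SQL Server (SQLEXPRESS) service has to be running and the instance name in the connection string has to match. A minimal sketch, assuming the default .\SQLEXPRESS instance name and a user-instance attached database file (the file name is an assumption):

        // Hypothetical sketch: open a local .mdf through the SQLEXPRESS instance.
        using System.Data.SqlClient;

        class MdfConnectSketch
        {
            static void Main()
            {
                const string cs = @"Data Source=.\SQLEXPRESS;" +
                                  @"AttachDbFilename=|DataDirectory|\MyDatabase.mdf;" +
                                  "Integrated Security=True;User Instance=True";

                using (var conn = new SqlConnection(cs))
                {
                    conn.Open();   // fails with provider error 40 if the SQLEXPRESS service is stopped or misnamed
                }
            }
        }

    Checking that the SQL Server (SQLEXPRESS) service is started in services.msc, and that Visual Studio's data connection dialog points at .\SQLEXPRESS rather than a full SQL Server instance, resolves this in most cases.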

    Read the article

  • Very slow performance deserializing using DataContractSerializer in a Silverlight application.

    - by caryden
    Here is the situation: a Silverlight 3 application hits an ASP.NET-hosted WCF service to get a list of items to display in a grid. Once the list is brought down to the client it is cached in IsolatedStorage. This is done by using the DataContractSerializer to serialize all of these objects to a stream which is then zipped and then encrypted. When the application is relaunched, it first loads from the cache (reversing the process above) and then deserializes the objects using the DataContractSerializer.ReadObject() method. All of this was working wonderfully under all scenarios until recently, with the entire "load from cache" path (decrypt/unzip/deserialize) taking hundreds of milliseconds at most. On some development machines but not all (all machines Windows 7), the deserialize process - that is, the call to ReadObject(stream) - takes several minutes and seems to lock up the entire machine, BUT ONLY WHEN RUNNING IN THE DEBUGGER in VS2008. Running the Debug configuration code outside the debugger has no problem. One thing that looks suspicious is that when you turn on stopping on exceptions, you can see that ReadObject() throws many, many System.FormatExceptions indicating that a number was not in the correct format. When I turn off "Just My Code", thousands of these get dumped to the screen. None go unhandled. These occur both on the read back from the cache AND on deserialization at the conclusion of a web service call to get the data from the WCF service. HOWEVER, these same exceptions occur on my laptop development machine, which does not experience the slowness at all. And FWIW, my laptop is really old and my desktop is a 4 core, 6GB RAM beast. Again, no problems unless running under the debugger in VS2008. Anyone else seen this? Any thoughts? Here is the bug report link: https://connect.microsoft.com/VisualStudio/feedback/details/539609/very-slow-performance-deserializing-using-datacontractserializer-in-a-silverlight-application-only-in-debugger

    Read the article

  • Using Windows PowerShell 1.0 or 2.0 to evaluate performance of executable files.

    - by Andry
    Hello! I am writing a simple script in Windows PowerShell in order to evaluate the performance of executable files. The important hypothesis is the following: I have an executable file; it can be an application written in any possible language (.NET or not, Visual Prolog, C++, C, anything that can be compiled as an .exe file). I want to profile it by getting execution times. I did this: Function Time-It { Param ([string]$ProgramPath, [string]$Arguments) $Watch = New-Object System.Diagnostics.Stopwatch $NsecPerTick = (1000 * 1000 * 1000) / [System.Diagnostics.Stopwatch]::Frequency Write-Output "Stopwatch created! NSecPerTick = $NsecPerTick" $Watch.Start() # Starts the timer [System.Diagnostics.Process]::Start($ProgramPath, $Arguments) $Watch.Stop() # Stops the timer # Collecting timings $Ticks = $Watch.ElapsedTicks $NSecs = $Watch.ElapsedTicks * $NsecPerTick Write-Output "Program executed: time is: $Nsecs ns ($Ticks ticks)" } This function uses a stopwatch. Well, the function accepts a program path, the stopwatch is started, the program runs and the stopwatch is then stopped. Problem: System.Diagnostics.Process.Start is asynchronous, so the next instruction (stopping the watch) is not executed when the application finishes; a new process is created... I need to stop the timer once the program ends. I thought about the Process class, thinking it held some info regarding the execution times... no luck... How to solve this?
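    The Start() overload used above returns the new Process object, so the usual fix is to keep that reference and block on WaitForExit() before stopping the stopwatch. A minimal sketch of the same pattern in C# (these are the same .NET APIs the script calls); the program path is an assumption:

        // Hypothetical sketch: time an external program by waiting for it to exit.
        using System;
        using System.Diagnostics;

        class TimeItSketch
        {
            static void Main()
            {
                var watch = Stopwatch.StartNew();

                using (Process p = Process.Start(@"C:\tools\target.exe", ""))
                {
                    p.WaitForExit();   // block until the child process has finished
                }

                watch.Stop();
                Console.WriteLine("Elapsed: {0} ms ({1} ticks)", watch.ElapsedMilliseconds, watch.ElapsedTicks);
            }
        }

    In the PowerShell function this amounts to capturing the return value of [System.Diagnostics.Process]::Start($ProgramPath, $Arguments) in a variable and calling its WaitForExit() method before $Watch.Stop(); the process object's StartTime and ExitTime properties are another way to get the elapsed time after the fact.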

    Read the article

  • C# performance methods of receiving data from a socket?

    - by Daniel
    Let's assume we have a simple internet socket, and it's going to send 10 megabytes (because I want to ignore memory issues) of random data through. Is there any performance difference, or a best-practice method, that one should use for receiving data? The final output data should be represented by a byte[]. Yes, I know writing an arbitrary amount of data to memory is bad, and if I was downloading a large file I wouldn't be doing it like this. But for argument's sake let's ignore that and assume it's a smallish amount of data. I also realise that the bottleneck here is probably not the memory management but rather the socket receiving. I just want to know what would be the most efficient method of receiving data. A few dodgy ways I can think of are: Have a List and a buffer; after the buffer is full, add it to the list and at the end call list.ToArray() to get the byte[]. Write the buffer to a memory stream; after it's complete, construct a byte[] of stream.Length and read it all into it in order to get the byte[] output. Is there a more efficient/better way of doing this?
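    Of the options listed, the MemoryStream variant is generally the cleaner one: the stream grows for you and ToArray() hands back a right-sized byte[] in one final copy, with no per-chunk list bookkeeping. A minimal sketch, assuming a connected Socket and that the sender closes the connection when it has sent everything:

        // Hypothetical sketch: read everything from a connected socket into a byte[].
        using System.IO;
        using System.Net.Sockets;

        static class SocketReader
        {
            public static byte[] ReceiveAll(Socket socket)
            {
                var buffer = new byte[8192];                 // reusable receive buffer
                using (var ms = new MemoryStream())
                {
                    int read;
                    while ((read = socket.Receive(buffer)) > 0)
                    {
                        ms.Write(buffer, 0, read);           // append only the bytes actually received
                    }
                    return ms.ToArray();                     // one final copy to a right-sized byte[]
                }
            }
        }

    If the total length is known up front (for example from a length prefix), allocating the byte[] once and filling it with successive Receive calls at increasing offsets avoids even that final copy.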

    Read the article

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all, First - this is not meant to be an ignorant 'which is better' war thread... Rather, I need help in making an architecture decision/argument to put forward to my boss. Skipping the details - I would simply love to know the results from anyone who has done performance comparisons of shell scripts vs. [insert general-purpose interpreted programming language here], such as C# or Java... Surprisingly, I have spent some time searching on Google and on here without finding any of this data. Has anyone ever done these comparisons, in different use cases: hitting a database in some number of loops doing different types of SQL queries (Oracle preferred, but MSSQL would do), such as any of the CRUD ops - and also not hitting a database, just a regular 50k-iteration loop doing different types of calculations, and things of that nature? In particular - for right now, I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions... Before you ask 'why not just write a quick test yourself?', the answer is: I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting - not to mention *nix as a whole... So asking the question on here from the more experienced guys would be greatly beneficial, not to mention time saving, as we are in a near perpetual deadline crunch as it is ;). Thanks so much in advance,

    Read the article

  • Why does the order of the loops affect performance when iterating over a 2D array? [closed]

    - by Mark
    Possible Duplicate: Which of these two for loops is more efficient in terms of time and cache performance Below are two programs that are almost identical except that I switched the i and j variables around. They both run in different amounts of time. Could someone explain why this happens? Version 1 #include <stdio.h> #include <stdlib.h> main () { int i,j; static int x[4000][4000]; for (i = 0; i < 4000; i++) { for (j = 0; j < 4000; j++) { x[j][i] = i + j; } } } Version 2 #include <stdio.h> #include <stdlib.h> main () { int i,j; static int x[4000][4000]; for (j = 0; j < 4000; j++) { for (i = 0; i < 4000; i++) { x[j][i] = i + j; } } }

    Read the article

  • C++ map performance - Linux (30 sec) vs Windows (30 mins) !!!

    - by sonofdelphi
    I need to process a list of files. The processing action should not be repeated for the same file. The code I am using for this is - using namespace std; vector<File*> gInputFileList; //Can contain duplicates, File has member sFilename map<string, File*> gProcessedFileList; //Using map to avoid linear search costs void processFile(File* pFile) { File* pProcessedFile = gProcessedFileList[pFile->sFilename]; if(pProcessedFile != NULL) return; //Already processed foo(pFile); //foo() is the action to do for each file gProcessedFileList[pFile->sFilename] = pFile; } void main() { size_t n= gInputFileList.size(); //Using array syntax (iterator syntax also gives identical performance) for(size_t i=0; i<n; i++){ processFile(gInputFileList[i]); } } The code works correctly, but... My problem is that when the input size is 1000, it takes 30 minutes - HALF AN HOUR - on Windows/Visual Studio 2008 Express (both Debug and Release builds). For the same input, it takes only 40 seconds to run on Linux/gcc! What could be the problem? The action foo() takes only a very short time to execute, when used separately. Should I be using something like vector::reserve for the map?

    Read the article

  • How costly performance-wise are these actions in iPhone objective-C?

    - by Alex Gosselin
    This is really a few questions in one. I'm wondering what the performance cost is for these things, as I haven't really been following any best practice for them. The answers may also be useful to other readers, if somebody knows these. (1) If I need the Core Data managed object context, is it bad to use #import "myAppDelegate.h" //farther down in the code: NSManagedObjectContext *context = [(myAppDelegate*)[[UIApplication sharedApplication] delegate] managedObjectContext]; as opposed to leaving the warning you get if you don't cast the delegate? (2) What is the cheapest way to hard-code a string? I have been using return @"myString"; on occasion in some functions where I need to pass it to a variety of places; is it better to do it this way: static NSString *str = @"myString"; return str; (3) How costly is it to subclass an object I wrote vs. making a new one, in general? (4) When I am using Core Data and navigating through a hierarchy of some sort, is it necessary to turn things back into faults somehow after I read some info from them? Or is this done automatically? Thanks for any help.

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses. Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}: Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate] but for the purposes of clarity I have included it here.) Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate] and we need to use those values to lookup [Customer].[CustomerId]. The logic will be: Lookup a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate] The conventional approach to this would be to use a full cached lookup but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup which enables us to change the SQL statement to use inequality operators: Let’s take a look at the dataflow: Notice these are all synchronous components. This approach works just fine however it does have the limitation that it has to issue a SQL statement against your lookup set for every row thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that’s not good. OK, that’s the obvious method. Let’s now look at a different way of achieving this using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts which help to illustrate what is going on here: Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join). I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8minutes long. View on Vimeo  For those that can’t be bothered watching the video I’ll tell you the results here. The dataflow that used the Lookup transform took 36 seconds whereas the dataflow that used the Merge Join took less than two seconds. An illustration in case it is needed: Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary. The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. 
    It is common to eliminate cursors by converting them to set-based operations and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow. I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet

    Read the article

  • ASP.NET Performance tip - combine multiple script files into one request with ScriptManager

    - by Jalpesh P. Vadgama
    We all need JavaScript for our web applications, and we store our JavaScript code in .js files. If we have more than one .js file, the browser will create a separate request for each one, which is a small overhead in terms of performance. In a large enterprise application this overhead adds up. The ASP.NET ScriptManager provides a feature to combine multiple JavaScript files into one request, but remember that this feature is only available with .NET Framework 3.5 SP1 or higher versions. Let's take a simple example. I have two JavaScript files, JScript1.js and JScript2.js, each containing a separate function. //JScript1.js function Task1() { alert('task1'); } Here is the other file. //JScript2.js function Task2() { alert('task2'); } Now I add script references to the ScriptManager and use these functions in my code like this. <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" > <Scripts> <asp:ScriptReference Path="~/JScript1.js" /> <asp:ScriptReference Path="~/JScript2.js" /> </Scripts> </asp:ScriptManager> <script language="javascript" type="text/javascript"> Task1(); Task2(); </script> </form> Now let's test in Firefox with the Lori plug-in, which shows how many requests are made. Here is the output. You can see there are 5 requests. Now let's do the same thing with the ASP.NET ScriptManager composite script feature, like the following. <form id="form1" runat="server"> <asp:ScriptManager ID="myScriptManager" runat="server" > <CompositeScript> <Scripts> <asp:ScriptReference Path="~/JScript1.js" /> <asp:ScriptReference Path="~/JScript2.js" /> </Scripts> </CompositeScript> </asp:ScriptManager> <script language="javascript" type="text/javascript"> Task1(); Task2(); </script> </form> Now let's run it and see how many requests there are. As you can see, we now have only 4 requests compared to 5 earlier, so the ScriptManager combined multiple scripts into one request. If you have lots of JavaScript files, you can cut page load time by combining them into a single request. Hope you liked it. Stay tuned for more!!!.. Happy programming.. Technorati Tags: ASP.NET,ScriptManager,Microsoft Ajax

    Read the article
