Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 530 of 1506

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from the MCTS Exam 70-536 (Microsoft .NET Framework Application Development Foundation) book. One of the problems is to create two classes, one generic and one object-based, that both do the same thing; a loop uses each class and iterates over a thousand times, and a timer measures the performance of both. There was another post at "C# generics question" asking the same thing, but no one replied. Basically, if in my code I run the generic class first, it takes longer to process; if I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original poster's code to save myself some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain the unusual results? Thanks, Risho. Here is the code:

        class Program
        {
            class Object_Sample
            {
                public Object_Sample()
                {
                    Console.WriteLine("Object_Sample Class");
                }

                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }

                public void display(Object a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            class Generics_Sample<T>
            {
                public Generics_Sample()
                {
                    Console.WriteLine("Generics_Sample Class");
                }

                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }

                public void display(T a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            static void Main(string[] args)
            {
                long ticks_initial, ticks_final, diff_generics, diff_object;
                Object_Sample OS = new Object_Sample();
                Generics_Sample<int> GS = new Generics_Sample<int>();

                // Generic sample
                ticks_initial = GS.getTicks();
                for (int i = 0; i < 50000; i++)
                {
                    GS.display(i);
                }
                ticks_final = GS.getTicks();
                diff_generics = ticks_final - ticks_initial;

                // Object sample
                ticks_initial = OS.getTicks();
                for (int j = 0; j < 50000; j++)
                {
                    OS.display(j);
                }
                ticks_final = OS.getTicks();
                diff_object = ticks_final - ticks_initial;

                Console.WriteLine("\nPerformance of Generics {0}", diff_generics);
                Console.WriteLine("Performance of Object {0}", diff_object);
                Console.ReadKey();
            }
        }
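
    The pattern described (whichever class runs first appears slower) is usually a warm-up artifact rather than a generics effect: the first loop pays the JIT and console-initialisation cost, DateTime.Now.Ticks is a coarse clock, and the Console.WriteLine inside each loop dominates the timing so completely that any boxing difference is lost in the noise. A minimal sketch of a fairer comparison, assuming the point of the exercise is boxing vs. no boxing (the Time helper and the loop bodies are illustrative, not the exam's prescribed code):

        using System;
        using System.Diagnostics;

        class Benchmark
        {
            // Run the body once as a warm-up so JIT cost is not charged to whichever
            // variant happens to run first, then time a second run with Stopwatch.
            static long Time(Action body)
            {
                body();
                Stopwatch sw = Stopwatch.StartNew();
                body();
                sw.Stop();
                return sw.ElapsedTicks;
            }

            static void Main()
            {
                const int n = 50000;
                long sum = 0;

                long objectTicks = Time(delegate
                {
                    for (int i = 0; i < n; i++)
                    {
                        object boxed = i;      // boxing on every iteration
                        sum += (int)boxed;     // unboxing
                    }
                });

                long genericTicks = Time(delegate
                {
                    for (int i = 0; i < n; i++)
                    {
                        sum += i;              // typed path: no boxing
                    }
                });

                Console.WriteLine("object path:  {0} ticks (checksum {1})", objectTicks, sum);
                Console.WriteLine("generic path: {0} ticks", genericTicks);
            }
        }

    Swapping the order of the two Time calls should now change the numbers far less than it does with the original code.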

    Read the article

  • SQL Server pivots? some way to set column names to values within a row

    - by ccsimpson3
    I am building a system of multiple trackers that share a lot of the same columns, so there is a table for the trackers, a table for the tracker columns, and a cross reference for which columns go with which tracker. When a user inserts a tracker row, the different column values are stored as multiple rows that share the same record id and hold both the value and the name of the particular column. I need to find a way to dynamically turn the stored column name into the column heading for its value, i.e.

        id | value | name
        ------------------
        23 | red   | color
        23 | fast  | speed

    needs to look like this:

        id | color | speed
        ------------------
        23 | red   | fast

    Any help is greatly appreciated, thank you.
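
    When the attribute names are known, the usual approaches are conditional aggregation (GROUP BY with MAX(CASE ...)) or SQL Server's PIVOT operator. A minimal sketch of both, assuming the rows above live in a table called TrackerValues (the table name is illustrative; the column names come from the example):

        -- Conditional aggregation: one output column per attribute name.
        SELECT
            id,
            MAX(CASE WHEN name = 'color' THEN value END) AS color,
            MAX(CASE WHEN name = 'speed' THEN value END) AS speed
        FROM TrackerValues
        GROUP BY id;

        -- The same result with the PIVOT operator.
        SELECT id, color, speed
        FROM (SELECT id, value, name FROM TrackerValues) AS src
        PIVOT (MAX(value) FOR name IN ([color], [speed])) AS p;

    If the set of attribute names is open-ended, the column list has to be built dynamically (e.g. with sp_executesql), but the shape of the query stays the same.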

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following will give better performance for a page that loads a large amount of JavaScript (jQuery + jQuery UI + various other JavaScript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail. A key point here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have the data cached, which is something YSlow assumes will not be the case for a large number of visitors). First, the standard approach recommended by tools such as YSlow is to concatenate the JavaScript, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think a key part of the reasoning is to improve performance for users without cached data. The system I currently have is something like this:

        * All JavaScript files are compressed and loaded at the bottom of the page.
        * All JavaScript files have far-future cache expiration dates, so they will remain (for most users) in the cache for a long time.
        * Pages only load the JavaScript files they require, rather than loading one monolithic file, most of which will not be required.

    Now, my understanding is that, if the cache expiration date for a JavaScript file has not been reached, the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches). In addition, not loading the unneeded JS means the browser doesn't have to interpret or execute all that additional code; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is the better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?
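
    For reference, the "far-future cache expiration dates" mentioned above are typically set at the web server rather than in the application. A minimal sketch of that configuration, assuming Apache with mod_expires enabled (the server software is an assumption; the question does not say what serves the static files):

        # Serve JS and CSS with a one-year expiry so repeat visitors hit the
        # browser cache instead of re-requesting each file.
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType application/javascript "access plus 1 year"
            ExpiresByType text/css "access plus 1 year"
        </IfModule>

    With expiries this long, some form of cache-busting (e.g. a version number in each file name or query string) is what forces clients to pick up changed files.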

    Read the article

  • How do I write an Oracle SQL query for this tricky question?

    - by atrueguy
    Here is the table data; the only column is named Ships:

        Ships
        ---------------
        Duke of north
        Prince of Wales
        Baltic

    Replace all characters between the first and the last space (excluding those spaces) with asterisks (*). The number of asterisks must equal the number of characters replaced.
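
    One way to express this in Oracle SQL, as a sketch (the table is assumed to be called Ships with a single column ship, since the question only shows the data; INSTR with a negative start position finds the last space):

        SELECT ship,
               CASE
                 WHEN INSTR(ship, ' ', -1) > INSTR(ship, ' ')
                 THEN SUBSTR(ship, 1, INSTR(ship, ' '))
                      || RPAD('*', INSTR(ship, ' ', -1) - INSTR(ship, ' ') - 1, '*')
                      || SUBSTR(ship, INSTR(ship, ' ', -1))
                 ELSE ship   -- fewer than two spaces: nothing to replace
               END AS masked_name
        FROM Ships;

    For 'Prince of Wales' this yields 'Prince ** Wales'; names with fewer than two spaces are returned unchanged.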

    Read the article

  • Integrated Windows authentication in IIS causing ADO.NET failure

    - by TrueWill
    We have a .NET 3.5 Web Service running under IIS. It must use identity impersonate="true" and Integrated Windows authentication in order to authenticate to third-party software. In addition, it connects to a SQL Server database using ADO.NET and SQL Server Authentication (specifying a fixed User ID and Password in the connection string). Everything worked fine until the database was moved to another SQL Server. Then the Web Service would throw the following exception: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) This error only occurs if identity impersonate is true in the Web.config. Again, the connection string hasn't changed and it specifies the user. I have tested the connection string and it works, both under the impersonated account and under the service account (and from both the remote machine and the server). What needs to be changed to get this to work with impersonation?

    Read the article

  • Can I make an identity field span multiple tables in SQL Server?

    - by johnnycakes
    Can I have an "identity" (unique, non-repeating) column span multiple tables? For example, let's say I have two tables, Authors and Books:

        Authors: AuthorID, AuthorName
        Books:   BookID, BookTitle

    The BookID column and the AuthorID column are identity columns. I want the identity values to span both tables, so if there is an AuthorID with a value of 123, then there cannot be a BookID with a value of 123, and vice versa. I hope that makes sense. Is this possible? Thanks. Why do I want to do this? I am writing an ASP.NET MVC app and am creating a comment section. Authors can have comments, and books can have comments. I want to be able to pass an entity ID (a book ID or an author ID) to an action and have the action pull up all the corresponding comments. The action won't care whether it's a book or an author or whatever. Sound reasonable?
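
    One way to get non-overlapping IDs across tables on SQL Server 2012 and later is to replace the per-table IDENTITY with a single shared SEQUENCE that feeds both primary keys; this is a sketch of that option, not the only approach (on older versions a separate key-allocation table plays the same role):

        CREATE SEQUENCE dbo.EntityId START WITH 1 INCREMENT BY 1;

        CREATE TABLE dbo.Authors (
            AuthorID   bigint NOT NULL
                CONSTRAINT DF_Authors_ID DEFAULT (NEXT VALUE FOR dbo.EntityId)
                PRIMARY KEY,
            AuthorName nvarchar(200) NOT NULL
        );

        CREATE TABLE dbo.Books (
            BookID    bigint NOT NULL
                CONSTRAINT DF_Books_ID DEFAULT (NEXT VALUE FOR dbo.EntityId)
                PRIMARY KEY,
            BookTitle nvarchar(200) NOT NULL
        );

    Because one sequence hands out every key, an AuthorID can never collide with a BookID, and a comments table can store a single entity-ID column that points at either kind of row.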

    Read the article

  • How to correctly do SQL UPDATE with weighted subselect?

    - by luminarious
    I am probably trying to accomplish too much in a single query, but I have an SQLite database with badly formatted recipes. This returns a sorted list of recipes with a relevance score added:

        SELECT *, sum(relevance)
        FROM (
            SELECT *, 1 AS relevance FROM recipes WHERE ingredients LIKE '%milk%'
            UNION ALL
            SELECT *, 1 AS relevance FROM recipes WHERE ingredients LIKE '%flour%'
            UNION ALL
            SELECT *, 1 AS relevance FROM recipes WHERE ingredients LIKE '%sugar%'
        ) results
        GROUP BY recipeID
        ORDER BY sum(relevance) DESC;

    But I'm now stuck with a special case where I need to write the relevance value to a field on the same row as the recipe. I figured something along these lines:

        UPDATE recipes SET relevance = (SELECT sum(relevance) ...)

    but I have not been able to get this working yet. I will keep trying, but meanwhile please let me know how you would approach this.
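
    One way to phrase it without the UNION ALL subquery is to score each row directly in a single UPDATE; a sketch, assuming the same recipes table and the three keywords from the query above:

        -- Each CASE term contributes 1 when the ingredient is present, mirroring
        -- the '1 AS relevance' rows in the original UNION ALL query.
        UPDATE recipes
        SET relevance =
              (CASE WHEN ingredients LIKE '%milk%'  THEN 1 ELSE 0 END)
            + (CASE WHEN ingredients LIKE '%flour%' THEN 1 ELSE 0 END)
            + (CASE WHEN ingredients LIKE '%sugar%' THEN 1 ELSE 0 END);

    A correlated subquery against the original SELECT would also work, but the CASE form avoids re-scanning the table once per keyword per row.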

    Read the article

  • O(log N) == O(1) - Why not?

    - by phoku
    Whenever I consider algorithms/data structures I tend to replace the log(N) parts with constants. Oh, I know log(N) diverges - but does it matter in real-world applications? log(infinity) < 100 for all practical purposes. I am really curious about real-world examples where this doesn't hold. To clarify: I understand O(f(N)); I am curious about real-world examples where the asymptotic behaviour matters more than the constants of the actual performance. If log(N) can be replaced by a constant, it can still be replaced by a constant in O(N log N). This question is for the sake of (a) entertainment and (b) gathering arguments to use if I run (again) into a controversy about the performance of a design.
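
    To make the "log(infinity) < 100" claim concrete, here is the arithmetic for some realistic input sizes (base-2 logarithms, rounded; changing the base only shifts these by a constant factor):

        $\log_2 10^{6} \approx 20, \quad \log_2 10^{9} \approx 30, \quad \log_2 10^{18} \approx 60, \quad \log_2 10^{30} \approx 100$

    so the log factor only reaches 100 for inputs of roughly 10^30 items, far beyond anything a real program will iterate over.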

    Read the article

  • How to use LINQ to SQL to create ranked search results?

    - by quakkels
    I am looking for a way to use LINQ to SQL to return ranked results based on keywords. I would like to take a keyword and search the table for it using .Contains(). The trick that I haven't been able to figure out is how to get a count of how many times that keyword appears, and then .OrderByDescending() based on that count. So if I had something like:

        string keyword = "SomeKeyword";
        IQueryable<Article> searchResults =
            from a in GenesisRepository.Article
            where a.Body.Contains(keyword)
            select a;

    what is the best way to order searchResults based on the number of times keyword appears in a.Body? Thanks for any help.
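
    One common trick is to derive the occurrence count from string lengths: strip the keyword out of the text and see how much shorter it gets. A sketch of that idea (whether String.Replace and Length translate to SQL depends on the LINQ provider, so treat this as an approach to verify rather than a guaranteed LINQ to SQL translation):

        string keyword = "SomeKeyword";

        // occurrences = (original length - length with keyword removed) / keyword length
        IQueryable<Article> searchResults =
            from a in GenesisRepository.Article
            where a.Body.Contains(keyword)
            orderby (a.Body.Length - a.Body.Replace(keyword, "").Length) / keyword.Length descending
            select a;

    If the provider refuses to translate it, the same ranking can be computed in the database with REPLACE/LEN in a view or computed column and then queried from LINQ.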

    Read the article

  • What is the best way to store categorical references in SQL tables?

    - by jlafay
    I want to store a wide array of categorical data in MySQL database tables. Let's say, for instance, that I want to store information on "widgets" and categorize their attributes in certain ways, e.g. a shape category. The widgets could be classified as round, square, triangular, spherical, etc. Should these categories be stored in a table so the application can reference them? Another possibility, I would imagine, would be to add a shape column to widgets containing a tinyint; that way my application could search shapes by that value and use a corresponding enum type that maps the meanings of the shape ints. Which would be best? Or is there another solution that I'm not thinking of yet?
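
    A sketch of the lookup-table option, which keeps the category names in the database while still giving the application a small integer to join on (all table and column names here are illustrative):

        -- Lookup table of shape categories; widgets reference it by id.
        CREATE TABLE shapes (
            shape_id   TINYINT UNSIGNED NOT NULL PRIMARY KEY,
            shape_name VARCHAR(32)      NOT NULL UNIQUE
        );

        CREATE TABLE widgets (
            widget_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            name      VARCHAR(100) NOT NULL,
            shape_id  TINYINT UNSIGNED NOT NULL,
            CONSTRAINT fk_widget_shape FOREIGN KEY (shape_id) REFERENCES shapes (shape_id)
        );

        INSERT INTO shapes (shape_id, shape_name)
        VALUES (1, 'round'), (2, 'square'), (3, 'triangular'), (4, 'spherical');

        -- Find all round widgets.
        SELECT w.widget_id, w.name
        FROM widgets w
        JOIN shapes s ON s.shape_id = w.shape_id
        WHERE s.shape_name = 'round';

    Compared with a hard-coded enum in the application, the lookup table lets new categories be added with an INSERT and keeps reports and ad-hoc queries readable.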

    Read the article

  • Can I use memcpy in C++ to copy classes that have no pointers or virtual functions

    - by Shane MacLaughlin
    Say I have a class, something like the following:

        class MyClass
        {
        public:
            MyClass();
            int a, b, c;
            double x, y, z;
        };

        #define PageSize 1000000
        MyClass Array1[PageSize], Array2[PageSize];

    If my class has no pointers or virtual methods, is it safe to use the following?

        memcpy(Array1, Array2, PageSize * sizeof(MyClass));

    The reason I ask is that I'm dealing with very large collections of paged data, as described here, where performance is critical, and memcpy offers significant performance advantages over iterative assignment. I suspect it should be OK, as the 'this' pointer is an implicit parameter rather than anything stored, but are there any other hidden nasties I should be aware of?
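
    In modern C++ the property that makes memcpy well-defined is trivial copyability, and the compiler can check it for you. A minimal sketch assuming C++11 or later (the constructor body is filled in only so the example links; a user-provided default constructor does not affect trivial copyability):

        #include <cstring>
        #include <type_traits>

        class MyClass {
        public:
            MyClass() : a(0), b(0), c(0), x(0), y(0), z(0) {}
            int a, b, c;
            double x, y, z;
        };

        // Let the compiler verify the assumption, so a later change (a virtual
        // function, a std::string member, ...) becomes a compile error instead
        // of silent memory corruption.
        static_assert(std::is_trivially_copyable<MyClass>::value,
                      "MyClass must stay trivially copyable for the memcpy below");

        const std::size_t PageSize = 1000000;
        static MyClass Array1[PageSize], Array2[PageSize];

        int main() {
            std::memcpy(Array1, Array2, PageSize * sizeof(MyClass));  // bulk copy of the page
            return 0;
        }

    std::copy over the same arrays typically compiles down to the same memmove/memcpy for trivially copyable element types, so it is a safer default if the class might change later.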

    Read the article

  • Multiple SQL Standard Instances on 4 Processor/32-core Server

    - by Theowood
    We have a large 4-processor/32-core server with 192 GB of memory available in the data center and over twenty small SQL Standard databases to consolidate. They are a mix of SQL 2012 and 2008 R2 for third-party apps. Is there any issue with simply installing two instances of SQL Standard on the server, one for 2012 and one for 2008 R2? Each instance will use up to 64 GB of the 192 GB and 16 cores. If we did this with Enterprise, the licensing would cost a fortune, and the Enterprise features are not needed.
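
    If both instances do share the box, the usual practice is to cap each instance's memory explicitly so they do not fight over the 192 GB; a sketch of that setting, run once per instance (the 64 GB figure comes from the question):

        -- Cap this instance at 64 GB so two instances plus the OS fit comfortably.
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 65536;
        RECONFIGURE;

    CPU affinity can be restricted in a similar way if the two instances end up competing for the same cores.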

    Read the article

  • Would it be simply better to use the system's functions rather than use the language?

    - by Nullw0rm
    There are many scenarios where I've questioned PHP's performance with some of its functions, and whether I should build a complex class to handle specific things using its seemingly slow tools. For example, handing complex regular expressions to sed and the processing to awk would seemingly be far faster than having PHP's regular expression functions (and its seemingly excessive helper functions) parse the input and eventually finish. If I were to do a lot of network tasks such as MX lookups/DIGging/retrieving simultaneously, I would rather pass them via system() and let the OS handle it itself. There are simply too many functions in PHP that are inefficient, result in slow pages, or can be handled more easily by the OS. What are your opinions? Do you think I should do the hard work with the OS, in its own/custom functions?

    Read the article

  • odp.net SQL query retrieve set of rows from two input arrays.

    - by Karl Trumstedt
    I have a table with a primary key consisting of two columns. I want to retrieve a set of rows based on two input arrays, each corresponding to one primary key column:

        select pkt1.id, pkt1.id2, ...
        from PrimaryKeyTable pkt1, table(:1) t1, table(:2) t2
        where pkt1.id = t1.column_value
          and pkt1.id2 = t2.column_value

    I then bind the values with two int[] in ODP.NET. This returns all the different combinations of my resulting rows, so if I am expecting 13 rows I receive 169 rows (13*13). The problem is that each value in t1 and t2 should be linked: value t1[4] should be used with t2[4], not with all the different values in t2. Using DISTINCT solves my problem, but I'm wondering if my approach is wrong. Does anyone have any pointers on the best way to solve this? One way might be to use a for-loop accessing each index in t1 and t2 sequentially, but I wonder what would be more efficient. Edit: actually, DISTINCT won't solve my problem; it only appeared to because of my input values (all values in t2 = 0).

    Read the article

  • Can I raise a system error in SQL Server from a stored procedure?

    - by Shantanu Gupta
    I am writing a stored procedure in which I am using a TRY...CATCH block. I have a unique column in a table, and when I try to insert a duplicate value it throws an exception with error number 2627. I want to do something like this:

        if exists (select * from tblABC where col1 = 'value')
            raiserror(2627)  -- raise the system error that would have been thrown
                             -- if I had used an insert query with a duplicate value

    Also, which method is better: just using the insert query, or checking for a duplicate value before insertion using a select query?
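
    For what it's worth, RAISERROR cannot re-raise an engine error number such as 2627 (user-raised message IDs must be 50000 or higher), so the usual pattern is either to let the INSERT fail and handle 2627 in the CATCH block, or to raise your own error after an existence check. A sketch of both, assuming the question's tblABC table:

        BEGIN TRY
            INSERT INTO tblABC (col1) VALUES ('value');
        END TRY
        BEGIN CATCH
            IF ERROR_NUMBER() = 2627   -- unique constraint violation
                RAISERROR('Duplicate value for col1.', 16, 1);
            ELSE
                RAISERROR('Insert failed.', 16, 1);
        END CATCH;

        -- Or: check first, then raise a user-defined error.
        IF EXISTS (SELECT 1 FROM tblABC WHERE col1 = 'value')
            RAISERROR('Duplicate value for col1.', 16, 1);

    On the second question, attempting the INSERT and catching 2627 is generally preferable: a separate SELECT check leaves a window between the check and the insert in which another session can insert the same value, while the unique constraint enforces it atomically.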

    Read the article

  • Get count from IQueryable<T> in LINQ to SQL?

    - by Pandiya Chendur
    The following code doesn't seem to get the correct count:

        var materials = consRepository.FindAllMaterials().AsQueryable();
        int count = materials.Count();

    Is that the way to do it? Here is my repository method which fetches the records:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            where m.Is_Deleted == 0
                            select new MaterialsObj()
                            {
                                Id = Convert.ToInt64(m.Mat_id),
                                Mat_Name = m.Mat_Name,
                                Mes_Name = Mt.Name,
                            };
            return materials;
        }
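
    One way to narrow down where the count goes wrong is to compare the count the database computes with the count of the rows that actually materialise, and to look at the generated SQL; a sketch of that kind of check (it assumes the same repository, and that db is a LINQ to SQL DataContext):

        IQueryable<MaterialsObj> materials = consRepository.FindAllMaterials();

        int dbCount     = materials.Count();          // translated to SELECT COUNT(*) by the provider
        int memoryCount = materials.ToList().Count;   // rows actually returned and materialised

        Console.WriteLine("database count: {0}, materialised count: {1}", dbCount, memoryCount);

        // With LINQ to SQL, the generated SQL can also be inspected by setting,
        // before the query runs:  db.Log = Console.Out;

    If the two counts differ, the usual suspects are the inner join (rows with no matching MeasurementType are dropped) or the Is_Deleted filter, rather than Count() itself.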

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID.

    Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client), 2.5 GB RAM allocated. The disk is set up the way CentOS offered to set it up, namely as a volume inside an LVM volume group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware Tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I had configured things more or less reasonably. I ran the following command as root in a shell on the VM:

        for i in 1 2 3 4 5 6 7 8 9 10; do
            dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    i.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ...but after 7-8 of these, it then prints

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+) it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm confident it is not the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all.

    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: When things go "bad", iostat -d -m -x 1 shows this:

    Read the article
