Search Results

Search found 37647 results on 1506 pages for 'sql performance'.

Page 679/1506 | < Previous Page | 675 676 677 678 679 680 681 682 683 684 685 686  | Next Page >

  • Is there any performance difference between Ubuntu Unity and Classic/Fallback?

    - by user48949
    Is there any difference between using Ubuntu Unity and Ubuntu Classic/Fallback? Just to be clear, I'm not talking about the Launcher or the Dash. Of course Ubuntu Classic/Fallback doesn't have the Launcher/Dash, but that's not the kind of difference I mean. I mean differences related to performance, features, functionality, compatibility, and so on. I'm asking because I've heard that Fallback Mode is somewhat "incomplete" compared to GNOME Shell or Ubuntu Unity, so I wanted to know whether or not that's true, because if it is, I don't think using Fallback Mode is worth it.

    Read the article

  • How to calculate the maximum number of requests a 128 MB VPS can handle?

    - by ifdion
    I am a newbie here, so please let me know if I'm using the wrong webmaster terms. I am currently setting up a VPS for a multisite WordPress installation. The VPS runs a Debian 6 LNMP stack, and DNS is handled by another service. Currently the VPS is running a non-multisite WordPress using roughly 83 MB of its 128 MB of RAM. As far as I know, performance is relative to the number of requests, not the number of sites in the multisite setup. The question: how do I calculate the maximum number of requests with that setup? If that information is not enough, what other factors do I need to know? Thank you in advance.
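    A rough back-of-the-envelope sketch (the per-worker figure below is an assumption for illustration, not from the question; measure your own stack): capacity is roughly the free RAM divided by the memory each request-handling worker needs.

        free RAM               ≈ 128 MB − 83 MB ≈ 45 MB
        RAM per PHP worker     ≈ 15 MB   (assumed; check with ps or top)
        max concurrent requests ≈ 45 MB / 15 MB ≈ 3

    Requests per second would then be roughly that concurrency divided by the average time one request takes to serve.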

    Read the article

  • ERROR_PROCEDURE Does Not Return a Schema Name

    "A recent blog entry I read reminded me again that I wanted to rant about an issue in SQL Server for quite some time now. SQL Server 2005 introduced the separation between user and schema. Though schemata already existed before SQL Server 2005, they really became usable with this version, imho. At the same time TRY...CATCH was a new way for structured error handling introduced. And so it finally became possible…" NEW! SQL Monitor 2.0Monitor SQL Server Central's servers withRed Gate's new SQL Monitor.No installation required. Find out more.

    Read the article

  • How Can I Disable Windows 7's Aero Performance Warnings?

    - by Jason Fitzpatrick
    You know your computer isn't cutting edge, but there's no need for Windows 7 to constantly remind you. Read on to see how you can disable its constant nagging to adjust your color scheme to improve performance. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Do I lose/gain performance by discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard will result in a performance drain. They said discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to run the fragment shader for both overlapping objects to check whether the one nearer to the camera gets discarded or not. For the 2D game I'm currently working on, I've disabled both depth-test and depth-write; I draw all objects sorted by their depth, and that's all, so there is no need for the GPU to do anything fancy. Now I'm wondering: is it still bad if I discard pixels in my fragment shader?

    Read the article

  • Why is Windows 7's overall performance better than Ubuntu 11.10's?

    - by user37805
    I have an i7 2600 processor, 8 GB of DDR3 RAM, and an nVidia GTX 570. Ubuntu takes 45-50 seconds to boot and 32-35 seconds to power off, while Windows 7 boots in 20-25 seconds and shuts down in 10 seconds, both with autologin enabled and in dual boot. Ubuntu is slow even with preload, shows no boot splash after installing the drivers, and didn't recognize my nVidia graphics card in Jockey GTK; I added the x-swat repository, and that didn't work either, so I installed the proprietary drivers through the terminal (nvidia-common, nvidia-settings) in order to have 3D acceleration. But it makes no difference to the speed. I also have a Pentium 4 PC, and on it Ubuntu 11.10 is way faster than Windows 7 or XP, also with an nVidia graphics card and preload. My boot script is at http://paste.ubuntu.com/924890/ (sorry, some words are in Spanish because my Ubuntu is in Spanish). I'm not using Wubi; Ubuntu has its own 64-bit partition. Also, Matlab 2011 has very low performance compared to the Windows version.

    Read the article

  • A hotfix is available that improves the performance of the CLR when a .NET Framework 3.5 SP1-based application runs in a virtualized environment

    981619 ... A hotfix is available that improves the performance of the CLR when a .NET Framework 3.5 SP1-based application runs in a virtualized environment.

    Read the article

  • How much of a performance enhancement does using an SSD as the main disk give?

    - by motumboe
    I'm planning to buy a new PC and I am thinking about using an SSD as the main disk. I'd also use a standard spinning disk and mount it at /home. To the people already using such a setup: does this bring a practical, noticeable improvement in performance? I think that access times, rather than transfer rates, are the more useful feature of SSDs. I would like to know whether they have a noticeable effect on a desktop installation of Ubuntu. Thanks in advance!

    Read the article

  • OVH opens a Cloud for developers who want to move to SaaS, and enters on-demand high-performance computing

    OVH opens a Cloud for developers who want to move to SaaS, and enters on-demand high-performance computing. "To support the software market's shift toward SaaS, OVH.com is evolving its Private Cloud offering based on VMware's vSphere" is how the northern-French hosting provider sums up its new offering in a single sentence. The offering's distinguishing feature is that it does not pool server resources, but instead dedicates each physical server to a single customer. The goal for software vendors and developers is to have an environment into which they can migrate their applications and offer them in SaaS mode. "Instead of installing the software…"

    Read the article

  • .NET Compact Framework 3.9: Visual Studio 2012 compatibility, performance gains and multi-core support for the embedded version of .NET

    .NET Compact Framework 3.9 will be compatible with Visual Studio 2012, with performance gains and multi-core support for the embedded version of .NET. Microsoft unveiled its roadmap for its embedded operating systems last week. The company plans to release Windows Embedded Compact 2013, its OS for light touch-screen terminals, in the second quarter of next year. That release will include the .NET Compact Framework (NETCF) 3.9, the next update of its development platform for embedded devices. As a reminder, the .NET Compact Framework is a version of the .NET Framework for embedded devices. It…

    Read the article

  • Compiled Linq & String.Contains

    - by sharru
    I'm using LINQ to SQL, and I use compiled LINQ queries for better performance. I have a Users table with an INT field called "LookingFor" that can have the following values: 1, 2, 3, 12, 123, 13, 23. I wrote a query to return users based on the "LookingFor" column; I want to return all users whose "LookingFor" value contains the query parameter (not only those equal to it). For example, if user.LookingFor = 12 and the query parameter is 1, this user should be selected.

        private static Func<NeDataContext, int, IQueryable<string>> MainSearchQuery =
            CompiledQuery.Compile((NeDataContext db, int lookingFor) =>
                from u in db.Users
                where (lookingFor == -1 ? true : u.LookingFor.ToString().Contains(lookingFor.ToString()))
                select u.username);

    This WORKS as a non-compiled LINQ query but throws an error when compiled. How do I fix it to work as a compiled query? I get this error: Only arguments that can be evaluated on the client are supported for the String.Contains method.

    Read the article

  • Method 'SingleOrDefault' not supported by Linq to Entities

    - by user300992
    I read other posts on similar problems with using SingleOrDefault in LINQ to Entities; some suggested using First(), and others suggested implementing Single() via an extension method. This code throws an exception:

        Movie movie = (from a in movies
                       where a.MovieID == "12345"
                       select a).SingleOrDefault();

    If I convert the object query to a list using .ToList(), SingleOrDefault() actually works perfectly without throwing any error. My question is: Is it bad to convert to a list? Is it going to be a performance issue for more complicated queries? What does it get translated to in SQL?

        Movie movie = (from a in movies.ToList()
                       where a.MovieID == "12345"
                       select a).SingleOrDefault();
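    For the last part, a hedged sketch of the difference (entity set and column names taken from the snippet; the exact SQL varies by provider). The first form is translated and filtered on the server, roughly like the query below; LINQ providers typically request TOP (2) so SingleOrDefault can detect a second match and throw:

        SELECT TOP (2) *
        FROM   Movies
        WHERE  MovieID = '12345';

    The .ToList() variant instead sends roughly SELECT * FROM Movies, then filters and applies SingleOrDefault() in client memory, which is why it stops throwing the "not supported" exception and why it can become a performance problem on large tables.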

    Read the article

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and refilled many times during the stored procedure; it is truncated because that is faster than DELETE. Do I get any performance increase by replacing this temporary table with a table variable, given that I then pay the penalty of using DELETE (TRUNCATE does not work on table variables)? While table variables live mainly in memory and are generally faster than temp tables, do I lose that benefit by having to DELETE rather than TRUNCATE?
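    A minimal sketch of the two patterns being compared (the table shape is invented for illustration):

        -- Temp table: TRUNCATE is allowed and is minimally logged.
        CREATE TABLE #work (id INT, val VARCHAR(50));
        INSERT INTO #work VALUES (1, 'a'), (2, 'b');
        TRUNCATE TABLE #work;

        -- Table variable: TRUNCATE is not allowed, so every cycle
        -- needs a fully logged row-by-row DELETE instead.
        DECLARE @work TABLE (id INT, val VARCHAR(50));
        INSERT INTO @work VALUES (1, 'a'), (2, 'b');
        DELETE FROM @work;

    With only 1-10 rows per cycle, the logging difference between TRUNCATE and DELETE is likely to be negligible either way.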

    Read the article

  • SqlDataReader / DbDataReader implementation question

    - by Jose
    Does anyone know how DbDataReaders actually work? We can use SqlDataReader as an example. Suppose you do the following:

        cmd.CommandText = "SELECT * FROM Customers";
        var rdr = cmd.ExecuteReader();
        while (rdr.Read())
        {
            // Do something
        }

    Does the data reader hold all of the rows in memory, or does it just grab one, and then, when Read is called, go to the db and grab the next one? It seems that bringing in just one at a time would be bad for performance, but bringing in all of them would make the call to ExecuteReader take a while. I know I'm the consumer of the object and it doesn't really matter how it's implemented, but I'm curious, and I'd probably spend a couple of hours in Reflector to get an idea of what it's doing, so I thought I'd ask someone who might know.

    Read the article

  • Entity Framework, full-text search and temporary tables

    - by markus
    I have a LINQ-to-Entities query builder that nests different kinds of Where clauses depending on a fairly complex search form. It works great so far. Now I need to use a SQL Server full-text search index in some of my queries. Is there any chance of adding the search term directly to the LINQ query and having the score available as a selectable property? If not, I could write a stored procedure to load a list of all row IDs matching the full-text search criteria, and then use a LINQ-to-Entities query to load the detail data and evaluate the other optional filter criteria in a loop, row by row. That would of course be a very bad idea performance-wise. Another option would be to use a stored procedure to insert all row IDs matching the full-text search into a temporary table, and then let the LINQ query join the temporary table. The question is: how do I join a temporary table in a LINQ query, given that it cannot be part of the entity model?
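    On the SQL side, the score the poster wants is what CONTAINSTABLE exposes as its RANK column; a hedged sketch, with table and column names invented for illustration:

        -- CONTAINSTABLE returns the matching keys plus a relevance score.
        SELECT d.Id, d.Title, ft.[RANK] AS Score
        FROM   dbo.Documents AS d
        JOIN   CONTAINSTABLE(dbo.Documents, (Title, Body), @searchTerm) AS ft
               ON ft.[KEY] = d.Id
        ORDER  BY ft.[RANK] DESC;

    Wrapping a query like this in a stored procedure or table-valued function that the entity model imports is one way to avoid the temp-table join entirely.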

    Read the article

  • Add comma-separated value of grouped rows to existing query

    - by Peter Lang
    I've got a view for reports that looks something like this:

        SELECT a.id, a.value1, a.value2, b.value1, /* (+50 more such columns) */
        FROM   a
        JOIN   b ON (b.id = a.b_id)
        JOIN   c ON (c.id = b.c_id)
        LEFT JOIN d ON (d.id = b.d_id)
        LEFT JOIN e ON (e.id = d.e_id)
        /* (+10 more inner/left joins) */

    It joins quite a few tables and returns lots of columns, but indexes are in place and performance is fine. Now I want to add another column to the result: when a.value3 IS NULL, it should show the comma-separated values from table y (outer-joined via the intersection table x and ordered by value); otherwise it should take a.value3. To comma-separate the grouped values I use Tom Kyte's stragg; I could use COLLECT later. Pseudo-code for the SELECT would look like this:

        SELECT xx.id,
               COALESCE(a.value3, stragg(xx.val)) AS value3
        FROM ( SELECT x.id, y.val
               FROM   x
               JOIN   y ON (y.id = x.y_id)
               WHERE  x.a_id = a.id
               ORDER  BY y.val ASC ) xx
        GROUP BY xx.id

    What is the best way to do it? Any tips?
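    If the database is Oracle 11gR2 or later, the built-in LISTAGG function covers this without a custom aggregate; a hedged sketch using the question's placeholder names:

        SELECT a.id,
               COALESCE(
                 a.value3,
                 ( SELECT LISTAGG(y.val, ',') WITHIN GROUP (ORDER BY y.val)
                   FROM   x
                   JOIN   y ON (y.id = x.y_id)
                   WHERE  x.a_id = a.id )
               ) AS value3
        FROM   a;

    The scalar subquery keeps the aggregation out of the main join, so the existing view body and its 50-plus columns stay untouched.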

    Read the article

  • Multiple rows with a single INSERT in SQL Server 2008

    - by Todd
    I am testing the speed of inserting multiple rows with a single INSERT statement. For example:

        INSERT INTO [MyTable] VALUES (5, 'dog'), (6, 'cat'), (3, 'fish')

    This is very fast until I pass 50 rows in a single statement, then the speed drops significantly: inserting 10000 rows in batches of 50 takes 0.9 seconds, while inserting 10000 rows in batches of 51 takes 5.7 seconds. My question has two parts: Why is there such a hard performance drop at 50? Can I rely on this behavior and code my application to never send batches larger than 50? My tests were done in C++ and ADO.

    Read the article

  • Optimizing encrypted column search

    - by Sung Meister
    I have a table called tblClient with an encrypted column called SSN. Due to company policy, we encrypted SSN using a symmetric key (chosen over an asymmetric key for performance reasons) protected by a password. Here is a partial LIKE search on SSN:

        DECLARE @SSN varchar(11)
        SET @SSN = '111-22-%'

        OPEN SYMMETRIC KEY SSN_KEY DECRYPTION BY PASSWORD = 'secret'

        SELECT Client_ID
        FROM   tblClient (nolock)
        WHERE  CONVERT(nvarchar(11), DECRYPTBYKEY(SSN)) LIKE @SSN

        CLOSE SYMMETRIC KEY SSN_KEY

    Before encryption, searching through 150,000 records took less than 1 second, but with decryption mixed in, the same search takes around 5 seconds. What strategy can I apply to try to optimize searching through the encrypted column?
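    One commonly suggested strategy, sketched here under stated assumptions (the pepper literal and helper column are invented; HASHBYTES with SHA2_256 needs SQL Server 2012 or later, older versions would use SHA1): persist a keyed hash of the searchable prefix next to the ciphertext while the plaintext is available at insert/update time, index it, and seek on the hash instead of decrypting every row.

        ALTER TABLE tblClient ADD SSNPrefixHash VARBINARY(32) NULL;

        -- At insert/update time, while @PlainSSN is still in hand:
        UPDATE tblClient
        SET    SSNPrefixHash = HASHBYTES('SHA2_256',
                                         'secret-pepper' + LEFT(@PlainSSN, 7))
        WHERE  Client_ID = @ClientID;

        CREATE INDEX IX_tblClient_SSNPrefixHash ON tblClient (SSNPrefixHash);

        -- The search becomes an index seek rather than a full decrypting scan:
        SELECT Client_ID
        FROM   tblClient
        WHERE  SSNPrefixHash = HASHBYTES('SHA2_256', 'secret-pepper' + '111-22-');

    The trade-offs: only fixed-length prefix searches are supported, and the hash leaks prefix equality, which the security policy would need to accept.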

    Read the article

  • DbDataReader with DbTransactions

    - by Gustavo Paulillo
    Is it the wrong approach, or a performance problem, to use a DbDataReader combined with DbTransactions? An example of the code:

        public DbDataReader ExecuteReader()
        {
            try
            {
                if (this._baseConnection.State == ConnectionState.Closed)
                    this._baseConnection.Open();
                if (this._baseCommand.Transaction != null)
                    return this._baseCommand.ExecuteReader();
                return this._baseCommand.ExecuteReader(CommandBehavior.CloseConnection);
            }
            catch (Exception excp)
            {
                if (this._baseCommand.Transaction != null)
                    this._baseCommand.Transaction.Rollback();
                this._baseCommand.CommandText = string.Empty;
                this._baseConnection.Close();
                throw new Exception(excp.Message);
            }
        }

    Several methods call this operation, sometimes after opening a DbTransaction; it relies on DbConnection and DbCommand. The real problem is that in the production environment (around 5,000 accesses a day) the ADO operations start throwing exceptions.

    Read the article

  • Tables with no Primary Key

    - by Matt Hamilton
    I have several tables whose only unique data is a uniqueidentifier (a GUID) column. Because GUIDs are non-sequential (and they're generated client-side, so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications of this approach are. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, since it means the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns, as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it OK to not have any clustered indexes if there are no sensible columns to index that way?
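    For concreteness, a sketch of the surrogate-key pattern being weighed (the table is invented; replication concerns aside):

        CREATE TABLE dbo.Client
        (
            ClientKey INT IDENTITY(1,1) NOT NULL,  -- meaningless, ever-increasing
            ClientId  UNIQUEIDENTIFIER NOT NULL,   -- the client-generated GUID
            Name      NVARCHAR(100) NOT NULL,
            CONSTRAINT PK_Client PRIMARY KEY CLUSTERED (ClientKey),
            CONSTRAINT UQ_Client_ClientId UNIQUE NONCLUSTERED (ClientId)
        );

    Clustering on the narrow, ever-increasing int avoids the page splits that random GUIDs cause in a clustered index, while the nonclustered unique constraint still gives fast lookups by GUID.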

    Read the article

  • Should I commit or rollback a transaction that creates a temp table, reads, then deletes it?

    - by Triynko
    To select information related to a list of hundreds of IDs, rather than build a huge SELECT statement I create a temp table, insert the IDs into it, join it with another table to select the rows matching the IDs, then delete the temp table. So this is essentially a read operation, with no permanent changes made to any persistent database tables. I do this in a transaction to ensure the temp table is deleted when I'm finished. My question is: what happens when I commit such a transaction vs. letting it roll back? Performance-wise, does the DB engine have to do more work to roll back the transaction than to commit it? Is there even a difference, since the only modifications are made to a temp table? There is a related question here, but it doesn't answer my specific case involving temp tables: http://stackoverflow.com/questions/309834/should-i-commit-or-rollback-a-read-transaction
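    A minimal sketch of the pattern in question (names invented for illustration):

        BEGIN TRANSACTION;

        CREATE TABLE #ids (id INT PRIMARY KEY);
        INSERT INTO #ids (id) VALUES (17), (42), (99);  -- hundreds in practice

        SELECT t.*
        FROM   dbo.SomeTable AS t
        JOIN   #ids AS i ON i.id = t.id;

        DROP TABLE #ids;
        COMMIT;  -- the question: is this cheaper or dearer than ROLLBACK here?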

    Read the article

  • iPhone: Best Method for Passing Data to and from a Server

    - by SAPNA
    I am developing an iPhone application that downloads data from a website. The website's database is a SQL database, and the site itself uses classic ASP. I am unsure which method would be best for transferring data to and from the server. Both JSON and SOAP require processing on the device, and I'm not sure how that affects performance or which of the two is best. What would be the best method in general for data transfer, given the server configuration we currently have? I'm very new to this field and a bit confused; any help would be appreciated.

    Read the article

  • MSSQL Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of names of people that currently has 35 million rows, and I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses LIKE queries, with the additional option of using SOUNDEX (though I'm not sure the latter is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still take too long to run). So, I have a few questions: Does a full-text index work well for proper names? If so, what is the best way to query proper names (CONTAINS, FREETEXT, etc.)? Is there some other system (like Lucene.Net) that would be better? Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm currently using MS SQL 2008.
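    For the second question, a hedged sketch of what full-text querying over name columns can look like (table and column names are invented, and a full-text index over both columns is assumed to exist):

        -- CONTAINS does word and prefix matching against the full-text index:
        SELECT PersonId, FirstName, LastName
        FROM   dbo.People
        WHERE  CONTAINS((FirstName, LastName), '"matt*"');

        -- FREETEXT adds stemming and thesaurus matching, which tends to help
        -- prose more than it helps proper names:
        SELECT PersonId, FirstName, LastName
        FROM   dbo.People
        WHERE  FREETEXT((FirstName, LastName), 'Matthew Talbert');

    Either predicate uses the full-text index rather than scanning, which is the main advantage over LIKE patterns that cannot use a B-tree index.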

    Read the article

  • MySqlDataAdapter or MySqlDataReader for bulk transfer?

    - by Jeff Meatball Yang
    I'm using the MySQL connector for .NET to copy data from MySQL servers to SQL Server 2008. Has anyone experienced better performance using one of the following versus the other? (1) a DataAdapter, calling Fill into a DataTable in chunks of 500; or (2) DataReader.Read into a DataTable in a loop of 500. I am then using SqlBulkCopy to load the 500 DataTable rows, and I keep looping until the MySQL record set is completely transferred. I am primarily concerned with using a reasonable amount of memory and completing in a short amount of time. Any help would be appreciated!

    Read the article

  • Oracle: '= ANY()' vs. 'IN ()'

    - by eidylon
    Hi all, I just stumbled upon something in Oracle SQL (not sure if it exists in others) that I am curious about. I am asking here as a wiki, since it's hard to search for symbols in Google... I just found that when checking a value against a set of values, you can do

        WHERE x = ANY (a, b, c)

    as opposed to the usual

        WHERE x IN (a, b, c)

    So I'm curious: what is the reasoning behind these two syntaxes? Is one standard and one some oddball Oracle syntax, or are they both standard? And is one preferred over the other for performance reasons? Just curious what anyone can tell me about that '= ANY' syntax. CheerZ!
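    A quick sketch of the two forms side by side (the classic EMP demo-schema names are used purely for illustration); with plain equality and a literal list they behave the same, but the quantified form also works with other comparison operators, which IN cannot express:

        -- Equivalent for a literal list:
        SELECT * FROM emp WHERE deptno = ANY (10, 20, 30);
        SELECT * FROM emp WHERE deptno IN (10, 20, 30);

        -- ANY/SOME/ALL earn their keep with operators other than '=':
        SELECT *
        FROM   emp
        WHERE  sal > ALL (SELECT sal FROM emp WHERE deptno = 10);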

    Read the article
