Search Results

Search found 37647 results on 1506 pages for 'sql performance'.


  • Understanding the MySQL query cache and when to use it?

    - by Jeff
    On our current MySQL server the query cache is enabled: Qcache_hits: 31913, Qcache_inserts: 50959, Qcache_lowmem_prunes: 9320, Qcache_not_cached: 209320, Qcache_queries_in_cache: 986, Com_update: 0, Com_delete: 0. I do not fully understand the query cache - I am currently reading about it and trying to understand it. Our database holds inventory data, customer data, employee data, sales data and so forth, and any given query is very rarely run more than once. About the only case where a query might be run twice is viewing the same sales information twice. Basically everything in our system changes constantly - it is always being updated, deleted or inserted - and off the top of my head I can't picture users running the same query twice within a week. Do I even need to have the query cache enabled? Am I right that the inserts mean 51k entries have been added, but only 986 of those are still stored? Would it be a good idea to reset the cache, watch it for a week, and check how many of the cached queries are actually accessed, to see if it is returning any benefit? Any help/guidance on this is appreciated, thanks
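
    A quick way to gauge this directly on the server (a sketch: these are standard MySQL 5.x statements, but the thresholds for "worth keeping" are a judgment call):

        SHOW VARIABLES LIKE 'query_cache%';  -- size and type configuration
        SHOW STATUS LIKE 'Qcache%';          -- hits, inserts, lowmem prunes, free memory
        SHOW STATUS LIKE 'Com_select';       -- total SELECTs, for computing the hit rate
        -- Hit rate is roughly Qcache_hits / (Qcache_hits + Com_select); with hits
        -- (31913) dwarfed by Qcache_not_cached (209320), the cache is doing little work.
        FLUSH QUERY CACHE;  -- defragments the cache without emptying it
        RESET QUERY CACHE;  -- empties it, handy before a week of observation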

    Read the article

  • SQL statement returns zero results [closed]

    - by foodil
    I am trying to select the rows where 1) list.IsPublic = 1 and 2) user_list.UserID = 'aaa' AND user_list.ListID = list.ListID. I need 1) + 2). There is already a matching row, but this statement cannot find it - is there any problem?

    List table:

        ListID  ListName  Creator  IsRemindSub  IsRemindUnSub  IsPublic  CreateDate  LastModified  Reminder
        1       test2     aaa      0            0              1         2012-03-09  NULL          NULL

    user_list table (no rows):

        UserID  ListID  UserRights

    My test version:

        SELECT l.*, ul.*
        FROM list l
        INNER JOIN user_list ul ON ul.ListID = l.ListID
        WHERE l.IsPublic = 1
          AND ul.UserID = 'aaa'

    This returns zero results. How can I fix that?
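
    One possible fix, assuming the intent is to return lists that are public or that the user belongs to (a sketch against the tables above): the INNER JOIN discards every list that has no matching user_list row, so an empty user_list table guarantees zero results. A LEFT JOIN with the user filter moved into the ON clause keeps the public list:

        SELECT l.*, ul.UserRights
        FROM list AS l
        LEFT JOIN user_list AS ul
               ON ul.ListID = l.ListID
              AND ul.UserID = 'aaa'
        WHERE l.IsPublic = 1
           OR ul.UserID IS NOT NULL;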

    Read the article

  • Recommended auditing method for MS SQL to track changes made to a specific table by a specific user?

    - by scape
    What is the best method for tracking changes, or logging the queries run against a table, by a specific user working in Management Studio? I'm using 2008 R2 Express Edition and want to track a single user who logs in through Management Studio and runs queries that make changes manually. I want to see what query was run, and thus determine what was changed and how. I am not interested in restoring the information. I considered Change Tracking, but read that it is not ideal for auditing, and I am also unsure how to read its data. I then considered the bulk-logged recovery option on the database, but then I would have to manage log files that may grow huge, as the database is used constantly by a web app. Is there a more concise method to do what I want?
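
    One option is trigger-based auditing; a minimal sketch follows (table and column names here are hypothetical, and SQL Server Audit itself is not available in Express Edition). A trigger captures what changed and who changed it, though not the literal query text - capturing the statement itself needs a server-side trace:

        CREATE TABLE dbo.AuditLog (
            LogID     int IDENTITY(1,1) PRIMARY KEY,
            LoginName sysname      NOT NULL DEFAULT SUSER_SNAME(),
            EventTime datetime     NOT NULL DEFAULT GETUTCDATE(),
            Action    nvarchar(10) NOT NULL,
            NewValues xml          NULL
        );
        GO
        CREATE TRIGGER dbo.trg_MyTable_Audit
        ON dbo.MyTable                         -- hypothetical audited table
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT dbo.AuditLog (Action, NewValues)
            SELECT CASE
                       WHEN EXISTS (SELECT 1 FROM inserted)
                        AND EXISTS (SELECT 1 FROM deleted) THEN N'UPDATE'
                       WHEN EXISTS (SELECT 1 FROM inserted) THEN N'INSERT'
                       ELSE N'DELETE'
                   END,
                   (SELECT * FROM inserted FOR XML AUTO, TYPE);  -- new values as XML
        END;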

    Read the article

  • SQL Server Semantic Search to Find Text in External Files

    Sometimes it is necessary to search for specific content inside documents stored in a SQL Server database. Is it possible to do this in SQL Server? Can I run T-SQL queries and find content inside Microsoft Word files? Yes, with SQL Server 2012 you can now do a semantic search.
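
    As a taste of the feature, a hedged sketch (it assumes a documents table with a full-text and semantic index on its content column; dbo.Documents, DocumentContent and the key value 1 are all hypothetical):

        -- Top key phrases that statistical semantics found in one document
        SELECT TOP (10) skt.keyphrase, skt.score
        FROM SEMANTICKEYPHRASETABLE(dbo.Documents, DocumentContent, 1) AS skt
        ORDER BY skt.score DESC;

        -- Documents semantically similar to document 1
        SELECT TOP (10) sst.matched_document_key, sst.score
        FROM SEMANTICSIMILARITYTABLE(dbo.Documents, DocumentContent, 1) AS sst
        ORDER BY sst.score DESC;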

    Read the article

  • Mimicking Network Databases in SQL

    Unlike the hierarchical database model, which created a tree structure in which to store data, the network model formed a generalized 'graph' structure that describes the relationships between the nodes. Nowadays, the relational model is used to solve the problems for which the network model was created, but the old 'network' solutions are still being implemented by programmers, even when they are less effective.
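
    Where a true graph traversal is needed, the relational idiom is a recursive common table expression rather than record-at-a-time navigation. A hedged sketch over a hypothetical, acyclic edge table:

        -- Find every part reachable from part 1 in a parts-explosion graph
        WITH Reachable (PartID, Depth) AS (
            SELECT e.ChildID, 1
            FROM PartEdges AS e
            WHERE e.ParentID = 1            -- start node
            UNION ALL
            SELECT e.ChildID, r.Depth + 1
            FROM PartEdges AS e
            JOIN Reachable AS r ON e.ParentID = r.PartID
        )
        SELECT PartID, Depth FROM Reachable;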

    Read the article

  • Can’t connect to SQL Server 2008 - looks like a Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: "A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)". Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set only Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on this computer - perhaps this makes a difference in some way (as some of the discussions about the "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and have also had someone log onto the machine with an admin account to restart the SQL Server service. Still no luck.

    Read the article

  • Helloooooooooooo, is anybody here?

    - by The Official Microsoft IIS Site
    Well, to start off with, we got a great response to our PHP survey and ... but wait, first things first, let us start with introductions. I recently joined the team as the Program Manager for this driver. I've worked on several technologies in my career and am very excited to be a part of this great team, and I look forward to working with you all, the PHP developer community! PHP and its ecosystem are new to me, so I've been ramping up over the last few months as well as helping the team as best I can...(read more)

    Read the article

  • SQL Saturday #263 Manila, Philippines

    SQL Saturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. This event will be held Nov 9, 2013. Admittance is free; all costs are covered by donations and sponsorships.

    Read the article

  • MS SQL Server 15MM rows, simple COUNT query. 15+ seconds?

    - by john
    We took over a website from another company after a client decided to switch. We have a table that grows by about 25k records a day and is currently at 15MM records. The table looks something like:

        id         (PK, int, not null)
        member_id  (int, not null)
        another_id (int, not null)
        date       (datetime, not null)

    SELECT COUNT(id) FROM tbl can take up to 15 seconds. A simple inner join on 'another_id' takes over 30 seconds. I can't imagine why this is taking so long. Any advice? SQL Server 2005 Express.
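
    Two hedged alternatives worth trying (the second reads partition metadata, so it is approximate but near-instant; it exists from SQL Server 2005 on and needs VIEW DATABASE STATE permission):

        -- COUNT(*) lets the optimizer pick the narrowest index to scan;
        -- on a NOT NULL PK it returns the same number as COUNT(id)
        SELECT COUNT(*) FROM tbl;

        -- Approximate row count straight from metadata
        SELECT SUM(ps.row_count) AS approx_rows
        FROM sys.dm_db_partition_stats AS ps
        WHERE ps.object_id = OBJECT_ID('tbl')
          AND ps.index_id IN (0, 1);   -- heap or clustered index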

    Read the article

  • SQL Server stored procedures - update a column based on a variable's name?

    - by ClarkeyBoy
    Hi, I have a data-driven site with many stored procedures. What I want to eventually be able to do is say something like:

        For Each @variable In sproc inputs
            UPDATE @TableName SET @variable.ToString = @variable
        Next

    I would like it to accept any number of arguments. It would basically loop through all of the inputs and update the column named after each variable with that variable's value - for example, the column "Name" would be updated with the value of @Name. I would like to have just one stored procedure for updating and one for creating. To do this, though, I need to be able to convert the actual name of a variable, not its value, to a string.

    Question 1: Is it possible to do this in T-SQL, and if so how?

    Question 2: Are there any major drawbacks to something like this (such as performance or CPU usage)? I know that if a value is not valid it will prevent the update involving that variable and any subsequent ones, but all the data is validated in the VB.NET code anyway, so it will always be valid on submission to the database, and I will ensure that only variables whose columns exist can be submitted.

    Many thanks in advance, Regards, Richard Clarke

    Edit: I know about building SQL strings and the risk of SQL injection attacks - I studied this a bit in my dissertation a few weeks ago. Basically, the website uses an object-oriented architecture. There are many classes - for example Product - which have many "Attributes" (I created my own Attribute class with properties such as DataField, Name and Value, where DataField is used to get or update data, Name is displayed on the administration front end when creating or updating a Product, and Value, which may be displayed on the customer front end, is set by the administrator). DataField is the field I will be using in the "UPDATE Blah SET @Field = @Value". I know this is probably confusing, but it is really complicated to explain - I have a good understanding of the entire system in my head, I just can't put it into words easily.

    The structure is set up so that no user can change the value of DataField or Name, but they can change Value. I think that if I use dynamic parameterised SQL strings there will therefore be no risk of SQL injection attacks. I mean, basically loop through all the attributes so that it ends up like:

        UPDATE Products SET [Name] = @Name, Description = @Description, Display = @Display

    then loop through all the attributes again and add the parameter values. This will have the same effect as using stored procedures, right? I don't mind adding to the page load time, since this will mainly affect the administration front end and will only marginally affect the customer front end.
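
    On question 1: T-SQL cannot iterate over its own parameters and their values at runtime, but the usual workaround is a generic procedure that takes the column name as data and builds parameterised dynamic SQL, called once per column. A hedged sketch (the procedure name and its catalog check are illustrative, not a drop-in):

        CREATE PROCEDURE dbo.UpdateSingleColumn
            @TableName  sysname,
            @ColumnName sysname,
            @ID         int,
            @Value      nvarchar(max)
        AS
        BEGIN
            -- Refuse anything not present in the catalog: the names are then
            -- safe to splice in via QUOTENAME, and the value stays a parameter
            IF NOT EXISTS (SELECT 1 FROM sys.columns
                           WHERE object_id = OBJECT_ID(@TableName)
                             AND name = @ColumnName)
                RETURN;

            DECLARE @sql nvarchar(max) =
                N'UPDATE ' + QUOTENAME(@TableName) +
                N' SET ' + QUOTENAME(@ColumnName) + N' = @Value' +
                N' WHERE ID = @ID;';

            EXEC sp_executesql @sql,
                 N'@Value nvarchar(max), @ID int',
                 @Value = @Value, @ID = @ID;
        END;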

    Read the article

  • Performance implications of Synchronous Sockets vs Asynchronous Sockets

    - by Akash Kava
    We are trying to build an SMTP server to receive mail notifications from various clients over the internet. Since each communication is long-lived and needs to log everything, doing this the asynchronous way is a little challenging, and with the Socket's asynchronous methods we are not sure how flow of control and error handling happen. Previously we wrote a lot of server/client apps, but we always used synchronous sockets - the reason being that the sessions are long and each session also has a lot of local data to manage, message parsing, and so on. Does anyone have any experience of the real performance differences between these two methods? Async calls use the ThreadPool, which we have seen just die for no reason many times, and we then fail to restart the thread pool. For a request-response protocol like HTTP, async sockets make sense, but SMTP/IMAP and similar protocols are longer-lived, with interleaved messages plus a server-side state machine, so async methods are really complicated to program. However, if anyone can share performance figures for the two kinds of sockets, it would be helpful.

    Read the article

  • Visual C# 2008 Express connection to SQL Server 2008 Express problem

    - by Phil
    Hi guys, I have a problem with Visual C# 2008 Express (SP1) connecting to SQL Server 2008 Express. The "Add Connection" window (wherever initiated) doesn't list the existing SQL Server, and offers no option for SQL Server except the Compact edition. Note that I've got VWD 2008 Express (SP1) on the same machine, which shows the window correctly (with SQL Server listed), and SQL Server Management Studio works fine with the server as well. I've seen other similar posts and followed some of the advice - reinstalled VC#, checked that the services run OK, etc. - but with no success with VC# so far. Again, on the same machine VWD shows the dialog with the SQL Server option correctly, but VC# shows only three options in the "Change data source" dialog: 1. Microsoft Access Database File (OLE DB), 2. Microsoft SQL Server Compact 3.5, 3. Microsoft SQL Server Database File. Any ideas? Thanks in advance, Phil

    Read the article

  • WCF performance improvements

    - by Burt
    I am developing a WPF application that talks to a server via WCF services over the internet. After profiling the application, I noticed a lot of time is being taken up by creating the appropriate WCF client proxy and making the call to the server. The code on the server is optimised and doesn't take any time to run, yet I am still seeing a 1.5 second delay between a service being invoked and it returning to the client. A few points to give a bit of background: I am using ASP.NET membership for security; I will eventually hook into the same server-side code through a website; I would eventually like to have offline support in the application. I really need to nail the performance early, though, as if the app takes a couple of seconds to come back it is too long for what I am trying to do. Can anyone suggest performance tips that will help me, please?

    Read the article

  • Why does PostgreSQL query performance drop over time, but get restored when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running a PostgreSQL table with a continuous rate of updates, deletes and inserts that over time (a few days) sees significant query degradation. If we delete and recreate the index, query performance is restored. We are using out-of-the-box settings.

    The table in our test starts out empty and grows to half a million rows. It has fairly large rows (lots of text fields). Our search is based on an index, not the primary key (I've confirmed the index is being used, at least under normal conditions). The table is being used as a persistent store for a single process, running PostgreSQL on Windows with a Java client.

    I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work, and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuilds to run at certain times, but I suspect the locking period for such an action would cause our queries to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code.

    Additional information - table and index:

        CREATE TABLE icl_contacts (
            id bigint NOT NULL,
            campaignfqname character varying(255) NOT NULL,
            currentstate character(16) NOT NULL,
            xmlscheduledtime character(23) NOT NULL,
            ... 25 or so other fields, most of them fixed or varying character fields ...
            CONSTRAINT icl_contacts_pkey PRIMARY KEY (id)
        ) WITH (OIDS=FALSE);
        ALTER TABLE icl_contacts OWNER TO postgres;
        CREATE INDEX icl_contacts_idx ON icl_contacts
            USING btree (xmlscheduledtime, currentstate, campaignfqname);

    Analyze:

        Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1)
          -> Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1)
               Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text))

    And, yes, I am aware there are a variety of things we could do to normalize and improve the design of this table, and some of those options may be available to us. My focus in this question is on understanding how PostgreSQL manages the index and query over time (understanding why, not just fixing it). If it were to be done over or significantly refactored, there would be a lot of changes.
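
    This pattern is consistent with index bloat from the steady update/delete churn. One hedged mitigation that avoids the blocking of a plain REINDEX: build a replacement index concurrently, then swap it in (CREATE INDEX CONCURRENTLY has existed since PostgreSQL 8.2; it takes longer to build but holds no long exclusive lock; the _new name is just a placeholder):

        CREATE INDEX CONCURRENTLY icl_contacts_idx_new
            ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);
        DROP INDEX icl_contacts_idx;
        ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx;

    Making autovacuum more aggressive on this table is the other usual lever, since out-of-the-box settings often cannot keep up with this write rate.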

    Read the article

  • MySQL index cardinality - performance vs storage efficiency

    - by Sean
    Say you have a MySQL 5.0 MyISAM table with 100 million rows, with one index (other than the primary key) on two integer columns. From my admittedly poor understanding of B-tree structure, I believe that a lower cardinality means the storage efficiency of the index is better, because there are fewer parent nodes, whereas a higher cardinality means less efficient storage but faster read performance, because the engine has to navigate through fewer branches to get to whatever data it is looking for to narrow down the rows for the query. (Note - by "low" vs "high", I don't mean e.g. 1 million vs 99 million for a 100-million-row table. I mean more like 90 million vs 95 million.) Is my understanding correct? Related question: how does cardinality affect write performance?
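
    For what it's worth, the cardinality MySQL reports is itself an estimate you can inspect and refresh (a sketch; my_table is a placeholder name):

        SHOW INDEX FROM my_table;   -- the Cardinality column is an estimate
        ANALYZE TABLE my_table;     -- rebuilds the key distribution statistics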

    Read the article

  • How to maximize http.sys file upload performance

    - by anelson
    I'm building a tool that transfers very large streaming data sets (possibly on the order of terabytes in a single stream; routinely in the tens of gigabytes) from one server to another. The client portion of the tool will read blocks from the source disk and send them over the network. The server side will read these blocks off the network and write them to a file on the server disk. Right now I'm trying to decide which transport to use. Options are raw TCP and HTTP. I really, REALLY want to be able to use HTTP. The HttpListener (or WCF, if I want to go that route) makes it easy to plug in to the HTTP Server API (http.sys), and I can get things like authentication and SSL for free. The problem right now is performance. I wrote a simple test harness that sends 128K blocks of NULL bytes using the BeginWrite/EndWrite async I/O idiom, with async BeginRead/EndRead on the server side. I've modified this test harness so I can do this with either HTTP PUT operations via HttpWebRequest/HttpListener, or plain old socket writes using TcpClient/TcpListener. To rule out issues with network cards or network pathways, both the client and server are on one machine and communicate over localhost. On my 12-core Windows 2008 R2 test server, the TCP version of this test harness can push bytes at 450MB/s with minimal CPU usage. On the same box, the HTTP version of the test harness runs between 130MB/s and 200MB/s, depending upon how I tweak it. In both cases CPU usage is low, and the vast majority of what CPU usage there is is kernel time, so I'm pretty sure my usage of C# and the .NET runtime is not the bottleneck. The box has two 6-core Xeon X5650 processors, 24GB of single-ranked DDR3 RAM, and is used exclusively by me for my own performance testing. I already know about HTTP client tweaks like ServicePointManager.MaxServicePointIdleTime, ServicePointManager.DefaultConnectionLimit, ServicePointManager.Expect100Continue, and HttpWebRequest.AllowWriteStreamBuffering. Does anyone have any ideas for how I can get http.sys performance beyond 200MB/s? Has anyone seen it perform this well in any environment?

    Read the article

  • WPF: Tips for better performance

    - by viky
    I am working on a WPF application in which I am working with a TreeView. Each node represents a different datatype; these datatypes have properties defined and use data templates to show their properties. My application reads from XML and creates the tree accordingly. My problem is that it is too slow to load. I want to know the tricks that will help me improve the performance of my (or any) WPF application. Edit: Please provide some tips for better performance in WPF! I am using the WPF Profiler but it is not much help to me.

    Read the article

  • Performance improvement: alternative to the array_flip function

    - by Rachel
    Is there any way I can avoid using array_flip, to optimize performance? I am doing a select statement from the database, preparing the query, executing it, and storing the data as an associative array in $resultCollection. Then I have an array $op, and for each element in $resultCollection I store its outputId in $op[], as evident from the code below. So my question is: how can I achieve a similar effect without using array_flip, as I want to improve performance?

        $resultCollection = $statement->fetchAll(PDO::FETCH_ASSOC);
        $op = array();
        // Loop through the result collection, storing each outputId in the op array.
        foreach ($resultCollection as $output) {
            $op[] = $output['outputId'];
        }
        // Here the op array has keys 0, 1, 2... and the ids as values (which I am interested in).
        // Flip the op array to get the ids as keys.
        $op = array_flip($op);
        foreach ($ft as $Id => $Off) {
            $ft[$Id]['is_set'] = isset($op[$Id]);
        }

    Read the article

  • Declarative JDOQL vs single-string JDOQL: performance

    - by DrDro
    When querying with JDOQL, is there a performance difference between using the declarative version and the single-string version? Example from the JDOQL docs:

        // Declarative JDOQL:
        Query q = pm.newQuery(org.jpox.Person.class,
                "lastName == \"Jones\" && age < age_limit");
        q.declareParameters("double age_limit");
        List results = (List) q.execute(20.0);

        // Single-string JDOQL:
        Query q = pm.newQuery("SELECT FROM org.jpox.Person " +
                "WHERE lastName == \"Jones\" && age < :age_limit " +
                "PARAMETERS double age_limit");
        List results = (List) q.execute(20.0);

    Other than performance, are there any reasons why one is better to use than the other, or is it just about which one we feel more comfortable with?

    Read the article

  • LINQ to SQL performance in high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I had got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff), so after hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to be slower, but it surprisingly turned out to be pretty fast, primarily because I had always forgotten to close my connections when using data readers. Now I don't have to worry about that. But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data at the beginning, works with it, and updates it. The updates are primarily increments and decrements of values. I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @id

    That worked with no problems, obviously. But with LINQ to SQL the data is read at the beginning, moved into the class, changed, and then saved:

        Stats.RegisteredUsers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "increase it by 1". But if the value has already been increased by another request (which happens on my site all the time), then LINQ will say "oops, this value is already 100,001 - whatever, I'll throw an exception". You can change this behaviour so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time - the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code in classic ADO.NET. So my question is: how can you solve this problem with LINQ?

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, hyper-threaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow; I could type complete sentences and watch the VM catch up.

    My company has training laptops that we provide for our classes: Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54GHz and 8 GB of memory, and the experience on these laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment.

    Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Benchmarks of the two GPUs suggest that the FX 2700M is twice as fast as the 3100M (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html):

        3100M = 111th (E6510)
        FX 2700M = 47th (Precision M6400)
        Radeon HD 5870 = 8th (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium.

    So, after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops on which we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • SQL Server backup and moving the backup file: how to cope with file permissions?

    - by Stefan Steinegger
    With our product we ship a simple backup tool for the SQL Server database. This tool should just make a full backup, and restore, to and from any folder. Of course, the user (usually an administrator) needs permission to write to the target folder. To avoid the problem of not being able to back up to a network drive, I write the backup to a temp file in the SQL Server backup directory, then move it to the target folder. This requires permission to delete the temporary file from the SQL Server's backup folder. Restore is the same, in the other direction. This seemed to work fine until someone tested it on Vista, where the user does not have write access to the backup folder by default. There are many ways to solve this, but none of them seems really nice. One solution would be to find another folder for the temporary file; both the SQL Server service account and the administrator performing the backup would need read and write permissions on it. Is there such a directory? Any other ideas? Thanks a lot.
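
    The two-step approach described above, as a hedged T-SQL sketch (the database name and instance path are hypothetical; the calling application then moves the file to the target folder):

        BACKUP DATABASE [OurProductDb]
        TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Backup\OurProductDb_full.bak'
        WITH INIT, COPY_ONLY, CHECKSUM;  -- COPY_ONLY keeps the backup chain untouched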

    Read the article
