Search Results

Search found 51747 results on 2070 pages for 'oracle database in memory'.

  • Logic in the db for maintaining a points system relationship?

    - by MarcusBooster
    I'm making a little web-based game and need to determine where to put logic that checks the integrity of some underlying data in the SQL database. Each user keeps track of points assigned to him, and points are awarded by various tasks. I keep a record of each task transaction to make sure they're not repeated, and to keep track of the value of the task at the time of completion, since an individual award level may fluctuate over time. My schema looks like this so far:

        create table player (
            player_ID serial primary key,
            player_Points int not null default 0
        );

        create table task (
            task_ID serial primary key,
            task_PointsAwarded int not null
        );

        create table task_list (
            player_ID int references player(player_ID),
            task_ID int references task(task_ID),
            when_completed timestamp default current_timestamp,
            point_value int not null, -- not an fk because the task value may change later
            constraint pk_player_task_id primary key (player_ID, task_ID)
        );

    So, player.player_Points should be the total of all his cumulative task points in task_list. Now where do I put the logic to enforce this? Should I do away with player.player_Points altogether and run a query every time I want to know the total score? That seems wasteful, since I'll be doing that query a lot over the course of a game. Or should I put a trigger on task_list that automatically updates player.player_Points? Is that too much logic to have in the database, and should I just maintain this relationship in the application? Thanks.
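
    A minimal sketch of the trigger option the question raises, assuming PostgreSQL (the serial columns suggest it); the function and trigger names are made up for illustration:

        -- keep player.player_Points in sync as task_list rows are inserted
        create or replace function sync_player_points() returns trigger as $$
        begin
            update player
               set player_Points = player_Points + new.point_value
             where player_ID = new.player_ID;
            return new;
        end;
        $$ language plpgsql;

        create trigger trg_task_list_points
            after insert on task_list
            for each row execute procedure sync_player_points();

    A delete/update trigger would need the mirror-image adjustment if completed tasks can ever be revoked.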

  • find the difference between two very large lists

    - by user157195
    I have two large lists (each could be a hundred million items); the source of each list can be either a database table or a flat file. Both lists are of comparable size, and both are unsorted. I need to find the difference between them. So I have 3 scenarios:

    1. List1 is a database table (assume each row simply has one item (key) that is a string), List2 is a large file.
    2. Both lists are from 2 db tables.
    3. Both lists are from two files.

    In case 2, I plan to use:

        select a.item from MyTable a
        where a.item not in (select b.item from MyTable b)

    This is clearly inefficient; is there a better way? Another approach: I plan to sort each list and then walk down both of them to find the diff. If a list comes from a file, I have to read it into a db table first, then use db sorting to output the list. Is the run-time complexity still O(n log n) in db sorting? Either approach is a pain and seems like it would be very slow when the lists involved have hundreds of millions of items. Any suggestions?
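
    A sketch of the usual alternatives to NOT IN on large sets, assuming two tables list_a and list_b (hypothetical names), each with an item column:

        -- anti-join; typically plans better than NOT IN on big inputs
        select a.item
          from list_a a
         where not exists (select 1 from list_b b where b.item = a.item);

        -- or a set operator, where the engine supports it
        -- (Oracle: MINUS; standard SQL: EXCEPT)
        select item from list_a
        minus
        select item from list_b;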

  • How to store and synchronize a big list of strings

    - by Joel
    I have a large database table in SQL Express on Windows, with a particular field of interest, 'code'. I have an Apache web server with MySQL on Linux. The web application on the Linux box needs access to the list of all codes. The only thing it will use the list for is checking for the existence of a given code.

    Having the Linux server call out to the Windows server is impractical, as the Windows server is behind a NAT'ed office internet connection and may not always be accessible. I have set it up so the Windows server pushes the list of codes to the web server by means of a simple HTTP POST request. However, at this point I have not implemented the storage of the codes on the Linux box.

    Should I store them in a MySQL table with a single field, 'code'? Then I get fast indexed lookups O(1); however, I think synchronization will be an issue - given an updated list of codes, pushed from the Windows box, how would I optimally synchronize the list with the database? TRUNCATE, followed by INSERT?

    Should I instead store them in a flat file? Then I have O(n) lookup time rather than O(1), plus an extra constant-time overhead, as I will be processing the file in Ruby. However, synchronization is easy - simply replace the file.
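
    A minimal sketch of a swap-style synchronization in MySQL, assuming the codes live in a table named codes (hypothetical); RENAME TABLE is atomic, so readers never see a half-loaded list:

        -- load the freshly pushed list into a staging table
        CREATE TABLE codes_new LIKE codes;
        -- ... bulk INSERT (or LOAD DATA) the posted codes into codes_new ...

        -- swap the tables atomically, then discard the old list
        RENAME TABLE codes TO codes_old, codes_new TO codes;
        DROP TABLE codes_old;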

  • Logback DBAppender url

    - by pedro mendes
    I'm trying to use Logback's DBAppender. My logback.xml has the following appender:

        <appender name="DatabaseAppender" class="ch.qos.logback.classic.db.DBAppender">
            <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
                <driverClass>oracle.jdbc.OracleDriver</driverClass>
                <url>jdbc:oracle:thin:@URL:PORT:SERVICEID</url>
                <user>USER</user>
                <password>PASS</password>
            </connectionSource>
        </appender>

    The url given works with other Java classes in the same project, but it fails with Logback, giving the following error:

        java.sql.SQLException: ORA-00904: "ARG3": invalid identifier

    where ARG3 is the <url>jdbc:oracle:thin:@URL:PORT:SERVICEID</url> element.

  • Clob as param for PL/SQL Java Stored Procedure

    - by JDS
    I have a Java stored procedure that takes in a clob representing a chunk of JavaScript and minifies it. The structure of the function calling the JSP is as follows:

        function MIN_JS(pcl_js in clob) return clob
        as language java
        name 'JSMin.min(oracle.sql.CLOB) return oracle.sql.CLOB';

    In the actual JSP, I have the following:

        import oracle.sql.CLOB;

        public class JSMin {
            ...
            public static CLOB min(CLOB js) {
                ...
            }
        }

    The problem I'm having is that whenever I pass a clob to MIN_JS, it is always interpreted as null inside the JSP. I've checked the clob before calling MIN_JS and it definitely has contents. Any ideas as to what I'm missing? Any help is greatly appreciated.

  • Inserting large volumes of data in SQL Server 2005

    - by Manjoor
    We have an application (written in C#) that stores live stock market prices in the database (SQL Server 2005). It inserts about 1 million records in a single day. Now we are adding some more market segments to it, and the number of records would double (2 million/day). Currently the average record insertion rate is about 50 per second; the maximum is 450 and the minimum is 0.

    To check certain conditions I have used Service Broker (an asynchronous trigger) on my price table. It is running fine at this time (about 35% CPU utilization). Now I am planning to create an in-memory dataset of current stock prices, on which we would like to do some simple calculations.

    I want to know the different views of members on this. Please share your way of dealing with such a situation.

  • Range partition skip check

    - by user289429
    We have a large amount of data range-partitioned on a year value in Oracle, so each partition contains data for only one year. When we write a query targeting a specific year, Oracle fetches the information from that partition but still checks that the year is what we specified. Since the year column is not part of the index, Oracle fetches the year from the table and compares it. We have seen that any time the query has to fetch table data it gets too slow. Can we somehow stop Oracle from comparing the year values, since we know for sure that each partition contains information for only one year?
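
    Two hedged sketches of the usual workarounds, with hypothetical names (a sales table range-partitioned on sale_year):

        -- address the partition directly, so no per-row year filter is needed
        select *
          from sales partition (p_2009)
         where customer_id = :cust;

        -- or include the partition key in a local index, so the filter is
        -- answered from the index without touching the table rows
        create index ix_sales_year_cust on sales (sale_year, customer_id) local;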

  • Metadata requirements for developers

    - by Paul James
    I'm tasked with providing a list of metadata requirements our data warehouse developers might need. This is not the business metadata (nice descriptions, etc.), but rather the data required for change management (also known as impact assessment), data lineage, and so on. I've seen the article "Meta Meta Data Data" by Ralph Kimball, but as I'm not the first person to do this, I'm throwing it to the SO community. The actual question is this: what metadata do data warehouse developers require to design, develop, and manage change in ETL routines? PS: I'm trying to keep the answer platform-agnostic, but for context this is an Oracle database with PL/SQL and DataStage.

  • Pros and cons of sorting data in DB?

    - by Roman
    Let's assume I have a table with a field of type VARCHAR, and I need to get data from that table sorted alphabetically by that field. What is the best way (for performance): add an ORDER BY to the SQL query, or sort the data after it's already been fetched? I'm using Java (with Hibernate), but I can't say anything about the DB engine; it could be any popular relational database (MySQL, MS SQL Server, Oracle, HSQLDB, or any other). The number of records in the table can vary greatly, but let's assume there are 5k records.

  • What programming language is used to design Google's algorithm?

    - by AKN
    It is known that Google has the best search and indexing algorithms. They also have good relevancy, and they are quick to surface the latest results. All that's fine. What programming languages (C, C++, Java, etc.) and databases (Oracle, MySQL, etc.) have they used in achieving this, given that they have to manipulate large volumes of data quickly and effectively? Though I'm not looking for their in-depth architecture (in case that violates their company policies), an overview of all such things could be useful. Anybody, please add your valuable suggestions and insight on this?

  • How to index a date column with null values?

    - by Heinz Z.
    How should I index a date column when some rows have null values? We have to select rows between a date range as well as rows with null dates. We use Oracle 9.2 and higher.

    Options I found:

    1. Using a bitmap index on the date column.
    2. Using an index on the date column and an index on a state field whose value is 1 when the date is null.
    3. Using an index on the date column together with another, guaranteed not-null column.

    My thoughts on the options:

    On 1: the table has too many different values to use a bitmap index.
    On 2: I'd have to add a field just for this purpose and change the query whenever I want to retrieve the null-date rows.
    On 3: looks tricky to add a field to an index when it is not really needed.

    What is the best practice for this case? Thanks in advance.

    Some articles I have read: "Oracle Date Index" and "When does Oracle index null column values?"
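
    A minimal sketch of option 3's usual form, with hypothetical names (an orders table with a nullable order_date): indexing the date together with a constant means rows with a null date still get an index entry, because the composite key is never entirely null:

        create index ix_orders_date on orders (order_date, 0);

        -- both of these can now use the index
        select * from orders where order_date between :d1 and :d2;
        select * from orders where order_date is null;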

  • Silverlight Isolated Storage and loading big files

    - by Thomas Joulin
    In a Windows Phone 7 application, I would like to query a big XML file (a list of cities) stored using isolated storage. If I do it this way, will the whole file (~5 MB) be loaded into memory? If so, what other solutions do I have?

    Edit: More details. I want to use AutoCompleteBox (http://www.jeff.wilcox.name/2008/10/introducing-autocompletebox/), but instead of using a web service (this is fixed data, no need to be online), I want to query a file/database/isolated storage... I have a fixed list of cities. I said in the comments it's 40k, but it finally seems closer to 1k rows.

  • Implementing Transparent Persistence

    - by Jules
    Transparent persistence allows you to use regular objects instead of a database: the objects are automatically read from and written to disk. Examples of such systems are GemStone and Rucksack (for Common Lisp). A simplified version of what they do: if you access foo.bar and bar is not in memory, it gets loaded from disk; if you do foo.bar = baz, then the foo object gets updated on disk. Most systems also have some form of transactions, and they may have support for sharing objects across programs and even across a network. My question is: what are the different techniques for implementing these kinds of systems, and what are the trade-offs between the implementation approaches?

  • Handling null values with PowerShell dates

    - by Tim Ferrill
    I'm working on a module to pull data from Oracle into a PowerShell data table, so I can automate some analysis and perform various actions based on the results. Everything seems to be working, and I'm casting columns into specific types based on the column type in Oracle. The problem I'm having has to do with null dates: I can't seem to find a good way to capture that a date column in Oracle has a null value. Is there any way to cast a [datetime] as null or empty?

  • PHP reporting error. How to verify uniqueness against the DB?

    - by iamfab
    Error reporting gives:

        Notice: Undefined variable: random_chars in wamp\www\php_sandbox\idgen.php on line 21
        Call Stack:
        #   Time     Memory   Function   Location
        1   0.0045   678928   {main}( )  ..\idgen.php:0

        GPB7446

    How do I fix this error? I'm using this code as an automatic unique ID generator. Also, how do I connect to the DB to verify the code is truly unique before allowing it to be assigned to a user creating a new account? Thanks.

        <?php
        $characters = array("A","B","E","F","G","H","J","K","M","N","P","R","S","T","W","X","Y","Z");
        $keys = array();
        while (count($keys) < 3) {
            $x = mt_rand(0, count($characters) - 1);
            if (!in_array($x, $keys)) {
                $keys[] = $x;
            }
        }
        $random_chars = '';  // initialize before appending; the notice on line 21 comes from appending to an unset variable
        foreach ($keys as $key) {
            $random_chars .= $characters[$key];
        }
        $randNum = rand(2327, 9987);
        $randLet = rand(2327, 9987);
        echo $random_chars . $randNum;
        ?>
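
    A minimal uniqueness probe on the database side, assuming a MySQL users table with a user_code column (both names hypothetical); a unique constraint makes the check race-free, with the SELECT as a cheap pre-check:

        ALTER TABLE users ADD CONSTRAINT uq_user_code UNIQUE (user_code);

        -- regenerate the id and retry while this returns a row
        SELECT 1 FROM users WHERE user_code = 'GPB7446' LIMIT 1;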

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger-than-normal dataset (2M+ records). However, Rails seems to take a very long time performing operations on such a dataset. Operations like

        Dataset.all.each do |data|
          ...
        end

    take a very long time to complete (I assume this is because it can't fit all the items into memory at once, right?). Does anyone have any strategies for handling this situation? I know SQL would probably speed up the process, but I'm looking to use the Rails environment, as I can do many more complicated things to the data than I can with just SQL statements.

  • node.js storing gamestate, how?

    - by expressnoob
    I'm writing a game in JavaScript, and to prevent cheating, I'm having the game be played on the server (it's a board game, like a more complicated checkers). Since the game is fairly complex, I need to store the game state in order to validate client actions. Is it possible to store the game state in memory? Is that smart? Should I do that? If so, how? I don't know how that would work. I can also store it in Redis, and that sort of thing is pretty familiar to me and requires no explanation. But if I do store it in Redis, the problem is that on every single move the game would need to get the data from Redis and interpret and parse it in order to recreate the game state from scratch. But since moves happen very frequently, this seems very stupid to me. What should I do?

  • Efficient way to access a mapping of identifiers in Python

    - by sixbelo
    I am writing an app to do a file conversion, and part of that is replacing old account numbers with new account numbers. Right now I have a CSV file mapping the old and new account numbers, with around 30K records. I read this in and store it as a dict, and when writing the new file I grab the new account from the dict by key. My question is: what is the best way to do this if the CSV file increases to 100K+ records? Would it be more efficient to convert the account mappings from CSV to a SQLite database rather than storing them as a dict in memory?
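
    A sketch of the SQLite alternative the question proposes, with hypothetical table and column names; the primary key gives indexed lookups without holding the whole mapping in memory:

        CREATE TABLE account_map (
            old_acct TEXT PRIMARY KEY,
            new_acct TEXT NOT NULL
        );

        -- one lookup per record being converted
        SELECT new_acct FROM account_map WHERE old_acct = :old;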

  • INSERT INTO ... SELECT ... vs dumping/loading a file in MySQL

    - by Daniel Huckstep
    What are the implications of using an INSERT INTO foo ... SELECT FROM bar JOIN baz ... style insert statement, versus using the same SELECT statement to dump (bar, baz) to a file and then inserting into foo by loading the file? In my messing around, I haven't seen a huge difference. I would assume the former would use more memory, but the machine this runs on has 8GB of RAM, and I never see it go past half used. Are there any huge (or long-term) performance implications that I'm not seeing? Advantages/disadvantages of either?
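
    The two paths side by side, as a sketch with a made-up join condition:

        -- single-statement path
        insert into foo (col1, col2)
        select b.col1, z.col2
          from bar b
          join baz z on z.bar_id = b.id;

        -- dump/load path
        select b.col1, z.col2
          into outfile '/tmp/foo.csv'
          from bar b
          join baz z on z.bar_id = b.id;

        load data infile '/tmp/foo.csv' into table foo;

    Note that INTO OUTFILE writes on the database server's filesystem and requires the FILE privilege.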

  • How to prevent the symbol "&" from being replaced by "&amp;"

    - by tonsils
    Hi, hoping someone could please let me know how to prevent the symbol "&" from being replaced by "&amp;" within my URL, specifically within JavaScript. Just to expand on the requirement: I am getting my URL from an Oracle database table, which I then use within Oracle Application Express to set the src attribute of an iframe to this URL. FYI, the URL stored in the Oracle table is actually stored correctly, i.e.

        http://mydomain.com/xml/getInfo?s=pvalue1&f=mydir/Summary.xml

    What appears in my URL when trying to pass it over into the iframe src using JavaScript is:

        http://mydomain.com/xml/getInfo?s=pvalue1&amp;f=mydir/Summary.xml

    which basically returns "page cannot be found". Hope this clarifies my issue. Thanks.

  • A step-up from TiddlyWiki that is still 100% portable?

    - by Smandoli
    TiddlyWiki is a great idea, brilliantly implemented. I'm using it as a portable personal "knowledge manager," and these are the prize virtues:

    - It travels on my USB flash memory stick and runs on any computer, regardless of operating system
    - No software installation is needed on the computer (TiddlyWiki merely uses the Internet browser)
    - No Internet connection is needed
    - In terms of data retrieval functionality, it mimics a relational database (use of tags and internal links)

    Let's say I've got a million words of prose in 4,000 tiddlers (posts). I'm still testing, but it looks like TiddlyWiki gets very slow. Is there an app like TiddlyWiki that keeps all the virtues I listed above and allows more storage?

    NOTE: Separation of content and presentation would be ideal. It's nifty that TiddlyWiki has everything in a single HTML document, but it's unhelpful in many ways. I don't care if a directory of assorted docs is needed (SQLite, XML?), as long as it's functionally self-contained.

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to 3 named individuals. I thought I could do the following:

    1. Create a local Windows group on the database server and add the named individuals to it.
    2. Create a Windows login in SQL Server mapped to the local Windows group.
    3. Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name.

    When I try to do step 3, I get the following error:

        Msg 15353, Level 16, State 1, Line 1
        An entity of type database cannot be owned by a role, a group, an approle,
        or by principals mapped to certificates or asymmetric keys.

    I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful.

    Can anyone tell me: why does this restriction exist? It seems very arbitrary. More importantly, can I accomplish my requirement some other way?

    Other info that might be pertinent:

    - The server is fully up to date with service packs and hotfixes.
    - All objects in the database are owned by the "dbo" schema, and it's not feasible to change that.
    - The database is running in compatibility level 80, and it's not feasible to change that to 90 yet.
    - I am free to make any other changes (within reason, depending on what they are).
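
    A commonly suggested workaround, sketched with hypothetical names; it grants the group full access through the db_owner role rather than making it the schema owner (unqualified object names still fall back to dbo during name resolution):

        USE MyDatabase;
        CREATE USER [MYDOMAIN\DbUsers] FOR LOGIN [MYDOMAIN\DbUsers];
        EXEC sp_addrolemember 'db_owner', 'MYDOMAIN\DbUsers';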

  • SQL Server 2008 Bring Database Online trying to open a file from a drive that doesn't exist

    - by Nai
    This is the error I am facing:

        TITLE: Microsoft.SqlServer.Smo
        Set offline failed for Database 'Go3D_Retailer'
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch.
        (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf".
        Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".
        Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
        ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120)

    Background to this error: I've been trying to move my log-shipping destination database to another physical server for analysis purposes. Because I do not have domain keys and Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online.

    I don't have an E drive on my server, and I have no idea why it's trying to open a file from the E drive. I have over 100GB left on my hard disk, so it's definitely not a space issue. This sounds like a bug... Any ideas?
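
    A hedged sketch of the usual repair: find the logical name of the stray file, repoint it at a path that exists, then bring the database online. The logical name and target path below are guesses; sys.master_files shows the real ones:

        select name, physical_name
          from sys.master_files
         where database_id = db_id('Go3D_Retailer');

        alter database Go3D_Retailer
            modify file (name = ftrow_Go3D_catalog,
                         filename = 'D:\SQLData\ftrow_Go3D_catalog.ndf');

        alter database Go3D_Retailer set online;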

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to "SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database", which unfortunately seems to have gone unsolved.

    My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved: I gathered from the first question that it was a single machine running both the IIS and SQL instances, whereas here the application trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The ConnectionString being used in the web.config for the application is:

        data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi;

    And the Application Pool is set to NetworkService for Identity. So, in the web app, I get the following error:

        Cannot open database "MyDatabase" requested by the login. The login failed.
        Login failed for user 'MyDomain\WebServerMachineName$'

    In the SQL Server logs I see:

        Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open
        the explicitly specified database. [CLIENT: Web.Server.IP.Address]

    Running this bit of SQL against the database in question:

        USE [MyDatabase]
        GO

        SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role]
        FROM sys.database_principals SDP
        INNER JOIN sys.database_role_members SDRM ON SDP.principal_id = SDRM.member_principal_id
        INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id

    gets me this result:

        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DDLADMIN
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAREADER
        MyDomain\WebServerMachineName$   WINDOWS_USER   DB_DATAWRITER

    which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, concentrating on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a 'global' application plan.

    Recently I was asked to write a blog post about the wait statistics in SQL Server, and since I had been thinking about writing it for quite some time now, here it is.

    It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say - barely. The reason for this is that SQL Server is always a part of a bigger system - there are always other players in the game: whether it is a client application, a web service, some other kind of data import/export process, and so on. This means that SQL Server, aside from its internal waits, also depends on external waits and settings. SQL Server needs to have an interface in order to communicate with the surrounding clients over the network, and for this communication it uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article; also, review the information about TDS (tabular data stream).

    As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance.

    Let's dive into an example: say we have a web server hosting a web application which uses data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: "My data is coming very slowly."

    Now, let's move on to a more exciting example: imagine a similar setup - one web server and one database server - where the application is not using any stored procedure calls; instead, for every user request the application sends an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) To make things worse, let's say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute at peak times.

    Here is what happens: in order for this query to reach the SQL Server, it has to be broken into a number of network packets (according to the packet size settings) and travel over the network. On the other side, our SQL Server network card receives the packets and passes them to the network layer, the packets get assembled, and eventually SQL Server starts processing the query - parsing, algebrizing, generating the query execution plan, and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our database engine. There will certainly be some processing overhead as well - until the database engine deals with the 80kb query and its 20 subqueries.

    The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let's say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means the data is converted to packets (depending on the network packet size settings) and has to reach the application server. There will also be waits; however, this time you will see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up.

    Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server, and back to the client. There are three categories: Query Profile Statistics, Network Statistics and Time Statistics.

    - Number of server roundtrips - a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements separated by the 'GO' command, there will be three different roundtrips.
    - TDS packets sent from the client - TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. This is the number of packets sent from the client; in case the request is large, it may need more buffers, and eventually might even need more server roundtrips.
    - TDS packets received from the server - the TDS packets sent by the server to the client during the query execution.
    - Bytes sent from client - the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since reusable code (which already exists as an object in the SQL Server) is called only as a procedure name plus parameters, which minimizes the network pressure.
    - Bytes received from server - the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server.
    - Client processing time - the amount of time, in milliseconds, between the first received response packet and the last received response packet.
    - Wait time on server replies - the time, in milliseconds, between the last request packet which left the client and the first response packet which came back from the server to the client.
    - Total execution time - the sum of client processing time and wait time on server replies (the SQL Server internal processing time).

    Keep in mind that a query with a large 'wait time on server replies' means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to evaluate before they proceed (for example, sort and top operators). However, a query with a very short 'wait time on server replies' means the query was able to return the first row fast. A long 'client processing time' does not necessarily imply that the client spent a lot of time processing and the server was blocked waiting on the client; it can simply mean that the server continued to return rows from the result, and this is how long it took until the very last row was returned.

    The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have only seen cases where the application developers and the database developers are on their own and do not ask questions about the other party's world. I would recommend using the Client Statistics tool during new development to track the performance of queries, and also to find a synchronous way of utilizing resources between the client and the server.

    Here is another example: think about a similar setup as above, but add another server to the game. Let's say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think there is something wrong with our data. Anyway, I don't mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of 'the big picture'. I wrote a blog post a while back on this topic, and if you are interested, you can read it here.

    And finally, here are some guidelines for monitoring the network performance and improving it:

    - Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: 'why?'.
    - Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec, and so on.
    - Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up - are they standalone, are they 'teamed', what are the settings (full duplex and so on).
    - Find some time to read a bit about networking.

    In this short blog post I hope I have turned your attention to 'the big picture' and the fact that there are other factors affecting our SQL Server aside from its internal workings. As further reading I would highly recommend the Wait Stats series on this blog, and I would also recommend you have the coffee-break conversation with your network admin as soon as possible.

    This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
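
    For reference, a minimal query against the wait-statistics DMV the article refers to; ASYNC_NETWORK_IO ranking high here is the classic sign of a client consuming results too slowly:

        -- top waits accumulated since the last service restart
        select top 10 wait_type, waiting_tasks_count, wait_time_ms
          from sys.dm_os_wait_stats
         order by wait_time_ms desc;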
