Search Results

Search found 40567 results on 1623 pages for 'database performance'.


  • Monitoring the server load in Windows NT and triggering a scheduled task

    - by Gnanesh
    Hi, I am having the following problem. I am running a Windows NT server and need to monitor server utilization continuously (as an automated process) so I know when the server load is high; if it is high, I need to trigger a scheduled task. Can we write a VBScript to do this? Can someone please help me? Kindly let me know if you need more info. Thanks
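
    A minimal sketch of the polling idea, assuming WMI is available on the box. It is written in Python (with the third-party wmi package) purely for illustration; the same loop translates directly to VBScript against Win32_Processor. The threshold, interval, and task name "HighLoadTask" are hypothetical, and on NT 4 the at command or a resource-kit tool would stand in for schtasks:

        # Sketch: poll CPU load via WMI; fire an existing scheduled task when high.
        # Assumes the third-party "wmi" package on a Windows host; the threshold,
        # poll interval, and task name "HighLoadTask" are hypothetical.
        import subprocess
        import time

        import wmi

        THRESHOLD = 80      # percent CPU load considered "high"
        INTERVAL_SECS = 30  # how often to sample

        c = wmi.WMI()
        while True:
            loads = [cpu.LoadPercentage or 0 for cpu in c.Win32_Processor()]
            if sum(loads) / len(loads) >= THRESHOLD:
                # schtasks /Run starts a task that was already defined.
                subprocess.run(["schtasks", "/Run", "/TN", "HighLoadTask"], check=False)
            time.sleep(INTERVAL_SECS)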

    Read the article

  • Configuring SQL Server Audit Logging with PowerShell

    - by Jonathan Kehayias
    One of the standard configuration options that I set on all SQL Server installs is to log failed login attempts to the SQL Server error log. I recently inherited an environment where this option wasn’t standardized across all of the servers, and I needed to configure it for multiple servers in a scripted manner. There are a couple of ways to handle this kind of task. First, I could log on to every server in SSMS, open the Server Properties, and set the option on the Security sheet for...(read more)
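
    For context, the option the post refers to is stored as the instance's AuditLevel registry value (0 = none, 1 = successful logins, 2 = failed logins, 3 = both), and a service restart is needed before a change takes effect. A hedged sketch of scripting it across servers, here in Python with pyodbc rather than the post's PowerShell; the server names and driver string are assumptions:

        # Sketch: set "log failed logins only" (AuditLevel = 2) on several instances.
        # Assumes pyodbc and a sysadmin login; xp_instance_regwrite is the documented
        # way to write instance-scoped registry values from T-SQL. A service restart
        # is required before the new setting takes effect.
        import pyodbc

        SERVERS = ["sql01", "sql02", "sql03"]  # hypothetical names

        SQL = """
        EXEC master..xp_instance_regwrite
            N'HKEY_LOCAL_MACHINE',
            N'Software\\Microsoft\\MSSQLServer\\MSSQLServer',
            N'AuditLevel', REG_DWORD, 2;
        """

        for server in SERVERS:
            conn = pyodbc.connect(
                f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};"
                "Trusted_Connection=yes", autocommit=True)
            conn.execute(SQL)
            conn.close()
            print(f"{server}: AuditLevel set to 2 (failed logins only)")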

    Read the article

  • Throughput = BS * IOPS?

    - by Marvin
    I've seen in many places that throughput = BS * IOPS should hold. For example, writing at a 128k block size to a SAS disk that can support 190 IOPS should give a throughput of ~23 MB/s: 23.75 MB/s = 128 (KB) * 190 (15k SAS IOPS) / 1024. Now when I tested it in a VM against a monster NetApp filer I got these results:

        # dd if=/dev/zero of=/tmp/dd.out bs=4k count=2097152
        8589934592 bytes (8.6 GB) copied, 61.5996 seconds, 139 MB/s

    To view the I/O rate of the VM I used iostat and esxtop, and they both showed around 250 IOPS. So to my understanding the throughput should have been ~1000 KB/s: 1000 (KB/s) = 4 (KB) * 250 (IOPS). The 8 GB dd is twice the size of RAM, of course, so there is no page caching here. What am I missing? Thanks!
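
    Running the identity backwards with the measured numbers shows where the assumption breaks: at 139 MB/s and 250 device-level IOPS, each request reaching the array must average roughly 570 KB, far larger than dd's 4 KB writes, which is consistent with the guest I/O scheduler and hypervisor merging sequential writes into larger requests. A quick sketch of the arithmetic (values taken from the question):

        # Sketch: check throughput = block_size * IOPS against the measured numbers.
        throughput_mb_s = 139   # measured by dd
        device_iops = 250       # reported by iostat/esxtop
        app_block_kb = 4        # dd's bs=4k

        avg_io_kb = throughput_mb_s * 1024 / device_iops
        print(f"average device I/O size: {avg_io_kb:.0f} KB")                # ~569 KB
        print(f"writes merged per request: {avg_io_kb / app_block_kb:.0f}")  # ~142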

    Read the article

  • Data aggregation of CSV files in Java

    - by royB
    I have k CSV files (5 CSV files, for example); each file has m fields which form a key, and n values. I need to produce a single CSV file with the aggregated data. I'm looking for the most efficient solution to this problem, speed mainly; I don't think, by the way, that we will have memory issues. I would also like to know whether hashing is really a good solution, because we would have to use a 64-bit hash to reduce the chance of a collision to less than 1% (we have around 30,000,000 rows per aggregation). For example:

        file 1:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,50,60,70,80
        a3,b2,c4,60,60,80,90

        file 2:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,30,50,90,40
        a3,b2,c4,30,70,50,90

        result:
        f1,f2,f3,v1,v2,v3,v4
        a1,b1,c1,80,110,160,120
        a3,b2,c4,90,130,130,180

    The algorithms we have thought of so far: hashing (using a ConcurrentHashMap); merge-sorting the files; a DB (MySQL, Hadoop, or Redis). The solution needs to be able to handle a huge amount of data (each file more than two million rows). A better example:

        file 1:
        country,city,peopleNum
        england,london,1000000
        england,coventry,500000

        file 2:
        country,city,peopleNum
        england,london,500000
        england,coventry,500000
        england,manchester,500000

        merged file:
        country,city,peopleNum
        england,london,1500000
        england,coventry,1000000
        england,manchester,500000

    The key is country,city. This is just an example; my real key is of size 6 and there are 8 data columns, for a total of 14 columns. We would like the solution to be the fastest in terms of data processing.
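
    One point worth noting: if the map is keyed on the actual key tuple rather than on a 64-bit digest of it, a hash collision costs only an extra comparison and can never merge two different keys, so the 1% collision budget disappears. A minimal sketch of that approach, in Python rather than Java purely for illustration (file names and the two-column key follow the second example above):

        # Sketch: aggregate CSV files by key columns, summing the value columns.
        # Keyed on the real key tuple, so hash collisions cannot corrupt results.
        import csv

        KEY_COLS = 2  # country,city in the example; 6 in the real data
        FILES = ["file1.csv", "file2.csv"]  # hypothetical names

        totals = {}
        header = None
        for path in FILES:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                for row in reader:
                    key = tuple(row[:KEY_COLS])
                    vals = [int(x) for x in row[KEY_COLS:]]
                    cur = totals.get(key)
                    totals[key] = vals if cur is None else [a + b for a, b in zip(cur, vals)]

        with open("merged.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            for key, vals in totals.items():
                writer.writerow(list(key) + vals)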

    Read the article

  • Changing Recovery Model in Replicated Database

    - by Rob
    I am now the proud owner of two servers that replicate with each other. I had nothing to do with the install but, of course, now I have to support the databases. Both databases are in the Simple recovery model, but the users want to ensure as little data loss as possible, so I'm thinking that I should change the recovery model over to Full and start doing transaction log backups. I wasn't planning on backing up the subscribing database, only the publisher. Is this the right plan? Do I need to switch both the subscriber and the publisher to Full, or can I leave the subscriber in Simple and have only the publisher in Full? When I change the recovery model in one (or both), do the databases need to be offline? Thanks
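
    On the last point: switching the recovery model is an online ALTER DATABASE operation, so neither database needs to go offline, and the log backup chain only begins once a full backup is taken after the switch. A minimal sketch, here via Python and pyodbc; the database name, server name, and backup path are hypothetical:

        # Sketch: switch the publisher to FULL recovery and seed the log chain.
        # ALTER DATABASE ... SET RECOVERY runs online; the full backup afterwards
        # is what makes subsequent transaction log backups possible.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=publisher;"
            "Trusted_Connection=yes", autocommit=True)
        conn.execute("ALTER DATABASE [PubDB] SET RECOVERY FULL;")
        conn.execute("BACKUP DATABASE [PubDB] TO DISK = N'D:\\backup\\PubDB_full.bak';")
        conn.close()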

    Read the article

  • Pros/cons of turning off a cable modem

    - by Jay
    A little off the wall, perhaps, but... I have a cable modem and a router for a wireless home network. Is it a good or a bad idea to turn them off at night and during the day when we're all at work or school? Or should I leave them on 24/7? I was thinking that leaving them on constantly makes me more vulnerable to hackers, not to mention wasting electricity. (Though I'd guess the amount of electricity used by a cable modem and a router is pretty trivial. Still, every little bit helps.) When I have turned them off and on again, it takes several minutes to go through the little dialog with the cable company and get me connected to the Internet again, which is annoying but not a big deal. Anyone know any good reasons one way or the other?

    Read the article

  • DB API for shell scripting (any shell)

    - by foampile
    I am faced with some legacy shell scripts that run batch data-processing jobs in Oracle using SQL*Plus. For the most part the data tier does not have to communicate retrieved data back to the script for shell-level processing, but in a few cases it does. The problem is that SQL*Plus is really meant to be an end-user app, not an API that can communicate with other clients programmatically. That is why people have invented APIs such as DBI/DBD for Perl, JDBC for Java, ODBC, etc. The way it is done now is to invoke SQL*Plus and then parse the output, which is clearly designed for human-eye consumption, using tools like sed and awk. The whole thing is at best a hack and very prone to bugs. Since this client is rather conservative with their technology, they don't want to scale their scripts up to Perl or Python, where there are data access APIs. So I am wondering whether there are similar APIs for a shell, e.g. ksh or bash. What I would like is an API that returns data in a two-dimensional array of strings (for lack of type support) so that I can just read DB data like that. The way they do it now is akin to parsing regular web-page HTML to get a single stock quote, rather than cleanly calling a web service and being done with it. Anybody know of a product I can use? Thanks

    Read the article

  • Debian tuning to increase read/write buffering

    - by Claudiu
    Is there a way to modify Debian settings so that more memory is used for disk read/write caching? I am already using RAID 0, but that's not enough for multiple users, and the disk is almost constantly struggling. Torrents hit the disk very hard, and rTorrent doesn't have cache settings.
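
    Linux already uses otherwise-free RAM for the read cache automatically; the write-back side is what the vm.dirty_* sysctls control. A hedged sketch of values to experiment with (the percentages are assumptions, not recommendations, and larger dirty ratios mean more data lost on a crash):

        # /etc/sysctl.conf sketch; apply with `sysctl -p`
        vm.dirty_background_ratio = 10   # % of RAM dirty before background writeback starts
        vm.dirty_ratio = 40              # % of RAM dirty before writers are throttled
        vm.vfs_cache_pressure = 50       # keep inode/dentry caches around longer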

    Read the article

  • Do you have Standard Operating Procedures in place for SQL?

    - by Jonathan Kehayias
    For the last two weeks I have been on Active Duty with the Army, completing the last phase of BNCOC (Basic Non-Commissioned Officers Course) for my MOS (Military Occupational Specialty). While attending this course, a number of things stood out to me that have practical application in the civilian sector as well as in the military. One of these is the necessity and purpose behind Standard Operating Procedures, or as we refer to them, SOPs. In the Army we have official doctrines, often in...(read more)

    Read the article

  • MySQL vs. SQL Server on GoDaddy: what is the difference between a hosted DB and an App_Data DB?

    - by Nate Gates
    I'm using GoDaddy for site hosting, and I'm currently using MySQL because there are fewer limits on size, etc. My question is: what is the difference between using a hosted GoDaddy DB such as MySQL and creating a SQL Server database in the App_Data folder? My guess is security. Would it be a bad idea to use a SQL Server DB that's located in the App_Data folder? Additionally: I am able to create a .mdf (SQL Server DB file) in the App_Data folder, but I'm really unsure whether I should use it or not. If I did, it would simplify using some of the Microsoft tools. Like I said, my guess is that it would be less secure, but I don't really know. I know I have a 10 GB file-system limit, so I'm assuming my DB would have to share that space.

    Read the article

  • Programs minimized for a long time take a long time to "wake up"

    - by bart
    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so some applications stay minimized to the taskbar for hours or days. The problem is, when I try to restore them from the taskbar, it sometimes takes longer than starting them fresh! Photoshop in particular feels really weird for many seconds after finally showing up: it's slow, unresponsive, and sometimes even freezes completely for a minute or two. It's not a hardware problem, as it has always been like that on all my PCs. Would I still notice it after upgrading my HDD to an SSD and adding RAM (my main PC holds 4 GB currently)? Could people with powerful PCs/Macs tell me whether it also happens to you? I guess OSes somehow "focus" on active software and move resources away from programs that are running but not in use. Is it possible to set RAM/CPU/HDD priorities or something for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    Read the article

  • Mac has become insanely slow: processes SystemUIServer, UserEventAgent and loginwindow using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem, but recently, all of a sudden, the Mac became insanely slow. I opened Activity Monitor to see what was happening. For three processes (SystemUIServer, UserEventAgent and loginwindow) the memory gradually increases, reaching up to 2 GB per process. This completely hangs my Mac. I tried the following:

    1. Restarting the Mac
    2. Restarting the Mac in safe mode
    3. Manually killing the processes
    4. Removing Date and Time from the menu bar (according to many users, this was supposed to fix SystemUIServer's memory use)
    5. Removing the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory)

    No luck with any of those. The moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.

    Read the article

  • PHP5 without PHP4 compatibility

    - by Serty Oan
    This is the opposite question of this one. My premise is that when you write only PHP5 code there is no need for PHP4 compatibility: it is not helpful at all, and because of the differences between the object models of the two versions, it must penalize interpreter performance (or am I wrong?). So my questions are: is it possible to recompile PHP5 without backward compatibility with PHP4, or to disable it in any other way, to gain performance? If it is not possible, is there a project somewhere that aims at that goal?

    Read the article

  • What is the best way to design a table with an arbitrary id?

    - by P.Brian.Mackey
    I need to create a table with a unique ID as the PK. The ID is a surrogate key. Originally I had a natural key, but requirement changes have undermined this idea. Then I considered adding an auto-incrementing identity, but this presents problems: A. I can't specify my own IDs. B. The IDs are difficult to reset. Both of these together make it difficult to overwrite this table with new data or move the table across domains, e.g. Dev to QA. I need to refer to these IDs from the front end (JavaScript), so they must not change. So the only way I am aware of to meet all these challenges is to make the ID a GUID. This way I can overwrite the IDs when I need to, or generate a new one without concern for order (an int-based ID would require knowing the last inserted ID). Is a GUID the best way to accomplish my goals? Considering that a GUID is much wider than an int and joining on it is more expensive, is there a better way?
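
    A minimal sketch of the writer-generated-GUID idea, using Python's uuid module purely for illustration (the row shape is hypothetical). Because the writer, not the database, assigns the ID, a row keeps the same key when copied between Dev and QA:

        # Sketch: generate surrogate GUID keys on the client so rows keep stable
        # IDs across environments. uuid4() is random; if index fragmentation is a
        # concern, sequential GUIDs (e.g. SQL Server's NEWSEQUENTIALID()) help.
        import uuid

        def new_row(name: str) -> dict:
            # Hypothetical row shape; the caller controls the ID.
            return {"id": str(uuid.uuid4()), "name": name}

        row = new_row("example")
        print(row["id"])  # e.g. '0f8c2de4-...'

    Note that a GUID need not be stored as a string: kept in a native 16-byte type (such as SQL Server's uniqueidentifier), joins are cheaper than on its 36-character textual form.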

    Read the article

  • Review the New Migration Guide to SQL Server 2012 AlwaysOn

    - by KKline
    I had the pleasure of meeting Mr. Cephas Lin, of Microsoft, last year at the SQL Saturday in Indianapolis and then later at the PASS Summit in the fall. Cephas has been writing content for SQL Server 2012 AlwaysOn and has recently published his first whitepaper, a migration guide to SQL Server AlwaysOn. Read it and then pass along any feedback: HERE Enjoy, -Kev - Follow me on Twitter !...(read more)

    Read the article

  • Use of tcp_delack_min on Red Hat Linux (kernel 2.6.18)

    - by user41466
    Hello, we're moving from Solaris to Red Hat Linux and are trying to duplicate our low-latency setup, which on Solaris includes the ndd settings related to TCP_NODELAY and the Nagle algorithm. I got the impression that those parameters are not all configurable system-wide, but I still found some info. We have configured our applications to run with the Nagle algorithm disabled, but that is not sufficient. We found an interesting RH article presenting the tcp_delack_min parameter; however, when browsing /proc/sys/net/ipv4/ I can't find it there. Would it be safe to assume that simply "adding" the parameter as the doc says would be enough, or rather that the option is not supported by this kernel version (which would be strange, as RH specifies that it "can be performed on a standard Red Hat Enterprise Linux installation")? Any other ideas / recommendations to improve latency further? Thanks
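
    One detail worth checking first: entries under /proc/sys are created by the kernel itself, so if tcp_delack_min is absent, the running kernel does not carry the patch that implements it (it shipped only in certain Red Hat kernels), and writing the file cannot add the tunable. A minimal probe, assuming nothing beyond the Python standard library:

        # Sketch: probe whether the running kernel exposes a given sysctl.
        # /proc/sys entries are created by the kernel; a missing file means the
        # tunable is not supported and cannot be "added" from userspace.
        import os

        PARAM = "/proc/sys/net/ipv4/tcp_delack_min"

        if os.path.exists(PARAM):
            with open(PARAM) as f:
                print(f"{PARAM} = {f.read().strip()}")
        else:
            print(f"{PARAM} missing: this kernel does not support the tunable")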

    Read the article

  • High disk I/O - jbd2/sda2-8 process

    - by Evan Hamlet
    I run a file server on CentOS 5.8 (final). My only concern at the moment is intermittent but recurring high disk I/O activity from the jbd2/sda2-8 process, which causes a general slowdown. jbd2/sda2-8 makes use of /dev/sda2, the second partition of the first hard drive (i.e. the root partition). More info: using iotop, the culprit appears to be jbd2/sda1-8 making writes every second; this appears to be a kernel thread associated with journaling on the ext4 filesystem, if my googling is correct. I see jbd2/sda2-8 appearing here every now and then, but certainly not every 3 seconds: when idle, it appears about 1 or 2 times per minute, and more frequently when I'm using the system.

    ATOP results: http://grabilla.com/02b14-8022db2e-4eb9-4f10-8e10-d65c49ad7530.png
    IOTOP results: http://grabilla.com/02b14-cf74b25d-4063-4447-9210-7d1b9b70e25b.png
    HTOP results: http://grabilla.com/02b14-ad8cad0e-89b0-46d3-849d-4fd515c1e690.png

    jbd2/sda2-8 is the process I see with iotop making writes to disk even though the system is not in use at all. Does anyone have any idea how I could solve the high disk usage caused by the jbd2/sda2-8 process?
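
    If the writes really are just periodic journal commits, one hedged mitigation is to lengthen ext4's journal commit interval (the default is 5 seconds) so jbd2 wakes less often, at the cost of losing up to that many seconds of work in a crash. A sketch using the standard ext3/ext4 mount options, with the partition taken from the question:

        # /etc/fstab sketch: raise the journal commit interval on the root fs
        /dev/sda2  /  ext4  defaults,noatime,commit=60  1 1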

    Read the article

  • HP ProBook 4530s: great specs, but lagging. Hard drive?

    - by Mark
    I have this laptop, which has an i3 processor, 4 GB of memory, and a 7200 rpm hard drive, so there is nothing wrong with the specs. Yet even with no applications open, simply closing and opening windows lags, as does opening the Start menu or dragging icons across the desktop; sometimes even the cursor lags. So I checked the Resource Monitor, and the processes generating disk activity are:

    svchost
    Avast (my antivirus, but not much)
    System (PID 4) (this is using a huge chunk)

    The total disk activity fluctuates between 50% and 100%.

    Read the article

  • Can anyone explain these differences between two similar i7 processors? [closed]

    - by Brian Frost
    I have two systems I've just built. They both have i7 processors and Asus P8Z77 motherboards. When I run a simple processor loop benchmark that I wrote in Delphi some time back, one machine shows as nearly twice as fast as the other. I then used CPU-Z to dump the details of the hardware. The fast machine shows:

        Processor 1 (ID = 0)
        Number of cores: 4 (max 8)
        Number of threads: 8 (max 16)
        Name: Intel Core i7 2700K
        Codename: Sandy Bridge
        Specification: Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz
        Package (platform ID): Socket 1155 LGA (0x1)
        CPUID: 6.A.7 (Extended CPUID: 6.2A)
        Core Stepping: D2
        Technology: 32 nm
        TDP Limit: 95 Watts
        Core Speed: 3610.7 MHz
        Multiplier x FSB: 36.0 x 100.3 MHz
        Stock frequency: 3500 MHz

    The slow machine shows:

        Processor 1 (ID = 0)
        Number of cores: 4 (max 8)
        Number of threads: 8 (max 16)
        Name: Intel Core i7 2600K
        Codename: Sandy Bridge
        Specification: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
        Package (platform ID): Socket 1155 LGA (0x1)
        CPUID: 6.A.7 (Extended CPUID: 6.2A)
        Core Stepping: D2
        Technology: 32 nm
        TDP Limit: 95 Watts
        Core Speed: 1648.2 MHz
        Multiplier x FSB: 16.0 x 103.0 MHz
        Stock frequency: 3400 MHz

    That is, the slow machine has a 2600K where the fast machine has a 2700K. The very different "Multiplier x FSB" must be significant, but I don't understand how two processors with such similar model numbers can be so different. To get the machines the same, must I swap the processors, or is there some clever setting that I can change? Thanks for any help. Brian.

    Read the article

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory Access

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of to be concerned about (any others?) are:

    1. I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size.
    2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
    3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.

    I have been looking for information on how to ensure these hold on CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed (a starting point for the first item is sketched below). Any and all input appreciated.
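
    For the first item, recent Linux kernels expose a block device's native I/O geometry through sysfs; a minimal sketch to read it (the device name sda is an assumption, and older 2.6-era kernels may not expose every file):

        # Sketch: read a block device's native I/O sizes from sysfs.
        QUEUE = "/sys/block/sda/queue"  # "sda" is an assumption

        for name in ("logical_block_size", "physical_block_size",
                     "minimum_io_size", "optimal_io_size"):
            try:
                with open(f"{QUEUE}/{name}") as f:
                    print(f"{name}: {f.read().strip()} bytes")
            except FileNotFoundError:
                print(f"{name}: not exposed by this kernel")

    The third item maps to the drive's volatile write cache and filesystem barriers; hdparm -W /dev/sda reports (and can disable) the write cache, and barrier-aware mount options are the usual way to keep the "data is persistent" promise honest.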

    Read the article

  • Very low throughput on 10GbE network

    - by aix
    I have two Linux machines, each equipped with a Solarflare SFN5122F 10GbE NIC. The two NICs are connected together with an SFP+ Direct Attach cable. I am using netperf to measure TCP throughput between the two machines. On one box, I run:

        netserver

    and on the other:

        netperf -t TCP_STREAM -H 192.168.x.x -- -m 32768

    I get:

        MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.x.x (192.168.x.x) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec

         87380  16384  32768    10.02      1321.34

    The measured throughput is 1.3 Gb/s. This is 7.5x below the theoretical maximum, and only 30% faster than 1GbE. What steps can I take to troubleshoot this?
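
    One early suspect visible in the output is the default socket buffer sizes (an 87380-byte receive buffer caps the TCP window well below what 10GbE needs). A hedged starting point is to raise the kernel's autotuning limits; the values below are illustrative, not validated for this NIC:

        # /etc/sysctl.conf sketch: allow TCP windows large enough for 10GbE
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216

    netperf can also request explicit buffer sizes per test via the test-specific -s/-S options (e.g. -- -m 32768 -s 1048576 -S 1048576), which quickly shows whether the buffers were the bottleneck.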

    Read the article

  • Perf tuning for ESX VMFS3 on RAID

    - by maruti
    Looking for recommendations on ESX4 with VMFS version 3:

    RAID-5: should the stripe size match the VMFS block size (64K, 128K, etc.)?
    RAID controller options: "adaptive read ahead, write-back" on a PERC 6i?

    90% of the VMs on the server are Windows (2008, 2003, Vista, etc., with SQL 2005, etc.). I have read that smaller stripes are good for writes and larger ones for reads. Since this is a virtual environment, I am not sure what's good.

    Read the article

  • How to measure Citrix Receiver video buffer framerate?

    - by Rex Hardin
    How can I measure the Citrix Receiver framerate at the client end? I've tried FRAPS, but that doesn't seem to work. Is there a way to determine how many "frames" are sent per second from the client side? Assume a Windows 7 client connecting to a Windows 7 VM using the Citrix web plugin. To restate the question: how could I potentially measure the Citrix client's frame display rate for some 200px^2 area? Assume that the remote VM will try to play back a frame sequence at 24 fps. Will any frames drop?

    Read the article
