Search Results

Search found 44026 results on 1762 pages for 'raid question'.


  • Ways to poll server status

    - by Yijinsei
    Hi guys, I created the same question on Stack Overflow, but I was recommended to post it here, so I apologize to those who have seen this post twice. I am trying to create a JSP page that will show the status of a group of local servers. Currently I have a scheduled class that constantly polls to check the status of each server at a 30-second interval, with a 5-second timeout while waiting for each server's reply, and provides the JSP page with the information. However, I find this approach inaccurate, since it can take some time before the scheduled class's information is updated. Do you have a better way to check the status of several servers within a local network?
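
    One crude but robust alternative (a sketch, not from the original post) is to let an OS-level shell job do the polling and write its results to a file the JSP merely reads, so page loads never wait on a probe. Host names and the port below are placeholders:

        # poll each server with a 5-second timeout, every 30 seconds
        while true; do
          for host in srv1 srv2 srv3; do
            if nc -z -w 5 "$host" 8080; then
              echo "$host up"
            else
              echo "$host down"
            fi
          done > /var/www/status.txt.tmp
          mv /var/www/status.txt.tmp /var/www/status.txt   # atomic swap for readers
          sleep 30
        done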

    Read the article

  • If-Modified-Since vs If-None-Match

    - by Roger
    This question is based on this article.

    Response header:

        HTTP/1.1 200 OK
        Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
        ETag: "10c24bc-4ab-457e1c1f"
        Content-Length: 12195

    Request header:

        GET /i/yahoo.gif HTTP/1.1
        Host: us.yimg.com
        If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
        If-None-Match: "10c24bc-4ab-457e1c1f"

        HTTP/1.1 304 Not Modified

    In this case the browser is sending both If-None-Match and If-Modified-Since. My question is: on the server side, do I need to match BOTH the ETag and If-Modified-Since before I send a 304? Or should I just look at the ETag and send a 304 if it matches? In that case I am ignoring If-Modified-Since.
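
    Both validators can be replayed from the command line with curl to see how a given server actually behaves; the header values below are the ones from the question:

        # send both conditionals; a 304 with no body means the cached copy is still valid
        curl -I http://us.yimg.com/i/yahoo.gif \
          -H 'If-None-Match: "10c24bc-4ab-457e1c1f"' \
          -H 'If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT'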

    Read the article

  • Bash Shell Hangs on ?+Tab-complete

    - by michaelmichael
    I often use tab completion in Bash when completing directories, but I find that it hangs for an unacceptable amount of time if I accidentally include a question mark in the path. I'd like to know why, and how to prevent it if possible. Here's the scenario. I start a command and use the ~ key to represent home:

        ls ~?Desktop/co

    Oops! I held down the Shift for a split-second too long. I had intended for ? to be /. But (oh no!) muscle memory has already kicked in: I've hit Tab before noticing the mistake. Now I'm stuck waiting for the shell to beep angrily at me, usually for a minute or two. What happened? Why did the question mark cause it to hang and eventually beep? Any way to stop it from hanging?

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old, 7200 RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones.

    We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory because it contained too many files. Specifically, it started throwing this error in /var/log/messages:

        ext3_dx_add_entry: Directory index full!

    The disk in question has plenty of inodes remaining:

        Filesystem            Inodes    IUsed      IFree IUse% Mounted on
        /dev/sda3           60719104  3465660   57253444    6% /

    So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at: I want to tune it down; this enormous directory caused all sorts of issues.

    Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here:

    1. rm -rf (dir) -- I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact.
    2. unlink(2) on the directory -- definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete them via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop the entries to the files in /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh.
    3. while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done -- this is actually the shortened version; the real one I'm running, which just adds some progress reporting and a clean stop when we run out of files to delete, is:

        export i=0;
        time ( while [ true ]; do
          ls -Uf | head -n 3 | grep -qF '.png' || break;
          ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null;
          export i=$(($i+10000));
          echo "$i...";
        done )

    This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so.

    Now, for the questions:

    1. As mentioned above, is the per-directory entry limit tunable?
    2. Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file (the first one in the list returned by ls -U), and perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it's since taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed?
    3. Is there a better way to do this sort of thing? Not "store millions of files in a directory"; I know that's silly, and it wouldn't have happened on my watch. Googling the problem and looking through SF and SO offers a lot of variations on find that obviously have the wrong idea; it's not going to be faster than my approach, for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking.

    Thanks for reading the small novel; feel free to ask questions, and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that.

    Final script output:

        2970000...
        2980000...
        2990000...
        3000000...
        3010000...

        real    253m59.331s
        user    0m6.061s
        sys     5m4.019s

    So, three million files deleted in a bit over four hours.
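
    One commonly cited alternative (not tried by the original poster) is rsync with --delete against an empty directory, which unlinks in readdir order with very little per-file overhead; paths below are illustrative:

        mkdir -p /tmp/empty
        rsync -a --delete /tmp/empty/ /path/to/huge-directory/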

    Read the article

  • How can I tell if I have PCI Express 2.0 or 2.1?

    - by Stefan Lasiewski
    I am looking at a variety of PCI Express cards, such as a SATA RAID controller and a video card. Some of these cards say they only support PCI Express 2.1, not PCI Express 2.0. I know that my motherboard supports PCI Express 2-something, but the manual doesn't say '2.0' or '2.1'. How can I tell if the PCIe slot on my motherboard is PCI Express 2.0 or PCI Express 2.1? Is it possible to determine the PCIe type from the Windows or Linux command line? I was under the impression that most PCI Express 2.1 devices are backwards compatible with PCI Express 2.0. Is it possible that the vendor is wrong in saying that PCI Express 2.1 is required?
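
    On Linux, one starting point is lspci, which reports each link's capabilities; a LnkCap speed of 5GT/s indicates a generation-2 link, although it will not distinguish revision 2.0 from 2.1:

        # list bridges/slots together with their link-capability lines
        sudo lspci -vv | grep -E 'PCI bridge|LnkCap'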

    Read the article

  • Trying to determine the correct number of XFS allocation groups for a PostgreSQL server on Linux

    - by HBlend
    I am running a PostgreSQL 8.4.5 server on the Linux 2.6.33.7 kernel, on an 8-disk RAID array with an LSI controller. Most of the tables are around 1GB or less. I know that XFS uses allocation groups (AGs) to achieve I/O parallelism. My first question is: does this mean that if two tables are in the same AG, all I/O requests are queued for both of them if either is being read from or written to? If so, I assume I would want to spread my tables across as many allocation groups as possible, correct? Wouldn't this ensure that multiple users querying different tables would get the best performance?
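
    For reference, the allocation group count is fixed when the filesystem is created; something like the following shows and sets it (device name is illustrative):

        # report the current agcount, among other geometry
        xfs_info /dev/sdb1
        # choose the count explicitly at mkfs time (destroys existing data)
        mkfs.xfs -f -d agcount=32 /dev/sdb1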

    Read the article

  • Microsoft SQL Server 2005 Express Edition SP4 wasn't installed

    - by user754334
    I lost track of my account when my question was moved to Super User. I wasn't able to install Microsoft SQL Server 2005 Express Edition SP4 through automatic update, so I downloaded the update from here and tried a manual install, which also failed, with this error: "The components that you are trying to install are already installed." I checked the product version of SQL Server 2005, which returned 9.00.5000.00, via SELECT @@VERSION. Now the question is: if the automatic update wasn't able to apply the SP4 patch, then how come the version was updated to 9.00.5000.00? There is no way to roll back or reinstall the patch, as that would require reinstalling the entire SQL Server instance that came with Visual Studio 2005. Is there any other way I can verify that the SP4 patch was properly applied? Edit: I used the MBSA tool to analyze the required updates, and it confirms that Microsoft SQL Server 2005 Express Edition SP4 is missing.
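
    One more check worth trying (the instance name below is the stock Express one and may differ): SERVERPROPERTY reports the service pack level directly, so a correctly applied patch should show 'SP4':

        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')"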

    Read the article

  • Windows server's HDD spin down daily/nightly - Does it make sense?

    - by Riccardo
    A Windows Server 2003 R2 machine has the following hard disk configuration:

    - 3 internal hard disks attached to a 3Ware unit, configured in RAID 1 plus a spare unit
    - 3 external USB backup disks: 2 Verbatim 1TB (Samsung HD103SI) + 1 Western Digital 1TB (WD10EADS)

    The server runs 365 days per year, 24 hours a day; however, at daytime the server/user usage is limited to the internal hard disks, and at nighttime there's no user usage apart from scheduled maintenance tasks and nightly backups (a few hours), so the server will be essentially idle from 7PM to 8AM. I was wondering whether:

    (a) it makes any sense to let Windows manage power savings, allowing the disks to spin down accordingly, OR whether the disks should stay always on, to avoid premature wear due to continuous spin up/down;

    (b) to leave the internal disks always on and force the external disks to power down while idle (this requires third-party tools, such as Verbatim's Green Button utility).

    Your thoughts?
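
    If option (a) is chosen, the 2003-era powercfg can set the AC disk idle timeout per power scheme from the command line; the scheme name below is the stock one and the timeout is in minutes (0 means never spin down):

        powercfg /change "Always On" /disk-timeout-ac 20
        powercfg /query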

    Read the article

  • how to make a man page not disappear on exit

    - by Alan
    ...probably a silly question, but I could not beat Google into telling me the answer, so I'm posting here: I've got two machines, Slackware 13 and Fedora 11. On the Slackware machine, when I use man I can scroll all the way to the bottom, then exit man, and the info stays in my terminal window (which I find very convenient, as I can read it while typing the command in question, copy-paste the options, etc.). On Fedora, when I close man, the man page info is gone. How can I configure man (or is it the terminal?) not to remove the man page info on exit?
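
    The usual culprit is the pager, not man itself: less clears the terminal's alternate screen on exit unless told otherwise, and -X suppresses that. A sketch of both the per-session and per-invocation forms:

        # keep pager output on screen after quitting, for this shell session
        export LESS="-X"
        # or just for one man invocation, without touching the environment
        man -P 'less -X' ls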

    Read the article

  • Win 2008 single server development environment (architecture)

    - by Tommy Jakobsen
    I have a few questions about a test/development environment that I'm setting up on this server:

    - Intel Core i7-920 quad core, incl. Hyper-Threading
    - 8 GB DDR3 RAM
    - 2x 750 GB SATA-II (probably software RAID 1)

    The server is going to support at most 5 users, maybe 10 when stressed. I was hoping that I could run all of the following products on the same server:

    - Windows Server 2008 R2 x64 w/ IIS
    - SQL Server 2008 x64 (R2 when released)
    - Team Foundation Server 2010
    - SharePoint Foundation 2010

    I know this sounds like overkill, but remember that this is for development and testing purposes, not a production environment. My question is whether this will be possible at all. Should I run it all on one Windows 2008 installation, or should I run it in multiple virtual environments using Hyper-V? What do you think?

    Read the article

  • optimizing file share performance on Win2k8?

    - by Kirk Marple
    We have a case where we're accessing a RAID array (drive E:) on a Windows Server 2008 SP2 x86 box. (Recently installed; nothing other than SQL Server 2005 on the server.) In one scenario, when directly accessing it (E:\folder\file.xxx) we get 45MBps throughput to a video file. If we access the same file on the same array, but through a UNC path (\\server\folder\file.xxx), we get about 23MBps throughput with the exact same test. Obviously the second test is going through more layers of the stack, but that's a major performance hit. What tuning should we be looking at to make the UNC path come closer in performance to the direct-access case? Thanks, Kirk (Corrected: it is CIFS, not SMB, but I generalized the title to 'file share'.) (Additional info: this happens during the read of a single file; it is not an issue across multiple connections. The file is on the local machine, but exposed via a file share, so client and file server are both the same Windows 2008 server.)
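
    One Server 2008-specific thing worth ruling out (a suggestion, not a confirmed fix for this case) is TCP receive-window auto-tuning, a frequent culprit for slow SMB/CIFS throughput on that release. Its state can be inspected, and if necessary disabled, from an elevated prompt:

        netsh interface tcp show global
        netsh interface tcp set global autotuninglevel=disabled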

    Read the article

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12GB of memory, most of which (at least 10GB) can be allocated solely to Postgres. The system also has a 6-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. Firstly, a single-threaded, single connection will do many inserts into 2-4 tables linked by foreign keys. Secondly, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What configuration changes to the default Postgres config would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers, etc. Thanks!
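
    As a hedged starting point only (the ratios are common 8.x-era rules of thumb, not measurements from this workload), a postgresql.conf for a dedicated 12GB box might begin around:

        shared_buffers = 2GB            # ~25% of RAM was the usual 8.x guidance
        work_mem = 256MB                # per sort/hash node; tolerable with ~5 connections
        maintenance_work_mem = 1GB
        effective_cache_size = 8GB      # what the OS page cache is assumed to hold
        checkpoint_segments = 32        # helps sustained insert bursts
        wal_buffers = 16MB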

    Read the article

  • Shrink Partition on Production Server

    - by Campo
    So, our production server was set up with only one large partition. I have set up a standby server and properly partitioned it. Now the boss wants the production environment's partition shrunk. It is an HP DL380 G5 with 4 hot-swap drives in a RAID 5. How best should I go about doing this? It seems like a bad idea to me. Should I use Windows or HP tools to do the partitioning? What should I be aware of in a production environment? The idea is to put the site (Inetpub) on a separate partition instead of the C: drive. How much downtime should I expect? Is this a terrible idea? Anything else I have missed?
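
    For what it's worth, on Server 2008 and later diskpart can at least report how far the volume could shrink before anything is committed (Server 2003 cannot shrink volumes natively and would need third-party tools):

        diskpart
        DISKPART> select volume C
        DISKPART> shrink querymax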

    Read the article

  • What is the DNS root zone and domain?

    - by Nimmy Lebby
    This might seem like a silly question, but I want to get my terminology correct. Please do not delete; I will be more than happy to delete the question myself once I (with the help of a few people, I hope) get to a consensus. This was my understanding:

        DNS root zone   = .
        DNS root domain = (nameless)

    However, after reading the Wikipedia article, I'm not so sure: "A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com." So this would lead me to believe:

        DNS root zone   = .
        DNS root domain = .
        DNS root label  = (nameless)

    Does this make sense? What is your understanding?
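
    The root zone is a real, queryable zone whose apex is the empty label, conventionally written as a single dot; dig makes that concrete:

        # ask a root server for the NS records of the root zone itself
        dig @a.root-servers.net . NS +short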

    Read the article

  • tcp flags in iptables: What's the difference between RST SYN and RST and SYN RST? When to use ALL?

    - by Kris
    I'm working on a firewall for a virtual dedicated server, and one of the things I'm looking into is port scanners. TCP flags are used for protection. I have 2 questions. The rule:

        -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

    The first argument says which flags to examine (SYN,ACK,FIN,RST); the second argument says which of those must be set (SYN); and when that's the case (there's a match), drop the TCP packet.

    First question: I understand the meaning of RST and RST/ACK, but in the rule above "RST SYN" appears. What's the difference between "RST SYN", "RST", and "SYN RST"? Is there a "SYN RST" flag in a 3-way handshake?

    Second question is about the difference between

        -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP

    and

        -p tcp --tcp-flags ALL SYN,ACK,FIN,RST SYN -j DROP

    When should ALL be used? When I use ALL, does that mean that if the TCP packet with the SYN flag doesn't have the ACK "and" the FIN "and" the RST SYN flags set, there will be no match?
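
    For reference, --tcp-flags takes a mask (which flags to examine) and a comparison list (which of the examined flags must be set, with all other examined flags clear); a couple of annotated examples under that reading:

        # of SYN, ACK, FIN and RST, exactly SYN may be set (classic new-connection check)
        iptables -A INPUT -p tcp --tcp-flags SYN,ACK,FIN,RST SYN -j DROP
        # ALL widens the mask to every flag (SYN, ACK, FIN, RST, URG, PSH), so here
        # SYN, ACK, FIN and RST must all be set while URG and PSH must be clear
        iptables -A INPUT -p tcp --tcp-flags ALL SYN,ACK,FIN,RST -j DROP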

    Read the article

  • Alternative to Windows Home Server (WHS) backups

    - by Adam Tegen
    Since Microsoft announced the end of life for WHS, are there any alternatives? Specifically, I am interested in recovering from a catastrophic disk failure with WHS. For example, this is my ideal scenario when a desktop hard drive fails (has a bad virus, etc.):

    1. Install a disk of the same size or greater
    2. Boot the desktop with the recovery disc
    3. Point the recovery application at the WHS
    4. Pick the machine, the drive(s), and the date of the backup
    5. Have a couple beers
    6. Reboot to a working machine as if nothing happened

    I would need to slap multiple disks in the machine without RAID. It sounds like LVM will work here. It would be nice, but not required, to have de-duplication of files when multiple machines are backed up (Single Instance Storage).

    Read the article

  • Verify server performance

    - by George Kesler
    I'm looking for a quick and SIMPLE way to verify that new servers are performing as expected. The most important metric is disk performance, second is network performance. I’m trying to prevent problems caused by misconfiguration of RAID arrays, NIC teaming etc. The solution should work with both physical and virtual servers. I don’t need sophisticated analysis with different workloads, just one set of benchmarks which I would run against a reference server and later compare to new ones. One problem is that most benchmarks are not giving accurate results when running on a VM.
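
    A minimal repeatable baseline (the tool choices are a suggestion; paths and the peer host are placeholders) could be one sequential and one random fio pass plus an iperf run, captured first on the reference server and then replayed on each new one:

        # sequential write throughput
        fio --name=seq --filename=/data/testfile --size=4G --rw=write --bs=1M --direct=1
        # random read IOPS
        fio --name=rand --filename=/data/testfile --size=4G --rw=randread --bs=4k \
            --iodepth=32 --ioengine=libaio --direct=1
        # network throughput against the reference box running 'iperf -s'
        iperf -c reference-server -t 30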

    Read the article

  • Make Google Chrome with a specific user profile the default browser

    - by Kaushik Gopal
    Is it possible to set Google Chrome with a custom user profile as the default browser? When I set Google Chrome as the default browser, it picks the "default" user profile instead of the custom one I have set up. I tried setting Google Chrome as the default browser after opening it from that particular user profile, but that doesn't seem to have an effect. I googled around but could only find another poor soul like myself who asked a similar question here: http://www.google.com/support/forum/p/Chrome/thread?tid=69f0a6e776ceab1c&hl=en There weren't any responses to that question. Cheers.
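
    One heavily hedged workaround that has been suggested elsewhere: edit the ChromeHTML handler so the default-browser invocation itself carries a profile flag (back up the registry first; the chrome.exe and profile paths below are illustrative):

        reg add "HKCR\ChromeHTML\shell\open\command" /ve /d "\"C:\Users\me\AppData\Local\Google\Chrome\Application\chrome.exe\" --user-data-dir=\"C:\ChromeProfiles\Custom\" -- \"%1\""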

    Read the article

  • How to connect a VirtualBox OS and the local machine

    - by Nrew
    This question is connected to this question asked by a user before: http://superuser.com/questions/73470/virtualbox-vdi-file-to-vmware, about converting a VDI to VMDK or VMX using VMware Converter. How do I connect the Windows XP machine that is in VirtualBox to the local computer (Windows 7) in a network? I ask because I got an error while following this instruction: "Give the IP address, username and password of the remote machine that you would like to convert and then hit next." VMware Converter reported "Unable to connect to the specified host" for 10.0.2.15, which is the IP address of the XP machine inside VirtualBox. It also said that there is a network configuration problem. And when I entered the IP address from whatismyip.com, which should be the same as the IP address of the local machine, I didn't get the previous error, but I got another one; it said: insufficient permissions to connect to "ip address". What solution can you suggest for this problem?
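
    The 10.0.2.15 address belongs to VirtualBox's NAT network, which the host cannot reach directly; switching the guest to a bridged adapter puts it on the same LAN as the Windows 7 host (the VM and host adapter names below are examples):

        VBoxManage modifyvm "WinXP" --nic1 bridged --bridgeadapter1 "Local Area Connection"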

    Read the article

  • How can I download django-1.2 and use it across multiple sites when the system default is 1.1?

    - by meder
    I'm on Debian Lenny, and the latest backports Django is 1.1.1 final. I don't want to use sid, so I probably have to download Django myself. I have my sites located at /www/, and I plan on using mod_wsgi with Apache2 as a reverse proxy behind nginx. Now that I have downloaded pip, and virtualenv through pip, can someone explain how I could get my /www/ sites, which are yet to be made, to all use django-1.2?

    Question 1.1: Where do you suggest I put django-1.2? I know you can store it anywhere, but where would you store it?

    Question 1.2: After installing it, how do you actually tie that django-1.2, instead of the system default django-1.1, to the reverse-proxied Apache conf?

    I would prefer it if answers were specific rather than vague, and had examples of setups.
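
    A sketch of the usual answer to both sub-questions, assuming one virtualenv per site under /www/envs (names are illustrative): pip installs 1.2 into the env, and mod_wsgi is pointed at that env rather than the system site-packages:

        virtualenv --no-site-packages /www/envs/mysite
        /www/envs/mysite/bin/pip install Django==1.2.1
        # then, in the Apache conf, something along the lines of:
        #   WSGIDaemonProcess mysite python-path=/www/envs/mysite/lib/python2.5/site-packages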

    Read the article

  • MX records and CNAMEs

    - by sly
    I realize similar questions have been asked/answered on this, but I have a subtle detail for which I cannot find answers anywhere. Let's exemplify with the following DNS entries:

        foo.example.com  A      1.1.1.1
        bar.example.com  A      1.1.1.2
        wee.example.com  CNAME  foo.example.com
        foo.example.com  MX     foo.example.com.s9a1.psmtp.com
        bar.example.com  MX     bar.example.com.s9a1.psmtp.com

    Note the last two lines. My institution has that kind of MX record, where each MX record's value has the owner label prepended. Question 1 is: what is the motivation for this (why not just s9a1.psmtp.com)? Question 2 is more subtle: my understanding is that an MX record should not contain an alias in either the label or the value, i.e. the following would be bad practice:

        wee.example.com  MX     wee.example.com.s9a1.psmtp.com

    Then, how should the RRs look for the alias wee.example.com? Thanks!
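
    To make question 2 concrete, here is a zone fragment under the standard rule that a CNAME may not coexist with any other record at the same name (RFC 1034 §3.6.2, RFC 2181 §10.3):

        foo.example.com.  IN A      1.1.1.1
        wee.example.com.  IN CNAME  foo.example.com.
        ; no MX (or anything else) may be added at wee.example.com. --
        ; mail addressed there follows the CNAME and uses foo's MX instead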

    Read the article

  • Newbie question: Virtual networks - Hyper-V - Remote Desktop - Only one physical NIC

    - by josecortesp
    Hello everyone, I'll try to explain my situation, and I'll appreciate any help. I have a physical server (quad core, 4GB RAM, 1TB RAID 10, etc.) with Windows Server 2008 R2 Enterprise, running IIS, printing, etc. Also, I want to set up two virtual servers with 2008 R2 Standard, one with SQL Server and the other with Team Foundation Server. What I need is: being able to access, from inside the private physical network, Remote Desktop on each of the virtual and physical servers; and having access from the outside, using a router and port forwarding, to the TFS server and the IIS server (one is virtualized, the other is physical). That is it, but note that I only have one physical NIC. How do I configure this to work? When I set up the Hyper-V role, the wizard showed something about this, but I don't remember what I chose, and right now I cannot access any of the servers via Remote Desktop, not even from the physical private network. Can anybody point me to what I can do? Thanks in advance (sorry for my English; I'm a Spanish speaker and my English isn't that good).

    Read the article

  • Getting BootMgr not found errors repeatedly on Win7 x64

    - by abszero
    So here is the basic configuration of the box:

    - Primary RAID 1 (mirror, bootable): 2x 300GB WD SATA drives
    - AMD Phenom quad core x64 @ 2.2GHz
    - ASUS M3N78 Pro board
    - 4GB RAM
    - Win 7 Ultimate

    Additionally, this box is a host OS for several CentOS boxes via VirtualBox. The box runs like a champ but, for whatever reason, every time I restart the machine I get a "BootMgr not found" error when the box tries to boot. I pop in my Windows DVD, select 'Repair Windows', then 'Fix Start Up Problems', and everything works fine... once. When I restart the box again, I have to go back through this process. Any ideas on what is going on?
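
    The fact that the repair only lasts one boot suggests the boot sector or BCD store is not being written in a form both mirrored drives agree on. The recovery-environment commands below make the same repair the startup wizard does, but persistently and explicitly:

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd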

    Read the article

  • Terminal Services - MS Access Frequently "Not Responding"

    - by jonfhancock
    Exposition: We use a program built in MS Access that I serve via Terminal Services. I just installed a new TS server with a quad-core 2.6GHz Xeon, 8GB RAM, and 4 SATA drives in a RAID 0. I installed Server 2008 R2 (64-bit, obviously). Its only role is TS. The problem: with just a few sessions (under 10), I start getting frequent "Not Responding" messages in each session. When it happens, the users aren't doing anything particularly taxing, just form navigation and simple insert queries. I can live with some stalls, but it is visually jarring in WS08 because the screen goes gray and it presents a dialog offering to wait or close, with some other options. Questions: Any suggestions for improving performance and reducing hangs? Is it possible to disable the dialog (always wait) and the screen graying?

    Read the article

  • Why buy hard drives with a storage server from the vendor?

    - by Mark
    Hi all, I'm just browsing around at storage servers like the Dell MD100/MD3000 and the Sun J4200, and although the storage server itself seems reasonably priced (approx. $3000-$4000 AUD), the hard drives that you buy to go along with them seem exorbitantly expensive, and I'm not sure why. Surely at most they are using good-quality, RAID-grade 7200rpm SATA HDDs, but even then they are charging almost 4 times the price. What is the advantage of buying these from the vendor? I can see that if one fails, having the vendor replace it is convenient. But at that price you could buy double the number of HDDs and just claim the warranty directly with the manufacturer. It would be much cheaper, and you wouldn't be relying on someone else to fix your problems. Is this a case of the "you don't get fired for buying IBM" mentality, or is there some reason I'm not grasping here? Cheers, Mark

    Read the article
