Search Results

Search found 60836 results on 2434 pages for 'system io directory'.

  • Cloning Windows 2003 Server to new hard drive results in failure

    - by Level1Coder
    Scenario: the old HDD is a Seagate 320 GB SATA drive; the new HDD is a WD 320 GB SATA drive. I created an exact clone and replaced the old HDD with the new one. Booting with the new HDD, it gets into the Windows 2003 Server environment, but things look weird: lots of system event failures in the Event Viewer log, the system is barely usable, and critical services are all down. Booting with the old HDD, everything is fine. QUESTION: Is it possible to do a simple clone of a Windows 2003 Server system? All I'm changing is the hard drive; everything else stays the same (old CPU, old mobo, etc.).

  • IIS - Script for repeated hacks on a website

    - by dodegaard
    I currently have a site that is armored by ELMAH as its reporting mechanism. Each time someone hits an incorrect URL it notifies me or logs it to the system. This is annoying when someone fat-fingers a misspelled URL, but great when a hacker is trying to crack one of my sites. Has anyone ever written a script for IIS 7 on Windows Server 2008 that blocks an IP based on repeated attempts to hit a website? I've looked at Snort and other IDS systems, but if I could get a script that could be linked to my ELMAH system it might be the perfect thing. PowerShell scripting, etc., is what I was thinking. Hints and recommendations are welcome, and if you think a true intrusion detection system is the way to go, give me your ideas. Thanks in advance.

  • How can I remount an NFS volume on Red Hat Linux?

    - by user76177
    I changed the user ID of a user on an NFS client that mounts a volume from another server. My goal is to give the two users the same ID so that both servers can read and write to the volume. I changed the ID successfully on the client system, but now when I look at the NFS mount from that system, it reports the files as being owned by the old ID. So it looks like I need to "refresh" that mount. I have found many instructions on how to remount, but each seems slightly different depending on the type of system. Is there a simple command I can run to get the mounted volume to refresh so that it picks up the new user settings?
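
    A minimal sketch of the usual remount options on Red Hat, assuming the export is listed in /etc/fstab and mounted at /mnt/data (both the path and the mount point are hypothetical):

        # option 1: remount the existing NFS mount in place
        mount -o remount /mnt/data

        # option 2: unmount and mount again (add -l to umount for a lazy
        # unmount if a process still has files open on the mount)
        umount /mnt/data
        mount /mnt/data    # re-reads the entry from /etc/fstab

    Note that classic NFS reports ownership by numeric UID, so if the files on the server are still owned by the old UID, remounting alone will not change what ls shows; a chown of the exported directory on the server side would be needed as well.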

  • Openvz: What exactly does it mean when tcpsndbuf failcnt increases? Why must there be a minimum difference between limit and barrier?

    - by Antonis Christofides
    When the failcnt of tcpsndbuf increases, what does this mean? Does it mean the system had to go past the barrier, or past the limit? Or, maybe, that the system failed to provide enough buffers, either because it needed to go past the limit, or because it needed to go past the barrier but couldn't because other VMs were using too many resources? I understand the difference between barrier and limit only for disk space, where you can specify a grace period during which the system can exceed the barrier but not the limit. But for resources like tcpsndbuf, which have no such thing as a grace period, what is the meaning of barrier vs. limit? Why must the difference between barrier and limit in tcpsndbuf be at least 2.5 KB times tcpnumsock? I could understand it if, e.g., tcpsndbuf itself had to be at least 2.5 KB times tcpnumsock (either the barrier or the limit), but why should I care about the difference between the barrier and the limit?
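
    For reference, the counters in question can be inspected directly; a small sketch, assuming an OpenVZ kernel (the container ID 101 is made up):

        # inside a container, or on the hardware node for all containers
        cat /proc/user_beancounters
        # columns (roughly): resource  held  maxheld  barrier  limit  failcnt
        # failcnt increments each time an allocation of that resource is refused

        # on the hardware node, watch one container's tcpsndbuf row
        vzctl exec 101 grep tcpsndbuf /proc/user_beancounters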

  • One Way Sync with Dropbox?

    - by user244805
    Is there any way I can mirror a Dropbox folder to my C drive just by running a portable file? Extra background information, because I know you guys hate it when you don't get the entire situation: I go back to university in fall and I need a new storage solution. I decided to use Dropbox to sync my tiny university files (< 5 MB). I need to access these files from 4 machines:

        1. Windows 7 home machine
        2. Windows 7 university A machine
        3. Windows 7 university B machine
        4. Android tablet

    1 and 4 are a non-issue. The problem lies with 2 and 3. I want to be able to edit my files on 2 and 3, but those machines are not mine. There is an easy fix: run a portable version of the Dropbox syncer from a USB drive. But the problem is that I don't want to carry a USB drive around with me all the time. In that case, I can just run the small portable Dropbox syncer off the internet. But where will it store the files? A temporary directory on the C drive. There is only one issue left: there are hundreds of machines that I will randomly use that fit in categories 2 and 3. My portable Dropbox syncer will notice that the temporary directory is empty on each new PC I use and, instead of downloading my Dropbox folder to the machine, it will sync the other way around, i.e. it will delete my entire Dropbox. The solution is to mirror my Dropbox onto the temporary directory before running the Dropbox syncer.

  • How to install ported Linux software on a Mac? (MacPorts, Fink, anything better?)

    - by Ben Alpert
    On my Mac OS X machine, how would you recommend I install various software that's been ported from Linux? I don't install such software very frequently, but I've been using MacPorts and it always seems quite slow, presumably because it has to compile the packages on-the-fly. I'd much prefer a package management system that has binary packages, saving me the need to compile things every time I want to download something new. I think Fink has binaries for some of the packages, but I usually see MacPorts recommended as the system to use. Which do you think works better and why? (Or is there another system that I haven't heard of?)

  • Is rsync --delete safe in case of disk failure

    - by enedene
    I have two data hard drives on my Linux server and I use the second as a backup of the first. I use rsync for that purpose. An example would be:

        rsync -r -v --delete /media/disk1/ /media/disk2/

    What this does is copy every file/directory from /media/disk1/ to /media/disk2/, but also delete any difference. For example, let's say files A and B, but not file C, are on disk1, and disk2 has no A or B but does have C. The result is that after the command, disk2 has files A and B, but file C has been deleted, just like on disk1. Now, a rather disastrous scenario has crossed my mind: what if disk1 dies? The system continues to work, since the system files are on my system disk, but when rsync tries to back up my data onto disk2 from the broken disk1, it deletes all the files from disk2 because it can't read anything on disk1. Is this a possible scenario, or is there protection against it built into rsync?
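
    One common safeguard, sketched below, is to refuse to run the destructive sync unless the source looks healthy; this is not built into rsync itself, and the paths and marker filename are assumptions:

        #!/bin/sh
        SRC=/media/disk1
        DST=/media/disk2

        # the source must actually be a mounted filesystem, not an empty mount point
        mountpoint -q "$SRC" || { echo "source not mounted, aborting" >&2; exit 1; }

        # a marker file created once by hand must still be readable
        [ -f "$SRC/.backup_source_ok" ] || { echo "marker missing, aborting" >&2; exit 1; }

        rsync -a -v --delete "$SRC/" "$DST/"

    rsync also has a --max-delete=N option, which stops it from deleting more than N files in one run and so limits the damage if a check like this is ever bypassed.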

  • What would cause a query run from SSMS on the local box to be slower than from a remote box

    - by Racter
    When I run a simple query such as "Select Column1, Column2 from Table A" from within SSMS running on my production SQL Server, the results seem to take extremely long (45 min). If I run the same query from my dev system's SSMS connecting to the production SQL Server, the results return within a few seconds (< 60 sec). One thing I have noticed is that if the system was just rebooted, performance is good for a bit. It is hard to pin down a time, as I have had it start running slow very quickly after a reboot, but at most it performed well for 20 minutes before starting to act up. Also, just restarting the SQL service does not resolve the issue or provide a temporary performance boost. Specs for the server are:

        Windows Server 2003, Enterprise Edition, SP2
        4 x Intel Xeon 3.6 GHz - 6 GB system memory
        Active/Active cluster
        SQL Server 2005 SP2 (9.0.3239)

  • Putting a whole linux server under source control (git)

    - by Tobias Hertkorn
    I am thinking about putting my whole Linux server under version control using git. The reasoning behind it is that this might be the easiest way to detect malicious modifications/rootkits. All I would naively think is necessary to check the integrity of the system: mount the Linux partition every week or so using a rescue system, check that the git repository itself is still untampered with, and then issue a git status to detect any changes made to the system. Apart from the obvious waste of disk space, are there any other negative side effects? Is it a totally crazy idea? Is it even a secure way to check against rootkits, since I most likely would have to at least exclude /dev and /proc?
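
    A minimal sketch of the idea, assuming the repository is created at the root of the partition (the exclude list is a rough guess and would need tuning):

        cd /
        git init
        # keep pseudo-filesystems and volatile paths out of the repository
        printf '%s\n' /proc/ /sys/ /dev/ /run/ /tmp/ /var/log/ /var/cache/ > .gitignore
        git add -A
        git commit -m "baseline snapshot"

        # later, from the rescue system with the partition mounted at /mnt/root:
        git --git-dir=/mnt/root/.git --work-tree=/mnt/root status

    Note that the .git directory lives on the same disk, so an attacker with root could rewrite history to match their changes; copying the repository's HEAD commit hash (or the whole .git directory) to offline storage is what actually anchors the comparison.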

  • How do I set the TEMP environment variable for the "Network Service" user?

    - by Chris Phillips
    We have a system that uses Path.GetTempFileName and Path.GetTempPath calls to work with temporary files fairly frequently. This system also runs as the "Network Service" user. We're finding that we're running out of room on the C drive (due to other issues; our temp files are cleaned up correctly) and would like to be able to move the temp directory to a different drive. The easiest solution seems to be to change the TMP or TEMP environment variables for the Network Service user, but I only seem to be able to set my own user variables or the "system" variables, which are overridden by the Network Service user profile. How do I set these variables for the Network Service user?

  • Is WinRT really as secure as it's made out to be?

    - by IDWMaster
    Prior to releasing Windows 8, Microsoft claimed that all WinRT apps are cleanly removed from your computer after uninstalling them, and that WinRT apps should not interfere with other running applications because they run in a "sandboxed" environment. Microsoft has also claimed numerous times on Channel 9 that Windows 8 apps do not run in a VM. So my question is: are these claims accurate? If the application is not running inside a VM, how is it possible to protect the system against malicious code at runtime, assuming the attacker was able to bypass the screening process of the Windows Store? Microsoft allows "native code" in WinRT apps, so wouldn't it be possible (using hand-coded assembly or some odd pointer-manipulation trick) to call functions outside of the sandboxed environment and interfere with the rest of the system, if it's really "native code" and not some VM?

  • Common folder in linux

    - by rks171
    I have two users on my Ubuntu machine. I want to share some media files between these users, so I created a directory in /home/ called 'media'. I made the group 'media' and added my user 'rks171' to it. So:

        sudo groupadd media
        sudo mkdir -p /home/media
        sudo chown -R root.media /home/media
        sudo chmod g+s /home/media

    as was described in this post. Then I added my user to the group:

        sudo usermod -a -G media rks171

    Then I also added write permission on this folder for my group:

        sudo chmod -R g+w media

    So now, doing 'ls -lh' gives:

        drwxrwsr-x 2 root media 4.0K Oct 6 09:46 media

    I tried to copy pictures to this new directory from my user directory:

        mv /home/rks171/Pictures/* /home/media/

    and I get 'permission denied'. I can't understand what's wrong. If I simply type 'id', it doesn't show that my user, rks171, is part of the 'media' group. But if I type 'id rks171', then it does show that my user, rks171, is part of the 'media' group. Anybody have any ideas why I can't get any files into this common folder?
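
    For what it's worth, the symptom that plain 'id' misses the group while 'id rks171' shows it usually means the login session predates the group change; a short sketch of picking it up without rebooting:

        # group membership is read at login, so an existing session won't see it
        id            # groups of the current session
        id rks171     # groups as currently stored in /etc/group

        # start a shell with the media group active, or just log out and back in
        newgrp media
        id            # media should now be listed
        mv /home/rks171/Pictures/* /home/media/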

  • Mounting fuse sshfs fails when invoked by Cron on FreeBSD 9.0

    - by Tal
    I have a remote server filesystem that I'm attempting to mount locally on a FreeBSD 9 machine via FUSE sshfs and cron, for a backup routine. I have SSH keys between the boxes set up to allow passwordless login as the root user on the local machine. Cron is set to run the following script (in root's crontab):

        #!/bin/sh
        echo "Mounting Share"
        /usr/local/bin/sshfs -C -o reconnect -o idmap=user -o workaround=all <remote user>@<remote domain>.com: /mnt/remote_server

    As root, I can run this script on the command line without issue, and the share mounts successfully without my being asked for a password. Yet when run by cron the script fails. The path to sshfs is identical to the output of 'which sshfs'. Here is the email root receives from the cron daemon:

        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=root>
        X-Cron-Env: <USER=root>
        Mounting Share
        fuse: failed to exec mount program: No such file or directory
        fuse: failed to mount file system: No such file or directory

    I'm stumped as to why I'm receiving "No such file or directory" in this instance. It seems especially odd given that the paths appear to be correct. I've also compared the output of env on the shell with env inserted into the script, and I don't see any environment variables that should cause this trouble. At bootup, FUSE reports its version as:

        fuse4bsd: version 0.3.9-pre1, FUSE ABI 7.8

    Help me ServerFault wizards, you're my only hope!
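
    A hedged sketch of one likely fix: cron supplies only PATH=/usr/bin:/bin (as the mail headers show), so while the script can exec /usr/local/bin/sshfs by its absolute path, sshfs in turn tries to exec the FUSE mount helper and apparently cannot find it, hence "No such file or directory". Exporting a fuller PATH in the script often resolves this kind of failure (the /usr/local/sbin location of the helper is an assumption based on a ports-based fuse4bsd install):

        #!/bin/sh
        # cron's PATH is minimal; add the directories holding sshfs and the
        # FUSE mount helper so everything sshfs execs can be found
        PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
        export PATH

        echo "Mounting Share"
        sshfs -C -o reconnect -o idmap=user -o workaround=all \
            <remote user>@<remote domain>.com: /mnt/remote_server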

  • Have a set of cgi scripts shared by multiple domains

    - by rpat
    Goal: have multiple domains share a set of CGI (Perl) scripts.
    Environment: Apache 2.0 on a dedicated CentOS server (Apache configuration files generated by cPanel).
    I have dozens of domains on the dedicated server, set up by cPanel under VirtualHost sections. I have almost no knowledge of Apache; most of what I do is taken care of by cPanel. I would like to put a set of scripts under one directory (perhaps under / or /opt) and, for each of the domains, create a symbolic link to this common directory under the individual cgi-bin. This way I am hoping to avoid having to keep a copy of the scripts for every domain. Since the Apache config files are generated by cPanel, I would not like to manually make changes to those; besides, I could mess things up. I see that cPanel recommends the use of include files rather than changing httpd.conf. Perhaps I need to have the following of symbolic links enabled in the cgi-bin directory and allow the web server user to execute scripts not owned by it. Maybe I am making things more complicated than they are. I would be glad to use any other means to achieve my goal. Thanks in advance for your help. (I asked this on Stack Overflow and someone suggested that I could ask it on Server Fault.)
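
    A sketch of the symlink layout under the stated constraints (the /opt/shared-cgi path, script name, and account locations are assumptions); whether Apache actually follows the links still depends on Options FollowSymLinks / SymLinksIfOwnerMatch and any suexec policy in the cPanel-generated config, which is where an include file would come in:

        # one canonical copy of the scripts
        mkdir -p /opt/shared-cgi
        cp myscript.pl /opt/shared-cgi/
        chmod 755 /opt/shared-cgi/myscript.pl

        # link it into each account's cgi-bin
        for docroot in /home/*/public_html; do
            ln -s /opt/shared-cgi/myscript.pl "$docroot/cgi-bin/myscript.pl"
        done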

  • Windows 7 PATH not expanding

    - by trinithis
    I am using the following to create and edit environment variables on Windows 7: Control Panel\All Control Panel Items\System -> Advanced system settings -> Environment Variables. Under System variables I have the following pertinent variables:

        PROG32=C:\Program Files (x86)
        REALDWG_SDK_DIR=%PROG32%\Autodesk\RealDWG 2011
        Path=%REALDWG_SDK_DIR%;%PROG32%\Haskell\bin

    However, the following happens:

        C:\>echo %PROG32%
        C:\Program Files (x86)

        C:\>echo %Path%
        %REALDWG_SDK_DIR%;C:\Program Files (x86)\Haskell\bin

    Is it possible to have a chain of variables expand? If I rename Path to something else, I sometimes get the problem, and sometimes I don't.

  • Apache Request IP Based Security

    - by connec
    I run an Apache server on my home system that I've made available over the internet, as I'm not always at my home system. Naturally I don't want all my home server files public, so until now I've simply had:

        Order allow, deny
        Deny from all
        Allow from 127.0.0.1

    in my core configuration, and just 'Allow from all' in the .htaccess of any directories I wanted publicly viewable. However, I've decided a better system would be to centralise all the access control and just require authentication (HTTP Basic) for requests not coming from 127.0.0.1/localhost. Is this achievable with Apache/modules? If so, how would I go about it? Cheers.

  • Windows XP computer reboots at start

    - by Jonas
    I have trouble with a Windows XP computer. After Windows starts and I can see the desktop background (sometimes I can use the system for a few seconds), the system reboots before I can do anything. I have used a Windows XP CD and run chkdsk /r from the repair console, but it didn't help. I have also tried booting in "safe mode", but that didn't help either. The C:\Windows\Minidump directory is empty. What can I do to solve this? UPDATE: I have now placed the hard drive in another computer and I have access to all the data. Apart from copying all the data off, is there anything I can do with the system so I can boot from this hard drive again? Is it "safe" to install Windows on the same disk and directory, so that I can access the data but not run the applications?

  • mod_rewrite redirect subdomain to folder

    - by kitensei
    I have a WordPress blog at the URL http://www.orpheecole.com. I would like to set up 3 subdomains (cycle1, cycle2, cycle3), each redirected to its own folder (1 subdomain = 1 WP blog, no multisite enabled). The file tree looks like this:

        /var/www/orpheecole.com/
        /var/www/cycle1.orpheecole.com/
        /var/www/cycle2.orpheecole.com/
        /var/www/cycle3.orpheecole.com/

    The following .htaccess tries to redirect to /var/www/orpheecole.com/cycleX instead of its own directory, but if it's possible I'd rather redirect every subdomain to its own www folder. My sites-enabled file for the main site is:

        # blog orpheecole
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName orpheecole.com
            ServerAlias *.orpheecole.com
            DocumentRoot /var/www/orpheecole.com/
            <Directory /var/www/orpheecole.com/>
                Options -Indexes FollowSymLinks MultiViews
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog /var/log/apache2/orpheecole.com-error_log
            TransferLog /var/log/apache2/orpheecole.com-access_log
        </VirtualHost>

    and the .htaccess located in /var/www/orpheecole.com/ looks like this:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteCond %{HTTP_HOST} !^www.* [NC]
            RewriteCond %{HTTP_HOST} ^([^\.]+)\.orpheecole\.com$
            RewriteCond /var/www/orpheecole.com/%1 -d
            RewriteRule ^(.*) www\.orpheecole\.com/%1/$1 [L]

            # BEGIN WordPress
            RewriteBase /
            RewriteRule ^index\.php$ - [L]
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
            # END WordPress
        </IfModule>

    I tried removing the WordPress directives but nothing changed, and the rewrite module is enabled and working.

  • Proving file creation dates

    - by Nils Munch
    In a weird case surrounding the copyright of a software system I have developed, I rely on the fact that I have all the source files of the system in question, created long before I joined the company that claims to own the system. The company being sued by yours truly says that I have simply manipulated the files to appear to be from that date. Is it even possible to fake or manipulate creation dates? And if so, how can I "prove" that the files really are that old? Luckily, I stored my project on GitHub, which confirms that the files are from that era, but that is beside the point. I run purely Apple OS X.

  • PATH env variable on Mac OS X and/or Eclipse

    - by Jason S
    When I print out the path in bash, it prints this:

        /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin

    When I run System.out.println(System.getenv("PATH")); in Java running under Eclipse, it prints:

        /usr/bin:/bin:/usr/sbin:/sbin

    How can I figure out why there is this discrepancy? I need to add /usr/local/bin to the PATH and make it available to Java apps under Eclipse. (Note: I have made no modifications to the system paths, so these are the defaults set by the OS, or perhaps by one or more of the applications I've installed.)
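
    One explanation worth checking: GUI applications on OS X don't read the shell's startup files, so Eclipse only sees the stock environment rather than whatever bash builds up. A simple hedged workaround is to start Eclipse from a shell so it inherits that shell's PATH (the Eclipse.app location below is an assumption):

        # extend PATH for this shell, then launch the Eclipse binary directly
        export PATH="$PATH:/usr/local/bin"
        /Applications/eclipse/Eclipse.app/Contents/MacOS/eclipse &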

  • virtual host settings fail on multiple sites

    - by Ricalsin
    Wow, I'm puzzled. On my Ubuntu system I've set up an apache2 server and configured three virtual hosts in the /etc/apache2/sites-available directory, then used a2ensite to symlink them into sites-enabled. The first two work great: a simple URL of localhost.mysitenames.com works for both, and each finds its DocumentRoot and Directory paths. The third always generates a Bad Request (Invalid Hostname) response; nothing shows up in the server error.log, as the request never hits it. I've copied/pasted the working vhost files and made the minor changes to the ServerName, DocumentRoot and Directory, and the same problem persists. I always "sudo /etc/init.d/apache2 restart" whenever I make a change, and I've cleared the browser cache as well. No love. There's not a limit to the number of sites you can host, right? My goal is a localhost development environment with the expectation that I can run any number of websites locally before pushing them to a live server. Any thoughts on how to debug this? Or just a simple solution I am missing?
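
    A quick sketch of how this kind of vhost mismatch is usually debugged (the hostname below is a stand-in for the third site's ServerName):

        # show how Apache has parsed the vhost configuration and which
        # file and line each ServerName comes from
        apache2ctl -S

        # ask the server directly for the failing name, bypassing DNS and the browser
        curl -v -H 'Host: localhost.mysitename3.com' http://127.0.0.1/

    If the name doesn't appear in the apache2ctl -S output, the config isn't being loaded (symlink, syntax, or NameVirtualHost issue); if it does appear but curl still gets Bad Request (Invalid Hostname), the response is more likely coming from something in front of Apache, since Apache itself normally falls back to the first vhost rather than rejecting unknown hostnames.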

  • Maximizing after moving RDC window between different size monitors

    - by msorens
    My Win7 system has two monitors of different sizes. When I open a Remote Desktop Connection on one monitor set to use full screen, both the RDC window and the remote system's desktop fill the monitor. If I then move the window onto my second monitor (1 - Restore Down button to make it movable; 2 - drag the window to the other monitor; 3 - Maximize button to fill the monitor), the RDC window fills the monitor, but the remote system's desktop remains the same size it was before. Thus, if I move from the larger to the smaller monitor I get scrollbars to see the whole remote desktop, while if I move from the smaller to the larger monitor the remote desktop occupies only a portion of the monitor. My workaround is to close the RDC window completely and re-establish it on the other monitor. Is there a way to avoid this overhead and just resize the remote desktop to fit?

  • My notebook can't reboot after reinstalling the operating system unless a display driver is installed

    - by RawR Crew
    I have a small problem after reinstalling my notebook with Windows 7 or Windows XP Home Edition: I can't reboot the system if I haven't installed the display driver (ATI Radeon). The Restart button in the shutdown menu disappears, which means the system can't be rebooted from there. When I install the display driver, the Restart button in the shutdown menu appears again, which means I can access it. I just want to know: why does this happen? Does the display driver have an influence on the reboot process?

  • High CPU Steal percentage on Amazon EC2 Instance

    - by Aditya Patawari
    I am experiencing a high CPU steal percentage on an Amazon EC2 large instance. I know it means that my virtual CPU is waiting on the real CPU of the machine for time. My question is: what can I do to reduce this percentage and get the maximum out of the CPU? The steal percentage is consistently at 20%, and system load crosses 10 when this happens. I have checked memory and network and I am sure that they are not the bottleneck. Is that normal for such an environment? Also, are there any system-level optimization techniques for reducing the steal percentage from within the virtual instance?

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  52.38   0.00     8.23     0.00   21.21  18.18

  • Outlook mail/calendar items give errors after server migration

    - by Mike B
    Last Friday our Exchange server was migrated by our external system administrator to a new server with a new server name. Since then we have problems with the calendar/mail items that were created/sent/received on the old server:

    - Replies to mails bounce if we use autocomplete in the To field. If we cancel autocomplete and manually enter the (same) e-mail address, there's no problem. Our system administrator says this is because autocomplete fills in the old server name (???).
    - Calendar items created on the old server cannot be edited without an error and must be recreated if we want to change them.

    Our system administrator says these problems are normal with a server migration. I cannot believe this; there must be a better way. Am I right?
