Search Results

Search found 12676 results on 508 pages for 'virtual directories'.


  • Cooperative Linux vs. VM

    - by Rhythmic Algorithm
    What are the advantages and disadvantages of using Cooperative Linux (Portable Ubuntu, for example) compared to QEMU or any other virtual machine installation? Is one option notably faster than the other, and are there other things that should be taken into consideration?

    Read the article

  • Problems while applying an svn patch to a mercurial repository

    - by user26453
    The patch file was made with TortoiseSVN's "Create Patch..." command. I'm attempting to import the patch into the Mercurial repository using hg import patchfile. The problem I'm running into is that hg seems to have trouble locating the files referenced in the patch file:

        unable to find 'gui/gui/RemoteFramework.cpp' for patching
        2 out of 2 hunks FAILED -- saving rejects to file gui/gui/RemoteFramwork.cpp.rej

    It seems to be an issue of where the patch was made in terms of directories versus where it is being applied. I have tried playing with the --base option for hg import, but haven't gotten anywhere just yet. Anyone have any tips?
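
    One knob worth experimenting with is the patch strip level: hg import applies patches with a default directory strip of 1 (the -p/--strip option), while a TortoiseSVN patch uses paths relative to the folder it was created in, so both the strip level and the directory you import from usually matter. A hedged sketch; the strip value and --no-commit are illustrative only:

        cd /path/to/hg/repo        # or the subdirectory matching where the patch was created
        hg import --no-commit -p 0 patchfile   # experiment with -p 0 / 1 / 2 until the paths resolve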

    Read the article

  • Blocking a specific URL by IP (a URL create by mod-rewrite)

    - by Alex
    We need to block a specific URL for anyone not on a local IP (anyone without a 192.168.x.x address). However, we cannot use Apache's

        <Directory /var/www/foo/bar>
            Order allow,deny
            Allow from 192.168
        </Directory>

    or

        <Files /var/www/foo/bar>
            Order allow,deny
            Allow from 192.168
        </Files>

    because those block specific files or directories. We need to block a specific URL which is created by mod_rewrite, with the page generated dynamically by PHP. Any ideas would be greatly appreciated.
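
    Since <Location> matches on the request URL rather than on the filesystem, it can cover a path that only exists because of mod_rewrite. A minimal sketch, assuming Apache 2.2-style access control and using /foo/bar as a placeholder for the rewritten URL path (it may need adjusting if the rule issues an external redirect):

        <Location /foo/bar>
            Order deny,allow
            Deny from all
            Allow from 192.168
        </Location>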

    Read the article

  • Sub-process /usr/bin/dpkg returned an error code (1)

    - by rohit
    Hey friends, I am getting the following error when I try to purge shorewall:

        root@aptosid:/etc# apt-get purge shorewall
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be REMOVED:
          shorewall*
        0 upgraded, 0 newly installed, 1 to remove and 3 not upgraded.
        1 not fully installed or removed.
        After this operation, 1,843 kB disk space will be freed.
        Do you want to continue [Y/n]?
        (Reading database ... 212702 files and directories currently installed.)
        Removing shorewall ...
        : not found/shorewall: 25: /etc/default/shorewall: :q
        Stopping "Shorewall firewall": not done (check /var/log/shorewall-init.log).
        invoke-rc.d: initscript shorewall, action "stop" failed.
        dpkg: error processing shorewall (--purge):
         subprocess installed pre-removal script returned error exit status 1
        configured to not write apport reports
        Errors were encountered while processing:
         shorewall
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        root@aptosid:/etc#

    Please help me out.
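
    The ": not found ... /etc/default/shorewall: :q" line suggests the package's pre-removal script is sourcing /etc/default/shorewall, and that file has a stray ":q" on line 25 (it looks like a vi quit command that got saved into the file). A hedged sketch of one way to recover, assuming the stray line contains nothing but ":q":

        # remove the bogus line from the sourced defaults file, then retry the purge
        sed -i '/^:q/d' /etc/default/shorewall
        apt-get purge shorewall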

    Read the article

  • Apache consuming large memory with Apache Mobile Filter

    - by VarunGupta
    I installed and configured the Apache Mobile Filter module for Apache to redirect users to the mobile version of our website, depending on their user agents. I configured the module to use WURFL. But as soon as I start Apache it hogs a large amount of memory (without any incoming web requests):

        Resident memory: 300 to 400 MB
        Virtual memory:  300 to 650 MB

    Without this module, Apache was consuming much less memory (4 to 10 MB). What could be the reason here?

    Read the article

  • Adding git branch to bash prompt on snow leopard

    - by crayment
    I am using this:

        $(__git_ps1 '(%s)')

    It works, however it does not update when I change directories or check out a new branch. I also have this alias:

        alias reload='. ~/.bash_profile'

    Sample run:

        user@machine:~/dev/rails$cd git_folder/
        user@machine:~/dev/rails/git_folder$reload
        user@machine:~/dev/rails/git_folder(test)$git checkout master
        Switched to branch 'master'
        user@machine:~/dev/rails/git_folder(test)$reload
        user@machine:~/dev/rails/git_folder(master)$

    As you can see, it is being set correctly, but only if I reload bash_profile. I have wasted way too much time on this. I am using bash on Snow Leopard. Please help!
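
    The usual culprit is quoting: if the $(__git_ps1 ...) substitution sits inside double quotes when PS1 is assigned, it is expanded once at assignment time instead of at every prompt. A minimal sketch of a .bash_profile line using single quotes so the substitution is deferred (the rest of the prompt string is just an example):

        PS1='\u@\h:\w$(__git_ps1 " (%s)")\$ '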

    Read the article

  • Access localhost on Mac OS X from Parallels machine

    - by AntonAL
    Hi, I need to test my web site, which runs on a local Mac, under several browsers in Windows. I use Windows XP installed in Parallels Desktop. It would be great to be able to reach my http://localhost:3000 from Windows while sitting in the virtual environment (Parallels). How do I wire all this up?
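
    One common approach is to point the Windows browsers at the Mac's address on the Parallels shared-networking interface instead of at localhost. A hedged sketch (the interface name vnic0 and the 10.211.55.2 address are typical Parallels defaults, not guarantees; check your own setup):

        # On the Mac: find the host's address on the Parallels virtual interface
        ifconfig vnic0 | grep 'inet '
        # In the Windows guest: browse to that address, e.g. http://10.211.55.2:3000
        # Optionally give it a name in C:\Windows\System32\drivers\etc\hosts:
        #   10.211.55.2   dev.mymac.local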

    Read the article

  • How can I troubleshoot Virtualbox port forwarding from Windows guest to OSX host not working?

    - by joe larson
    There are a plethora of questions about VirtualBox port forwarding problems, but none with my specific details. I have a Windows install living in VirtualBox, hosted within OSX. I've got several webservers running on localhost on different ports within the Windows install. I cannot for the life of me get port forwarding to work so I can access those webservers from OSX. My VM uses a NAT adapter, and in my vbox configuration file the relevant portion looks like this:

        <NAT>
          <DNS pass-domain="true" use-proxy="false" use-host-resolver="false"/>
          <Alias logging="false" proxy-only="false" use-same-ports="false"/>
          <Forwarding name="RLPWeb" proto="1" hostport="7084" guestip="127.0.0.1" guestport="7084"/>
          <Forwarding name="UtilWeb" proto="1" hostport="4040" guestip="127.0.0.1" guestport="4040"/>
          <Forwarding name="WCARLP" proto="1" hostport="8084" guestip="127.0.0.1" guestport="8084"/>
          <Forwarding name="WCAUtil" proto="1" hostport="4848" guestip="127.0.0.1" guestport="4848"/>
        </NAT>

    I've turned off the Windows firewall to ensure it is not interfering, and I am not running a firewall on OSX. Anyway, when I attempt to go to, for example, http://127.0.0.1:4040/ in any of my OSX browsers, it eventually times out. The log file for this VM shows that it is correctly reading the settings and implies it's doing the right thing here:

        00:00:08.286 NAT: set redirect TCP host port 4848 => guest port 4848 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 8084 => guest port 8084 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 4040 => guest port 4040 @ 127.0.0.1
        00:00:08.286 NAT: set redirect TCP host port 7084 => guest port 7084 @ 127.0.0.1
        00:00:08.290 Changing the VM state from 'LOADING' to 'SUSPENDED'.
        00:00:08.290 Changing the VM state from 'SUSPENDED' to 'RESUMING'.
        00:00:08.290 Changing the VM state from 'RESUMING' to 'RUNNING'.
        00:00:08.337 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=000000012017d000 w=1834 h=929 bpp=32 cbLine=0x1CA8, flags=0x1
        00:00:09.139 AIOMgr: Host limits number of active IO requests to 16. Expect a performance impact.
        00:00:13.454 NAT: DHCP offered IP address 10.0.2.15

    I've tried setting the host IP to 127.0.0.1, and I've tried setting the guest IP blank and also 10.0.2.15. None of these seem to help. What else can I look at to troubleshoot this issue? Details of the setup: OSX 10.6.8, Windows 7 Professional 64-bit, VirtualBox 4.1.2.
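
    One thing that stands out is guestip="127.0.0.1" in the forwarding rules: that targets the guest's loopback interface, which the NAT engine generally cannot deliver to, and a server bound only to 127.0.0.1 inside Windows will not accept forwarded connections anyway. A hedged sketch of recreating one rule with the guest IP left empty (the VM name and rule name are placeholders), assuming the services in the guest are configured to listen on all interfaces:

        VBoxManage modifyvm "Windows 7" --natpf1 delete "UtilWeb"
        VBoxManage modifyvm "Windows 7" --natpf1 "UtilWeb,tcp,127.0.0.1,4040,,4040"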

    Read the article

  • Ubuntu + latest samba version, symlinks no longer work on share mounted in windows

    - by Roy Rico
    I just apt-getted (apt-got?) the latest software for my Ubuntu 9.10 Linux box, and I noticed that samba was included in the update. After the install, the symlinks in my home directory no longer work when the share is mounted as a drive on my Windows box. They worked literally seconds before I did the update. All my normal directories work just fine. Viewing the directory listing on the command line, all the files, dirs and links have the exact same permissions, yet this is the error I get:

        Location is not available
        L:\LinkDir is not accessible.
        Access is denied.

    I looked on the forums, and I saw this option for smb.conf:

        follow symlinks = yes
        wide symlinks = yes
        unix extensions = no

    I put those in, but they had no effect. Has anyone had this problem yet?
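
    For what it's worth, the smb.conf parameter is spelled "wide links", not "wide symlinks", and recent Samba releases (3.4.6 and later) disable wide links whenever unix extensions are enabled, which matches the timing of an update breaking previously working symlinks. A minimal sketch of the relevant stanza, assuming the links point outside the shared tree and you accept the security trade-off:

        [global]
            unix extensions = no
            follow symlinks = yes
            wide links = yes

    Restart or reload smbd after the change, and remount the drive on the Windows side.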

    Read the article

  • cPanel WHM virtualhost sample

    - by Prix
    Hi, could anyone possibly post a virtual host sample from a working httpd server, if possible with the most features enabled (like suPHP, suEXEC, and PHP directives such as engine on/off and others)? The reason is that I wanted to see how it is formatted and built per vhost. It's been a long time since I've used cPanel or had it available, so I can't really get one myself; I've been googling for it but haven't found it at all. Much appreciated.
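
    Not an actual cPanel-generated block, but a rough sketch of the general shape of a per-account VirtualHost with suEXEC and suPHP-style directives, shown purely for illustration (the IP, paths, user/group names and log locations are all assumptions, and cPanel's real output includes more than this):

        <VirtualHost 192.0.2.10:80>
            ServerName example.com
            ServerAlias www.example.com
            DocumentRoot /home/exampleuser/public_html
            ServerAdmin webmaster@example.com
            UseCanonicalName Off
            CustomLog /usr/local/apache/domlogs/example.com combined
            ScriptAlias /cgi-bin/ /home/exampleuser/public_html/cgi-bin/
            # suEXEC: run CGI scripts as the account user
            SuexecUserGroup exampleuser exampleuser
            # suPHP: run PHP as the account user
            <IfModule mod_suphp.c>
                suPHP_Engine on
                suPHP_UserGroup exampleuser exampleuser
            </IfModule>
        </VirtualHost>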

    Read the article

  • rsyslog from Heroku drain creates empty log files

    - by Jeff Lee
    I'm sending logs from my Heroku app to an rsyslog server, but the resulting log files seem to come up empty. The rsyslog configuration for receiving remote messages is as follows:

        $template RemoteDailyLog,"/var/log/remote/%hostname%/%$year%/%$month%/%$day%.log"
        :fromhost-ip, !isequal, "127.0.0.1" -?RemoteDailyLog
        & ~

    My complete rsyslog configuration is available in this paste. This configuration appears to create the directories correctly. I see the Heroku app's logging hostname (of the form "d.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx") appear in /var/log on the rsyslog host, which implies that log messages are successfully making it to the logging daemon, but the resulting logfiles are zero-size. I'm guessing the issue is with rsyslog, rather than Heroku, but I'm not sure where to look next.

    Read the article

  • Best way to monitor a grid of computers?

    - by marc.riera
    Hello, I've installed Sun Grid Engine on 10 nodes and one virtual master host. Now I have to monitor all the resources prior to launching it into production, but I don't know the best way to do it. I've tried using xml-qstat, but it seems unstable. Any tips or suggestions? Anyone got experience with this? Thanks.

    Read the article

  • HTTP/1.1 503 Service Unavailable Exchange 2003 OWA

    - by toups
    I'm receiving "HTTP/1.1 503 Service Unavailable" when trying to access Exchange web access (OWA) on Server 2003 & Exchange 2003. I have tried:

        Deleting the virtual directories and rebuilding them in IIS
        Restarting
        Dismounting and mounting the public folder store and mailbox store
        Checking AppPoolQueueLength (it's not that)
        Restarting all services (all the proper ones are running)
        Making it use SSL or not (regular HTTP) -- same thing for both
        Installing IIS and Exchange 2003 on a new 2003 box as a new FE server; OWA on that gets the same error

    Not sure why it's getting a 503... it was working without an issue and is now stuck at this no matter what. All users can still successfully connect and work via Outlook; only the web access is having issues. Any suggestions would be appreciated.

    Read the article

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc.

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, done automatically in response to the detection of an uploaded MP3. Probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and then use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/FTP the MP3 files, convert them to Ogg, and FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS which I could upload without needing root access, but I've given up asking for that one. Always open to suggestions, though!
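
    For the conversion step itself, a minimal sketch of a shell loop around SoX, assuming a SoX build with MP3 read and Ogg Vorbis write support (the paths are placeholders; ffmpeg would do the same job if a static binary of it is easier to come by):

        #!/bin/sh
        # convert every MP3 in the upload directory to Ogg Vorbis alongside it
        for f in /path/to/uploads/*.mp3; do
            sox "$f" "${f%.mp3}.ogg"
        done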

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to debug it further, I'm asking for your ideas on the topic.

    I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through libvirt. The server has two different 3TB hard drives, as one of the drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split over three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack:

        sda (physical HDD) -> md0 (RAID1), md1 (RAID1)
        sdb (physical HDD) -> md0 (RAID1), md1 (RAID1)
        md0 (boot RAID)    -> ext4 (/boot)
        md1 (data RAID)    -> LUKS container -> LVM physical volume
                              -> LVM volume hypervisor-root
                              -> LVM volume hypervisor-home
                              -> LVM volume hypervisor-swap
                              -> … (virtual machine volumes)

    The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume as well. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems. On the old setup the following things were different:

        Debian Lenny was the OS for both hypervisor and almost all guests
        Xen software virtualisation (therefore no Virtio, either)
        no libvirt management
        different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore)
        we had no IPv6 connectivity
        neither in the hypervisor nor in the guests did we have noticeable I/O latency problems

    According to the datasheets, the current hard drives and the ones in the old machine have an average latency of 4.12 ms.

    Read the article

  • Accurate Windows equivalent of the Unix which(1) command

    - by SamB
    It's easy enough to write a simple script that works like the which(1) command from Unix, which searches for a given command along the PATH. Unfortunately, the CreateProcess function is not so simple, so this type of script does not give accurate results: CreateProcess looks in a number of directories not in the PATH, looks for files with all of the extensions listed in PATHEXT, etc. Worse, who knows what might be added in future versions of Windows? Anyway, my question is: is there a robust, accurate which(1) equivalent for Windows which always tells you what file CreateProcess would find?
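
    Worth noting that modern Windows versions ship where.exe, which searches the current directory and the PATH, and PowerShell has Get-Command; neither reproduces CreateProcess's exact lookup order (application directory, current directory, system directories, then PATH), but they cover the common case. A quick sketch:

        C:\> where notepad

        PS C:\> Get-Command notepad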

    Read the article

  • Can't connect using Jail SFTP account

    - by Fazal
    I've been following the tutorial "Limiting Access with SFTP Jails on Debian and Ubuntu" and, whilst I've had no errors setting it up, I've had issues on Ubuntu 10.04 LTS logging in as a user on a virtual host. I've changed my SSH port to 22022, and I enter all the credentials when attempting to log in. I ran these commands to add a user to the virtual host:

        # useradd -d /srv/www/[domain] [username]
        # passwd [username]
        # usermod -G filetransfer [username]
        # chown [username]:[username] /srv/www/[domain]/public_html

    I should add that this is the only time I've set up the user; they have no other /home directories or such. The directory that does exist is at /srv/www/example.com/public_html. When I try using a desktop package such as Cyberduck to log in to the site, I keep getting a "Login failed with this username or password" error. I am completely lost as to what to do next... The reason why I'm trying this method is that I want my clients to use SFTP and not FTP to upload files to their websites. Any help or direction is appreciated.
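
    For reference, the jail from that tutorial boils down to an sshd_config Match block plus strict ownership on the chroot directory. A hedged sketch (the group name and paths follow the question; details vary by setup): sshd requires the ChrootDirectory itself to be owned by root and not group- or world-writable, the user then writes inside public_html, and the client has to be pointed at the non-standard port 22022.

        # /etc/ssh/sshd_config
        Port 22022
        Subsystem sftp internal-sftp

        Match Group filetransfer
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no

        # ownership of the jail root (the user keeps write access inside public_html):
        # chown root:root /srv/www/[domain]
        # chmod 755 /srv/www/[domain]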

    Read the article

  • Repartition Ubuntu by command line?

    - by DisgruntledGoat
    On my server the filesystem includes these partitions (output from df -ah):

        Filesystem  Size  Used  Avail  Use%  Mounted on
        /dev/sda6   4.6G  929M   3.5G   21%  /
        /dev/sda5    76M   20M    53M   27%  /boot
        /dev/sda8   449G  199M   426G    1%  /home
        /dev/sda7   4.6G  4.4G      0  100%  /var

    I'm storing the websites and databases under /var and, as you can see, it has filled up. The /home folder just has basic user directories and nothing else, so I'd like to repartition the server so that /dev/sda8 is about 5GB, with the rest going to /dev/sda7. What's the easiest way to do this via the command line (i.e. over SSH)?
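
    Resizing /dev/sda7 into /dev/sda8's space over SSH is awkward because both partitions are mounted and adjacent on disk. A commonly used workaround, sketched below under the assumption that the bulk of the data lives in /var/www, is to move the big directory onto the large /home partition and bind-mount it back into place rather than repartitioning at all:

        # stop the services using /var/www first (web server, databases, as appropriate)
        mv /var/www /home/var-www
        mkdir /var/www
        mount --bind /home/var-www /var/www
        # make it persistent with an /etc/fstab entry:
        #   /home/var-www  /var/www  none  bind  0  0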

    Read the article

  • Sharing storage on Linux and Solaris

    - by devlearn
    I'm looking for a solution for sharing a SAN-mounted volume between several hosts running Linux (RHEL) and/or Solaris (SPARC). Note that I basically need to share a set of directories containing large binary files that are accessed in random read/write mode. I have the following requirements:

        keep the data on the SAN
        suitable I/O performance, as the software is pretty demanding on IOPS
        stick to a shared file system, as I can't afford a cluster fs (lack of MDS/OSS infrastructure)
        compression could be really useful

    For now I've found only the following candidates:

        GFS2: supports Linux only, no compression
        VxFS: supports Linux and Solaris, compression supported

    So if you have some suggestions for this list, I'll really welcome them. Thanks in advance.

    Read the article

  • htaccess rewrite and auth conflict

    - by Michael
    I have two directories, each with a .htaccess file. html/.htaccess has a rewrite to send almost everything to url.php:

        RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$
        RewriteRule (.*)$ /url.php [L]

    and html/exported/.htaccess:

        AuthType Basic
        AuthName "exported"
        AuthUserFile "/home/siteuser/.htpasswd"
        require valid-user

    If I remove html/exported/.htaccess, the rewriting works fine and the exported directory can be accessed. If I remove html/.htaccess, the authentication works fine. However, when I have both .htaccess files, requests under exported/ are rewritten to /url.php. Any ideas how I can prevent that?
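
    The existing RewriteCond only exempts URLs that end in "exported" or "exported/", so requests for pages inside that directory (and the 401 error document the authentication triggers) can still fall through to /url.php. A minimal sketch of the usual fix in html/.htaccess, excluding the whole subtree and letting Apache serve its built-in 401 response:

        ErrorDocument 401 default
        RewriteCond %{REQUEST_URI} !^/exported(/|$)
        RewriteCond %{REQUEST_URI} !(exported/?|\.(php|gif|jpe?g|png|css|js|pdf|doc|xml|ico))$
        RewriteRule (.*)$ /url.php [L]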

    Read the article
