Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 331/1664 | < Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >

  • CentOS: installing SVN says I don't have perl 1.17, but I have 5.8 installed

    - by Emerson
    I'm trying to install SVN on a CentOS virtual machine, using the command the CentOS wiki gives (http://wiki.centos.org/HowTos/Subversion):
      yum install mod_dav_svn subversion
    It gives me a few errors:
      --> Finished Dependency Resolution
      mod_dav_svn-1.4.2-4.el5_3.1.i386 from base has depsolving problems
      --> Missing Dependency: httpd-mmn = 20051115 is needed by package mod_dav_svn-1.4.2-4.el5_3.1.i386 (base)
      subversion-1.4.2-4.el5_3.1.i386 from base has depsolving problems
      --> Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.i386 (base)
      Error: Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.i386 (base)
      Error: Missing Dependency: httpd-mmn = 20051115 is needed by package mod_dav_svn-1.4.2-4.el5_3.1.i386 (base)
    The thing is that I have Perl 5.8 installed:
      root@server [~]# rpm -q perl
      perl-5.8.8-32.el5_5.2
    I also don't know why it says httpd-mmn isn't available; I definitely have Apache installed. From what I read at www.sitepoint.com/forums/showthread.php?t=485683 it seems I would need to recompile Apache. Any ideas?
    Update: I also tried to install Subversion via WHM (11.28.35) and it gives the same error. By the way, WHM reports: CENTOS 5.5 i686 virtuozzo on server.
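
    A tentative note for readers hitting the same errors: perl(URI) is a virtual capability normally supplied by the perl-URI package rather than by perl itself, and the httpd-mmn mismatch usually means the packaged mod_dav_svn was built against a different Apache build than the one installed (common on cPanel/WHM servers that compile their own httpd). A quick sketch to confirm, assuming stock CentOS 5 repositories:
      # see which package supplies the missing Perl capability, then install it
      yum provides 'perl(URI)'
      yum install perl-URI
      # compare the module magic number the installed httpd advertises with the one mod_dav_svn expects
      rpm -q --provides httpd | grep httpd-mmn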

    Read the article

  • Multiple URLs to one website with a wildcard SSL certificate

    - by dagda1
    Hi, at the moment we have 27 single sites in IIS6, each with its own URL, all subdomains of the same domain, e.g. https://company1.mycompany.com, https://company2.mycompany.com, etc. To further complicate things, there is one wildcard certificate that covers *.mycompany.com and is assigned to each website. All of these websites run on the same codebase. We want to consolidate them into one website. Are there any issues with having a large number of host headers running under one IIS6 site, or is there a better way of configuring the site? Thanks, Paul

    Read the article

  • using monit to restart custom daemon

    - by Jenis Sam
    I wrote a PHP daemon using the System_Daemon PEAR class. How do I use monit to restart it when it fails? I have the following in my monit config file:
      check process merge with pidfile /var/www/merge/merge.pid
        group 1000
        start program = "/etc/init.d/merge start"
        stop program = "/etc/init.d/merge stop"
        IF CHANGED PID then restart
    My goal is simply this: if the daemon fails (stops running because of an error), I want monit to start it again.
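
    A minimal sketch of how this is commonly handled (assuming the pid file path and init script above are correct): with start and stop programs defined, monit's default behaviour is already to start the process again whenever the pid in the pidfile no longer refers to a running process, so the "if changed pid then restart" line isn't needed for crash recovery and can trigger extra restarts if the daemon legitimately re-forks.
      # /etc/monit/conf.d/merge sketch
      check process merge with pidfile /var/www/merge/merge.pid
        start program = "/etc/init.d/merge start"
        stop program  = "/etc/init.d/merge stop"
        # optional: stop retrying if the daemon keeps dying immediately
        if 5 restarts within 5 cycles then timeout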

    Read the article

  • Jailkit - allowing use of Java/Python

    - by James hooker
    I'm looking to allow jailed users to use Java (the JRE) to execute their own Java files, and similarly Python to execute .py scripts. I've tried adding the Java binary (/usr/bin/java) to a jail with:
      jk_cp /path/to/jail /usr/bin/java
    This seems to copy some of the libraries across, as well as the Java binary itself, but the jailed user is still unable to execute Java. It first complains about a missing lib called libsli.so; I copied that across, and it then complains about libjava.so, which I proceeded to copy across too, to no avail. Does anyone have experience with enabling the execution of Java within a jailed environment? This is under Ubuntu 10.04 64-bit, with Jailkit 2.11.
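
    One possible direction (the JVM path below is an assumption for Ubuntu 10.04's OpenJDK package and may differ on your box): /usr/bin/java is usually only a symlink, and the JRE loads many libraries relative to its own install directory, so copying the whole JVM tree into the jail tends to work better than chasing individual .so files.
      # resolve the real binary and list everything it links against
      readlink -f /usr/bin/java
      ldd "$(readlink -f /usr/bin/java)"
      # copy the resolved binary plus the whole JVM directory into the jail
      jk_cp /path/to/jail "$(readlink -f /usr/bin/java)"
      jk_cp /path/to/jail /usr/lib/jvm/java-6-openjdk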

    Read the article

  • Adobe Volume License for Indesign Digital Download Location?

    - by elistp
    We recently purchased a volume license for Adobe InDesign from Dell. We received an e-mail for the order that contains the serial #. However, there is no information on how to obtain InDesign from the Adobe licensing portal. This is our first time dealing with Adobe volume licensing, so I'm a bit lost as to what we're supposed to do. I've googled around a little and found an Adobe License Portal, but I do not have access to it. Does anyone with experience with Adobe volume licensing have any idea what we're supposed to do to get a download of our purchase?

    Read the article

  • How to create a snapshot volume to a remote server using kvm?

    - by Purres
    I want to back up a few virtual machines to a backup server. Here are the backup steps:
      1. suspend a virtual machine
      2. create a snapshot of the virtual machine using lvcreate -s
      3. resume the virtual machine
      4. dd if=/virtual_machine_path | lzop > /temp/backup.lzo
      5. rsync /temp/backup.lzo -e "ssh " 1.2.3.4:/backup_path/
    However, the hypervisor server doesn't have enough hard disk space to create a snapshot in step 2. Is there a way to create a logical volume snapshot on a remote server?
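
    A sketch of a common workaround (volume group, LV names and sizes below are placeholders): an LVM snapshot always has to live in the same volume group as the origin, but it only needs room for blocks that change while the backup runs, not a full copy, and the dump itself can be streamed straight to the backup server so no local /temp file is needed either.
      # small copy-on-write snapshot in the same VG (size only has to cover changes during the backup)
      lvcreate -s -n vm1-snap -L 2G /dev/vg0/vm1
      # stream the snapshot, compressed, directly over ssh instead of staging it locally
      dd if=/dev/vg0/vm1-snap bs=4M | lzop | ssh user@1.2.3.4 'cat > /backup_path/vm1.lzo'
      lvremove -f /dev/vg0/vm1-snap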

    Read the article

  • Capistrano deploying to different servers with different authentication methods

    - by marimaf
    I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) server). I already have Capistrano working for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add ssh options, for example to use the .pem file, like this:
      ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
      ssh_options[:forward_agent] = true
    I have browsed Stack Overflow and no post mentions how to deal with different authentication methods (this and this). I found a post that talks about 2 different keys, but it refers to a server and a git remote, each using different pem files, which is not my case. I also got to a tutorial, but couldn't find what I need. I don't know if this is relevant to what I am asking: I am working on a Rails app with ruby 1.9.2p290 and rails 3.0.10, and I am using an svn repository. Please, any help is welcome. Thanks a lot
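
    One way around this that keeps the Capistrano recipes identical for both targets (hostnames and usernames below are placeholders): let OpenSSH pick the key per host via ~/.ssh/config on the machine you deploy from, so the AWS box authenticates with the .pem file and the university server keeps its existing method, with no ssh_options needed in deploy.rb at all.
      # ~/.ssh/config on the deploying machine (a sketch, not a tested setup)
      Host ec2-1-2-3-4.compute-1.amazonaws.com
          User ubuntu
          IdentityFile ~/.ssh/test.pem
          ForwardAgent yes
      Host deploy.university.example.edu
          User marimaf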

    Read the article

  • Magento Apache Config & Memory Issues

    - by cheshirepine
    I have a Magento installation on a VPS that is giving me a headache. This particular VPS has a reasonable spec - 2GB memory and 50GB storage. It runs a single domain, with a single Magento install - and nothing else. About 5 months ago we started having issues. Every so often (about once every 2 or 3 weeks) the VPS would crash - all processes stopped and the only way to restart the container is via Virtuozzo. Now, however, it's 2 or 3 times a week. My VPS hosts confirm I am breaching the 2GB memory limit, at which point all VPS processes are killed to stop it bringing the entire node down. I have not made any config changes to it at all - I was running New Relic on it for a short while, but have removed that in case it was contributing to the issues. I can see nothing in the logs which indicates an issue, and we have no CRON jobs running at the time the crashes happen. The site generates steady, but not huge, amounts of traffic (averaging usually less than 100 visits per day). Is there anything in particular I should have done to the Apache or PHP configs to help? I'm not a massively experienced Apache admin, but know more than enough to solve most problems... Failing that, any other ideas that might help? Can't afford for this site to be down this much.
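
    A starting point rather than a definitive fix (the per-process memory figure is an assumption; check your own processes with top's RES column): on a 2GB VPS the usual culprit is Apache's prefork MPM being allowed to spawn far more PHP-laden children than the RAM can hold, so capping the worker count and PHP's memory_limit keeps a traffic or bot spike from pushing the container over its limit.
      # apache2.conf / httpd.conf sketch, assuming mpm_prefork with mod_php at roughly 60MB per child
      <IfModule mpm_prefork_module>
          StartServers          2
          MinSpareServers       2
          MaxSpareServers       5
          MaxClients           20
          MaxRequestsPerChild 500
      </IfModule>
      # php.ini: Magento wants a decent per-request limit, but not an unbounded one
      memory_limit = 256M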

    Read the article

  • AD server within another network - DNS issues

    - by Harry Muscle
    Here's a quick summary of the environment I support: we have a domain (domain A) that has about 20 client computers. The domain server for this domain and all the clients sit within the network infrastructure of a larger domain (domain B). All the computers get their network settings via DHCP from domain B's servers. I have no control and am unable to make changes to anything to do with domain B. The problem I have is that currently, in order for my domain's (domain A) clients to be able to resolve the domain server and the shares on it, they have their DNS server IP address set to domain A's domain server (via the default GPO). Unfortunately, when a laptop (Windows or Mac) gets taken home, it is still looking for the domain server as its DNS server and obviously can't access the internet correctly outside of our environment. Ideally I need a solution where the machines use domain A's domain server as their DNS when inside the office and use whatever DNS server DHCP gives them when they are outside the office. However, since I have no control over the office DHCP server, I'm not sure how this can be accomplished. Any help and advice that anyone can offer is highly appreciated. Thanks, Harry
    P.S. The solution I'm trying to find needs to require no involvement from the user.

    Read the article

  • xampp apache on windows 7 returns http header only

    - by bumperbox
    I am having issues with xampp running on Windows 7 RC32. I type in a localhost address and get a header back only, no page content. Some days it works fine; other days I can't get it to work after multiple attempts, reboot or otherwise. The request doesn't even get put into the access log, which seems unusual. Here is the log file at startup, in case that helps. Any ideas?
      [Wed Sep 09 12:27:08 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:08 2009] [notice] Digest: done
      [Wed Sep 09 12:27:09 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
      [Wed Sep 09 12:27:09 2009] [notice] Server built: Dec 10 2008 00:10:06
      [Wed Sep 09 12:27:09 2009] [notice] Parent: Created child process 2500
      [Wed Sep 09 12:27:10 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:10 2009] [notice] Digest: done
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Child process is running
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Acquired the start mutex.
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting 250 worker threads.
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 443.
      [Wed Sep 09 12:27:11 2009] [notice] Child 2500: Starting thread to listen on port 80.
      [Wed Sep 09 12:27:15 2009] [notice] Parent: child process exited with status 255 -- Restarting.
      [Wed Sep 09 12:27:15 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:15 2009] [notice] Digest: done
      [Wed Sep 09 12:27:16 2009] [notice] Apache/2.2.11 (Win32) DAV/2 mod_ssl/2.2.11 OpenSSL/0.9.8i PHP/5.2.9 configured -- resuming normal operations
      [Wed Sep 09 12:27:16 2009] [notice] Server built: Dec 10 2008 00:10:06
      [Wed Sep 09 12:27:16 2009] [notice] Parent: Created child process 3252
      [Wed Sep 09 12:27:17 2009] [notice] Digest: generating secret for digest authentication ...
      [Wed Sep 09 12:27:17 2009] [notice] Digest: done
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Child process is running
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Acquired the start mutex.
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting 250 worker threads.
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 443.
      [Wed Sep 09 12:27:18 2009] [notice] Child 3252: Starting thread to listen on port 80.
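
    A first check rather than a diagnosis (the xampp path below is the default install location and is an assumption): "header only, works some days" together with the logged "child process exited with status 255 -- Restarting" often points at another program intermittently grabbing port 80 or 443 (Skype and IIS are the usual suspects on Windows 7) or at a config problem Apache only trips over under load; both are quick to rule out.
      :: see which process currently owns ports 80 and 443
      netstat -ano | findstr ":80 :443"
      :: ask Apache to validate its own configuration
      C:\xampp\apache\bin\httpd.exe -t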

    Read the article

  • Linux process management

    - by tanascius
    Hello, I started a long-running background process (dd reading from /dev/urandom) in my ssh console. Later I had to disconnect. When I logged in again (this time directly, without ssh), the process still seemed to be running. I am not sure what happened - I did not use disown. When I logged in later, the process was not listed in top at first, but after a while it reclaimed a high CPU percentage, as I expected. So I assume dd is still running. Now I'd like to see the progress. I use kill -USR1 <pid> but nothing is printed. Is there any way to get the output again?
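
    A short note on why nothing appears (assuming this dd is the only dd process running): kill -USR1 does reach dd, but dd prints the progress report on its own stderr, which is still attached to the terminal of the session that started it - a terminal that no longer exists - so the output has nowhere visible to go. Two ways to peek from the current session:
      # confirm where dd's stderr currently points
      ls -l /proc/$(pgrep -x dd)/fd/2
      # watch dd's write() calls live (assumes strace is installed); the offsets show progress
      strace -e trace=write -p $(pgrep -x dd)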

    Read the article

  • saslauthd using too much memory

    - by Brian Armstrong
    Woke up today to see my site slow/unresponsive. Pulled up top and it looks like a ton of saslauthd processes have spun up using about 64m of RAM each, pushing the machine into swap. I've never seen this many of them on there.
      top - 16:54:13 up 85 days, 11:48, 1 user, load average: 0.32, 0.50, 0.38
      Tasks: 143 total, 1 running, 142 sleeping, 0 stopped, 0 zombie
      Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 97.3%id, 0.2%wa, 0.0%hi, 0.0%si, 1.4%st
      Mem: 1048796k total, 1025904k used, 22892k free, 14032k buffers
      Swap: 2097144k total, 332460k used, 1764684k free, 194348k cached
      PID   USER    PR NI VIRT  RES   SHR  S %CPU %MEM TIME+     COMMAND
      848   admin   20 0  263m  115m  4840 S 0    11.3 5:02.91   ruby
      906   admin   20 0  265m  113m  4828 S 0    11.1 5:37.24   ruby
      30484 admin   20 0  248m  91m   4256 S 6    9.0  219:02.30 delayed_job
      4075  root    20 0  160m  65m   952  S 0    6.4  0:24.22   saslauthd
      4080  root    20 0  162m  64m   936  S 0    6.3  0:24.48   saslauthd
      4079  root    20 0  162m  64m   936  S 0    6.3  0:24.70   saslauthd
      4078  root    20 0  164m  63m   936  S 0    6.2  0:24.66   saslauthd
      4077  root    20 0  163m  62m   936  S 0    6.1  0:24.66   saslauthd
      3718  mysql   20 0  312m  52m   3588 S 1    5.1  3499:40   mysqld
      699   root    20 0  72744 7640  2164 S 0    0.7  0:00.50   ruby
      15701 postfix 20 0  106m  5712  4164 S 1    0.5  0:00.50   smtpd
      15702 postfix 20 0  52444 3252  2452 S 1    0.3  0:00.06   cleanup
      4062  postfix 20 0  41884 3104  1788 S 0    0.3  125:26.01 qmgr
      15683 root    20 0  51504 2780  2180 S 0    0.3  0:00.04   sshd
      14595 postfix 20 0  52308 2548  2304 S 1    0.2  0:24.60   proxymap
      15483 postfix 20 0  43380 2544  1992 S 0    0.2  0:00.38   smtp
      15486 postfix 20 0  43380 2544  1992 S 0    0.2  0:00.36   smtp
      15488 postfix 20 0  43380 2540  1992 S 0    0.2  0:00.38   smtp
      15485 postfix 20 0  43380 2532  1984 S 0    0.2  0:00.36   smtp
      15489 postfix 20 0  43380 2532  1984 S 0    0.2  0:00.40   smtp
    I wasn't sure what saslauthd is; Google says it handles plaintext authentication. The machine has been sending a lot of email through Postfix, so this could be related. Anyone know why so many may have spun up? Are they safe to kill? Thanks!
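
    A tentative note (the config path below is the Debian/Ubuntu default and is an assumption): saslauthd pre-forks a fixed pool of worker processes rather than spawning them on demand, and Postfix SMTP AUTH keeps them busy, so five resident workers is normal - the more interesting question is why each one has grown so large. Checking and trimming the pool size is a one-line change.
      # how many workers are configured (-n) and how many are actually running
      grep -E '^(START|OPTIONS)' /etc/default/saslauthd
      ps --no-headers -C saslauthd | wc -l
      # e.g. OPTIONS="-c -m /var/run/saslauthd -n 2" keeps the pool at two workers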

    Read the article

  • Force SSL on one page via .htaccess without looping

    - by Will Martin
    Okay, I have this code:
      RewriteCond %{HTTPS} off
      RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
      RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R,L]
    The way I would expect this to work is:
      1. A request for /borrowing/ill/request.php comes in on HTTP.
      2. The rule matches. The server redirects to HTTPS.
      3. The rule does not match, because HTTPS is now on.
    The way it actually works is:
      1. A request for /borrowing/ill/request.php comes in on HTTP.
      2. The rule matches. The server redirects to HTTPS.
      3. The rule matches. The server redirects to HTTPS.
      4. The rule matches. The server redirects to HTTPS... and so on.
    I know that the second condition (matching the file name) is working, because the redirect loop only hits that specific page. The question is, why isn't the switch to HTTPS causing the first condition to not match?
    EDIT: I put the exact same .htaccess rules into a test area on another server -- same file and path info. And they worked just fine. There's got to be something wrong with the server configuration elsewhere.
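
    A sketch of the usual cause (it is an assumption that SSL is terminated in front of Apache here, e.g. by a load balancer or reverse proxy): in that setup the backend request always arrives on plain HTTP, so %{HTTPS} stays "off" even after the redirect and the loop never ends. Keying on the forwarded-protocol header as well breaks the cycle; if Apache terminates SSL itself, check instead whether the HTTPS virtual host re-reads the same .htaccess rules.
      RewriteCond %{HTTPS} off
      RewriteCond %{HTTP:X-Forwarded-Proto} !https
      RewriteCond %{REQUEST_URI} ^/borrowing/ill/request\.php$
      RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]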

    Read the article

  • Using Amazon S3 for multiple remote data site uploads, securely

    - by Aitch
    I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case. We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and is forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day. I want to avoid a situation where I need to provision more VPS storage, or additional servers, every time we hit the VPS capacity limit, after every N server deployments, whatever N might be. The remote servers can never be considered fully secure, because we don't know what might happen to them when we are not looking. Our current solution is a bit naive and simply restricts inbound rsync over ssh to known mac-address-named directories and a known public key. There are plenty of holes to pick in this, I know. Let's say I write or use a script like s3cmd/s3sync to push up the files:
      - Would I need to manage hundreds of access keys and have each server customised to include this? (do-able, but key management becomes nightmarish?)
      - Could I restrict inbound connections somehow (eg by mac address), or just allow write-only to any client that was running the script? (I could deal with a flood of data if someone got into a system?)
      - Having a bucket per remote machine does not seem feasible due to bucket limits?
      - I don't think I want to use a single common key, as if one machine is breached then potentially a malicious hacker could get access to the filestore key and start deleting for all clients, correct?
    I hope my inexperience has not blinded me to some other solution that might be suggested! I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong... I've written more than I should here; perhaps it can be summarised thus: in a perfect world I just want to have one of our techs install a new remote server into a location and it automagically starts sending files home with little or no intervention, and minimises risk. Pipedream or feasible? TIA, Aitch
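
    A sketch of one possible direction, not a tested design (bucket name, path convention and policy wording are assumptions): rather than one shared key or hundreds of hand-managed keys, each box can be given an IAM user created automatically at build time whose policy only allows uploads into its own prefix of a single bucket, so a compromised machine can add objects under its own prefix but cannot read or delete anything.
      {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["s3:PutObject"],
          "Resource": "arn:aws:s3:::sensor-data/${aws:username}/*"
        }]
      }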

    Read the article

  • Not able to connect to a mac client from a windows machine

    - by Manish
    I have a Server.exe file which I use to connect to a Mac (I am fairly confident that Server.exe is not buggy). When I try to do this I get the often-cited error "No connection could be made because the target machine actively refused it". I searched some existing questions about this on the forum and it looked like this might be a firewall issue. FWIW, I don't have any firewall set on my Mac (client), and on my server machine (Windows 7 64-bit) under the firewall settings I have:
      - Incoming connections: Block all connections to programs that are not on the list of allowed programs.
      - Active Domain Networks: Same domain as the one my client is on.
      - Windows Firewall State: Off.
    Do you think I need to change something here? Can someone help me with next steps?
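
    A short checklist rather than a fix (port 9000 and the path are placeholders; which machine should be listening depends on how Server.exe actually works): "actively refused" normally means nothing is listening on that IP and port, or a firewall is rejecting the connection, so the first steps are to confirm the listener exists and to add an explicit allow rule on the listening side.
      :: on the machine that should be listening: is anything bound to the port?
      netstat -ano | findstr ":9000"
      :: Windows 7: allow the program through the firewall explicitly
      netsh advfirewall firewall add rule name="Server.exe inbound" dir=in action=allow program="C:\path\to\Server.exe" enable=yes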

    Read the article

  • Automatically Snapshotting AWS instances (or other backup strategy)

    - by user1172468
    I just realized that my AWS instance count has risen into the double digits. I'm currently backing up portions of my folders and DBs and moving them off to a backup instance. What I think I should be doing is taking a snapshot of the instances (automatically) and persisting them on S3 so I have a running 7-day collection of daily backups. There is a question asking the same thing here; however, the answers don't go into depth. So the closest answer seems to be: use a cron job to snapshot the instance. So do I run the cron job on the instance itself, or do I have a micro instance run these snapshots? Could I get an example script or command for, say, a Linux flavour? What software must I have installed to get this to run? Thanks.
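
    A sketch of the cron approach (it assumes the AWS command line tools are installed and have credentials, and vol-0abc123 is a placeholder for the EBS volume behind the instance; EBS stores the snapshots in S3 for you). It can run from the instance itself or from a small admin instance - whichever box you are happy to hold credentials on.
      #!/bin/sh
      # /etc/cron.daily/ebs-snapshot - one snapshot per day, description carries the date
      aws ec2 create-snapshot --volume-id vol-0abc123 \
          --description "daily backup $(date +%F)"
      # pruning to a rolling 7 days would be a follow-up describe-snapshots / delete-snapshot step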

    Read the article

  • TCP Zero Window with no corresponding Window Update

    - by Gandalf
    I am trying to debug a network issue and am using Wireshark and tcpdump to grab packets from my server. I have one client application that is grabbing all my available connections and then holding them, trying to send A LOT of data and essentially causing an unintentional DOS attack. While debugging I notice that I see my server sending "Window Closed" and "Zero Window" TCP packets - but never sending any "Window Update" packets. I am guessing this is why the client never lets go of the connections (it still has more data to send and is waiting). Has anyone ever seen this type of behavior before? Let's not get into the reasons why I haven't set up an iptables rule to limit concurrent connections (yeah I know). I also recently changed the MTU from 1500 to 9000 - could this have such a negative effect? (Linux) Thanks.
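
    A note on reading the capture (the filters are standard Wireshark analysis flags; the interpretation of this particular trace is an assumption): a Zero Window with no later Window Update means the receiving application on the server never drains its socket buffer, so the kernel never re-opens the window and the peer just sits there probing, holding the connection. Checking whether the server's receive queues really are full usually confirms it.
      # Wireshark display filter to isolate the window behaviour:
      #   tcp.analysis.zero_window || tcp.analysis.window_update
      # on the server, a growing Recv-Q means the application is not reading its sockets
      ss -tn
      netstat -tn | awk '$2 > 0'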

    Read the article

  • Mysql install and remove issues

    - by Matt
    I installed MySQL on Ubuntu Server and I don't know what went wrong... it didn't set up a MySQL root user, so I tried to uninstall and start over, and now I can't uninstall. I tried this:
      apt-get remove php5-mysql
      apt-get remove mysql-server mysql-client
      apt-get autoremove
    But when I do ps aux | grep mysql:
      root   6066  0.0 0.0  1772   540 pts/1 S  03:21 0:00 /bin/sh /usr/bin/mysqld_safe
      mysql  7065  0.0 0.6 58936 11900 pts/1 Sl 03:33 0:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
      root   7066  0.0 0.0  2956   688 pts/1 S  03:33 0:00 logger -t mysqld -p daemon.error
      root  22804  0.0 0.0  3056   780 pts/1 R+ 04:14 0:00 grep mysql
    So I killed the processes and then tried to reinstall like this:
      apt-get -f install
      sudo apt-get install mysql-server mysql-client
      sudo mysqladmin -u root -h localhost password 'root'
    But I get this:
      mysqladmin: connect to server at 'localhost' failed
      error: 'Access denied for user 'root'@'localhost' (using password: NO)'
    I'm confused... I keep installing and uninstalling MySQL and get the same result. Any ideas?
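
    A sketch of the usual clean-slate sequence (package names are typical for Ubuntu releases of that era; purging the data directory destroys any existing databases, so only do it if nothing needs to be kept): "remove" leaves the config and the debconf-stored root password behind, which is why each reinstall comes back in the same broken state, whereas a full purge lets the next install prompt for a fresh root password.
      sudo /etc/init.d/mysql stop
      dpkg -l | grep mysql        # purge every mysql package listed here, for example:
      sudo apt-get purge mysql-server mysql-server-5.1 mysql-client mysql-common php5-mysql
      sudo apt-get autoremove --purge
      sudo rm -rf /var/lib/mysql /etc/mysql
      sudo apt-get install mysql-server mysql-client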

    Read the article

  • (svn+ssh) getting bash to load my PATH over SSH

    - by Eli Bendersky
    This problem comes up while trying to make svnserve (the Subversion server) available on a server through SSH. I compiled SVN and installed it in $HOME/bin. Local access to it (not through SSH) works fine. Connections over svn+ssh fail with:
      bash: svnserve: command not found
    Debugging this, I've found that:
      ssh user@server "which svnserve"
    says:
      which: no svnserve in (/usr/bin:/bin)
    This is strange, because I've updated the path to $HOME/bin in my .bashrc, and also added it in ~/.ssh/environment. However, it seems like SSH doesn't read it - although when I run:
      ssh user@server "echo $PATH"
    it does print my updated path! What's going on here? How can I make SSH find my svnserve? Thanks in advance
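
    An explanation and two workarounds (the repository path below is a placeholder): ssh user@server "echo $PATH" is misleading because the double quotes let the local shell expand $PATH before ssh ever runs, so it only proves what the local path is. Non-interactive ssh commands get a minimal PATH, and ~/.ssh/environment is ignored unless the server sets PermitUserEnvironment yes, which is why svnserve isn't found.
      # single quotes make the remote shell do the expansion - this shows the real remote PATH
      ssh user@server 'echo $PATH; which svnserve'
      # workaround without touching sshd config: pin the command in ~/.ssh/authorized_keys
      # on the server (key-based auth assumed), so the full path is always used:
      #   command="/home/user/bin/svnserve -t -r /path/to/repos" ssh-rsa AAAA... user@host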

    Read the article

  • Application losing Printer within Terminal Services for remote users

    - by Richard
    Question: What I need is a permanent link to a printer that is normally only accessible through Terminal Services printer redirection, so that Sage Line 50 layouts can see that printer persistently, even after users have disconnected and reconnected to the Terminal Services session. Although the printer is accessible each time a user connects to the Sage server via Terminal Services, it is given a different session number, and therefore the Sage layout sees it as a different printer.
    History behind the question:
      - Users connect via Terminal Services to a Sage server on a different site.
      - Sage Line 50 v15 runs on that server.
      - Users want to print invoices (Sage layouts) locally.
      - The Sage server cannot see the users' local printers; to get around this, the users rely on the printer redirection feature of Terminal Services.
      - The individual reports can be edited to point to a specific printer by default.
    This means the user just has to select an invoice and click print, then select the layout/report wanted, and it automatically prints that invoice to the default printer specified. The problem occurs because the layouts are edited to point to the user's local printer "Ricoh 1018d (session#)" - note the "(session#)", as this is the user's local printer being redirected through the Terminal Services session. Users are able to print using the Sage layouts once the default printer is set up within the layout and saved, but as soon as a user disconnects from the Terminal Services session, reconnects in the morning and goes to print, the layout has lost the connection to that printer. I understand why it fails: the printer exists on a per-session basis, so the layout cannot hold on to a connection from a previous session. Thanks in advance for any assistance...

    Read the article

  • ESXI guests not using available CPU resources

    - by Alan M
    I have a VMware ESXi 5.0.0 (it's a bit old, I know) host with three guest VMs on it. For reasons unknown, the guests will not use much of the available CPU resources. I have all three guests in a single pool, with the hosts all configured to use the same amount of resource shares, so they're basically 33% each. The three guests are basically identically configured as far as their VM resources go. So the problem is, even when the guests are performing what should be very 'busy' activities, such as at bootup, the actual host CPU consumed is something tiny, like 33MHz, when seen via the vSphere console's "Virtual Machines" tab when viewing properties for the pool. And of course, the performance of the guest VMs is terrible. The host has plenty of CPU to spare. I've tried tinkering with individual guest VM resource settings, cranking up the reservation, etc. No matter. The guests just refuse to make use of the abundant CPU available to them, and insist on using a sliver of the available resources. Any suggestions?
    Update after reading various comments below: Per the suggestions below, I did remove the guests from the application pool; this didn't make any difference. I do understand that the guests are not going to consume resources they do not need. I have tried to do a remote perfmon on the guest which is experiencing long boot times, but I cannot connect to the guest remotely with perfmon (the guest is a W2K8 R2 server). Host graphs for CPU, Mem, Disk are basically flatlining; very little demand. The same is true for the guest stats; while the guest itself seems to be crawling, the guest resource graphing shows very little activity across CPU, Mem, Disk. The host is a Dell PowerEdge 2900 with 2 physical CPUs and 20GB RAM (it's a test/dev environment using surplus gear).
      - Guest1 has: VM ver. 7, 2 vCPU, 4GB RAM, 140GB storage which lives on a RAID-5 array on the host.
      - Guest2 has: VM ver. 7, 2 vCPU, 4GB RAM, 140GB storage which lives on a RAID-5 array on the host.
      - Guest3 has: VM ver. 7, 1 vCPU, 2GB RAM, 2TB storage which lives on a RAID-5 iSCSI NAS box.
    Perhaps I am making a false assumption that if a guest has a demand for CPU (e.g. Windows Task Manager shows 100% CPU), the host would supply the guest with more CPU (mem, disk) on demand.
    Another update: After checking the stats, it would appear that the host is indeed not busy at all, and neither is the guest. I believe I have a good idea on the issue, though: a messed-up VMware Tools install. The guest has VMware Tools on it, but the host says it does not. VMware Tools refuses to uninstall, refuses to be upgraded, refuses to be recognized. While I cannot say with authority, this would appear to be something worth investigating. I do not know the origin of the guest itself, nor the specifics of the original VMware Tools install. Following various bits of googling, I did come up with a few suggestions that went nowhere. To that end, I was going to delete this question, but was prompted not to do so since so many folks answered. My suspicion right now is: the problem truly is the guest; the guest is not making a demand on the host, and as a natural result, the host is treating the guest accordingly.
    My final update: I am 99% certain the guest VM had something fundamentally wrong with it re VMware Tools. I created a clone of a different VM with a near-identical OS config, but a properly working install of VMware Tools. That guest runs just great, and takes up its allotment of resources when it needs to; e.g. it eats up about 850MHz of CPU during startup, then ticks down to idle once the guest OS is stable.
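
    A note for anyone landing here with the same symptom (vmid values come from the first command; the interpretation assumes the poster's final diagnosis of a broken VMware Tools install is right): the host only hands a guest as many cycles as the guest actually schedules, so a guest that crawls while showing near-zero host-side CPU usually has an in-guest problem rather than a resource-pool problem. From the ESXi shell, the tools state and the real CPU demand can be checked directly.
      # list VMs and their ids, then ask what the host believes about the guest's tools
      vim-cmd vmsvc/getallvms
      vim-cmd vmsvc/get.guest <vmid> | grep -i tools
      # esxtop, 'c' view: %USED shows what the guest consumes, %RDY shows whether it is waiting on the host
      esxtop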

    Read the article

  • I am using Apache mod_rewrite to redirect HTTP to HTTPS but now cannot connect to localhost/phpmyadmin

    - by user1787331
    Here is my /etc/apache2/sites-enabled/000-default:
      <VirtualHost *:80>
          ServerAdmin [email protected]
          RewriteEngine On
          RewriteCond %{HTTPS} off
          RewriteRule (.*) https://mysite.com
          DocumentRoot /var/www/http
          <Directory />
              Options None
              AllowOverride None
          </Directory>
          <Directory /var/www/http>
              Options -Indexes -FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>
    Not sure how to fix this. Any thoughts?
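
    A sketch of the likely cause and one fix (assuming phpMyAdmin is meant to stay reachable over plain HTTP on localhost): the rule rewrites every non-HTTPS request to https://mysite.com with no path, so http://localhost/phpmyadmin gets bounced to mysite.com's front page. Excluding that path and preserving the original host and URI keeps the redirect working for everything else.
      RewriteEngine On
      RewriteCond %{HTTPS} off
      RewriteCond %{REQUEST_URI} !^/phpmyadmin [NC]
      RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]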

    Read the article

  • FileZilla Server Configuration Problems

    - by LiamB
    I've set up FileZilla Server on a Windows 2008 machine, then created the user and password and added a shared folder which I set as the home directory. I then connect to the server from the client computer:
      Status: Connecting to {IP}
      Status: Connection established, waiting for welcome message...
      Response: 220-Welcome To {NAME} FTP
      Response: 220 {DOMAIN}
      Command: USER {USER}
      Response: 331 Password required for {USER}
      Command: PASS *********
      Response: 230 Logged on
      Status: Connected
      Status: Retrieving directory listing...
      Command: PWD
      Response: 257 "/" is current directory.
      Command: TYPE I
      Response: 200 Type set to I
      Command: PASV
      Response: 227 Entering Passive Mode ({}DATA)
      Command: MLSD
    The connection works fine; however, no remote directory is selected - it shows as "/" - and uploading any file fails. Any suggestions on how to debug this further?
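
    A pointer rather than a confirmed diagnosis (the port range and rule names below are arbitrary choices): the session stalls right after PASV/MLSD, which is the classic sign that the control connection works but the separate passive-mode data connection is being blocked, usually by Windows Firewall or a NAT device in front of the server. The usual fix is to pin FileZilla Server to a fixed passive port range in its settings and open that range.
      :: allow the FTP control port and the chosen passive data range through Windows Firewall
      netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
      netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=50000-51000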

    Read the article

  • Booting VM Directly from iSCSI (or FC) in Hyper-V R2

    - by tony roth
    I have a server that SAN-boots that I want to P2V. I have many options (disk2vhd, SCVMM, etc.), but I was thinking about cloning the LUN (FlexClone, NetApp) and presenting it to my Hyper-V R2 server. Within Hyper-V Manager I would create a new disk and have it copy the cloned LUN to a VHD file, then do the bcdedit\bootsect stuff to it. Should work, right? I'm also curious whether anybody is booting VHDs that sit on bootable LUNs. I've booted native VHDs just fine; I was just curious about running them off a bootable LUN. I think this has quite a few advantages, like instant P2V, etc. Any thoughts on this? Hmm, dang - as I was typing this I realized that I should not use the Hyper-V Manager new-disk copy routine, I should just disk2vhd the mounted LUN. This has the advantage that it should be a lot faster!! I've since discovered that disk2vhd may be flaky - it crashed the first time I ran it! Thanks
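
    For reference, a sketch of what the "bcdedit\bootsect stuff" usually involves when booting a machine natively from a VHD (the GUID and VHD path are placeholders; this is the documented native-VHD boot pattern, not something specific to the cloned-LUN idea above):
      :: create a boot entry that points at the VHD, then reboot into it
      bcdedit /copy {current} /d "P2V VHD boot"
      bcdedit /set {GUID-from-previous-step} device vhd=[C:]\images\server.vhd
      bcdedit /set {GUID-from-previous-step} osdevice vhd=[C:]\images\server.vhd
      bcdedit /set {GUID-from-previous-step} detecthal on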

    Read the article

  • Grant Sharepoint Access to all employees

    - by Satish
    What's the easiest way to grant all the employees of our company access to the SharePoint portal? There are some general sites to which all employees have read access. Do I have to create an AD group for all employees and add it to the site, or is there a better way to manage this?

    Read the article
