Search Results



  • Corosync - stopping the service crashes the server

    - by Antipop
    I am trying to set up a test cluster on a Xen server with 2 paravirtualized CentOS 5.4 machines. I am using Pacemaker + Corosync, following the instructions found at http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf and other sites. When I try to manually stop the corosync service, about 80% of the time the whole VM locks up with the message "Waiting for corosync services to unload" and I am forced to shut the machine down manually. The remaining 20% of the time, the VM keeps responding and adds dots to the above message, but it never actually stops the service. There aren't many resources on the internet about this particular error. Any ideas? Thanks in advance.
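
    Not an answer from the thread, but worth ruling out: when Pacemaker runs on top of Corosync, the cluster manager generally has to be shut down before Corosync can unload its services, and skipping that step can hang exactly like this. A minimal sketch, assuming the stock CentOS 5 init scripts:

        # stop the cluster manager first, then the messaging layer
        service pacemaker stop    # omit if pacemaker runs inside corosync as a plugin (ver: 0)
        service corosync stop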

    Read the article

  • Configure New Server for .htaccess

    - by Phil T
    I have a new LAMP CentOS 5 server I am setting up, and I'm trying to copy the configuration from another web server I have. I am stuck with what I think is a mod_rewrite problem. If I go to http://old-server.com/any_page_name.php it correctly routes through some handling code in index.php and shows me a graceful "Page Cannot Be Displayed" message. But if I go to http://new-server.com/any_page_name.php I get an ugly Apache 404 Not Found error message. I looked in both httpd.conf files and they both have only one reference to mod_rewrite:

        LoadModule rewrite_module modules/mod_rewrite.so

    So it seems like that should be fine. At the bottom of httpd.conf I have this:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/html
            ServerName new-server.com
            ErrorLog logs/new-server.com-error_log
            CustomLog logs/new-server.com-access_log common
        </VirtualHost>

    Then in the root of /var/www/html I have the exact same .htaccess file that looks like this:

        RewriteEngine on
        Options +FollowSymlinks
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . index.php [L]
        ErrorDocument 404 /page-unavailable/
        <files ~ "\.tpl$">
            order deny,allow
            allow from none
            deny from all
        </files>

    So I don't see why the page load at old-server.com works fine while new-server.com doesn't route through index.php like I want it to. Thanks.
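
    A plain Apache 404 on a missing file usually means the .htaccess rules were never read at all, and the most common cause is that httpd.conf doesn't permit overrides for the document root (the CentOS default is AllowOverride None). A hedged guess at the fix; the directives are standard Apache, but whether this matches the old server's config is an assumption worth checking by comparing the two httpd.conf files:

        <Directory /var/www/html>
            # AllowOverride None makes Apache ignore .htaccess entirely
            AllowOverride All
        </Directory>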

    Read the article

  • Wrap app with dynamic libraries into one large static app

    - by progo
    I have an old program that depends on older dynamic libraries. They tend to get upgraded easily with the distro's updates. I figured there would be a script using ldd that would gather the needed libs and create one bigger, statically linked application that wouldn't break so easily. If I could do this, a lot of older KDE libraries could be removed from my system, which would make my life easier. Thanks!
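
    A true static relink isn't possible without recompiling from source, but a common workaround achieves the same insulation: bundle the shared libraries ldd reports next to the binary and point the loader at them. A minimal sketch, with ./myapp as a hypothetical program name:

        #!/bin/bash
        # bundle.sh: copy every shared library ldd resolves for ./myapp into ./bundle/lib
        mkdir -p bundle/lib
        cp myapp bundle/
        ldd myapp | awk '/=> \// {print $3}' | while read -r lib; do
            cp -v "$lib" bundle/lib/
        done
        # wrapper that runs the app against the bundled copies, not the system libraries
        cat > bundle/run.sh <<'EOF'
        #!/bin/bash
        cd "$(dirname "$0")"
        LD_LIBRARY_PATH="$PWD/lib" exec ./myapp "$@"
        EOF
        chmod +x bundle/run.sh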

    Read the article

  • Why does deleting from the command line take significantly less time than from a GUI?

    - by Jordan Plahn
    So this is probably the dumbest question you'll read today, but it's something I just wondered about as I was deleting a dozen or so images from my computer. With a quick rm -rf command on the directory's contents, all the images were gone in a snap. When I drag the same dozen or so images to a trash can/recycle bin, it sometimes takes 10 seconds or more. Now I'm sure some of it comes from the overhead of the GUI and such, and some of it may be the fact that the file still "exists" in some form if it's put into the recycle bin, but is there anything else that accounts for such a huge time disparity? Are "rm" and "delete" just such fundamentally different commands that I'm trying to compare apples and oranges? Enlighten me, please!
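
    Part of the answer is that a GUI "delete" is not a delete at all. Under the freedesktop.org trash spec (which GNOME and KDE follow), trashing a file is roughly a move plus a metadata write, sketched here with a placeholder file and date:

        # roughly what the desktop does when you trash ~/photo.jpg
        mv ~/photo.jpg ~/.local/share/Trash/files/photo.jpg
        cat > ~/.local/share/Trash/info/photo.jpg.trashinfo <<EOF
        [Trash Info]
        Path=/home/user/photo.jpg
        DeletionDate=2012-01-01T12:00:00
        EOF

    rm, by contrast, simply unlinks each directory entry. And when the files and the trash directory sit on different filesystems, that "move" becomes a full copy, which alone can explain the gap.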

    Read the article

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently, I have all uploaded files (flash video, images, etc.) inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k SCSI drive). I have another drive with much more capacity, but spinning at 10k rpm (so still fast, but not as good for random reads/writes as the main drive). The entire drive is mounted at /backup; right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
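
    For what it's worth: Apache and PHP resolve symlinks at the filesystem layer, so URLs like /files/filename.jpg keep working as long as Options FollowSymLinks (or SymLinksIfOwnerMatch) is enabled for the directory, which most default configs allow. A minimal sketch of the move, using the paths from the question:

        mv /home/account/public_html/files /backup/files
        ln -s /backup/files /home/account/public_html/files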

    Read the article

  • How do I map a network drive in Ubuntu? I want to save my Firefox downloads directly in the mapped network drive

    - by NJTechie
    I work in an environment where files are exchanged over email and then processed into databases. In Windows, mapping a network drive and saving Firefox/Chrome downloads directly to a folder on that drive is a breeze. How can I achieve the same in Ubuntu? The SFTP'ed drive/directory doesn't show up as an option in Firefox's Downloads settings. Thanks in advance!
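
    One approach (an assumption, not something the poster has tried): GVFS mounts made from Nautilus live under a virtual path that many applications can't see, but an sshfs mount is an ordinary directory that Firefox can save into. A minimal sketch, with user, host, and remote path as hypothetical placeholders:

        sudo apt-get install sshfs
        mkdir -p ~/netdrive
        sshfs user@fileserver:/shared/incoming ~/netdrive
        # Firefox > Preferences > Downloads can now point at ~/netdrive
        fusermount -u ~/netdrive    # unmount when done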

    Read the article

  • Changed array composition, mdadm --detail still shows the old array size

    - by Prody
    I have a machine with 8 disks. I installed it with my hoster's install automation (it's OVH; I don't have physical access to it). The machine installed correctly, but it created an array that I wanted to change: a RAID 5 array across 5 of the 8 disks, which I've changed to RAID 10 across all 8. I did this by first --stopping the old array and then --creating the new one. It warned me that a previous array was there, but I chose to continue. So it created the array and spent about 10 hours syncing, and now that it's ready I get this strange behavior: when I print the partition table with fdisk I see the correct size, but when I run mdadm --detail on it I see the old array's size, even though I get the new composition and level. When I try to pvcreate on it, I get the old size again for some reason. Did I have to do something else? Did I miss something?
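
    A hedged guess: stale metadata from the old array (and any old LVM labels on it) can survive a --create and confuse tools that read it. The usual cure is to tear the array down and wipe the member superblocks before re-creating; since the new array is freshly built and empty, redoing it is cheap. A sketch with /dev/md0 and /dev/sd[a-h] as hypothetical device names (this destroys everything on the array):

        mdadm --stop /dev/md0
        # wipe the old RAID metadata from every member disk
        for d in /dev/sd[a-h]; do mdadm --zero-superblock "$d"; done
        mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[a-h]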

    Read the article

  • Apache shuts down from time to time

    - by Dugi
    I'm having trouble with my VPS: it keeps shutting Apache down at least twice a day. The server is running CentOS 6 with the latest Apache. By "shutting down" I mean I have to go into SSH and run this command to bring it up again:

        /sbin/service httpd start

    I'm not very good with servers and my host doesn't seem to have good customer service. Any help would be appreciated, as these unexpected downtimes really kill one's mood.
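
    A first step rather than a fix: Apache rarely dies without leaving a trace, so the error log, and the kernel log in case the OOM killer is involved, are the places to look. The log paths below are CentOS defaults, an assumption about this VPS:

        tail -n 100 /var/log/httpd/error_log
        grep -i 'killed process' /var/log/messages    # evidence of the OOM killer

    As a stopgap while the cause is found, a cron watchdog in root's crontab can restart httpd whenever it's down:

        */5 * * * * /sbin/service httpd status >/dev/null 2>&1 || /sbin/service httpd start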

    Read the article

  • Automating MySQL configuration with kickstart

    - by Nimmy Lebby
    I've been testing deployment for a website with some virtual servers. I have most of my deployment steps done via kickstart file (package installation and user creation). However, for MySQL I have to:

    1. Run mysql_secure_installation (sets up the root password, deletes anonymous users, disallows remote root login, removes the test databases).
    2. Create the website's databases and the database user.

    I'm not sure if this is possible in kickstart, especially the prompts in mysql_secure_installation. Perhaps someone has some suggestions or examples?
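
    mysql_secure_installation is just a wrapper around a handful of SQL statements, so the prompts can be skipped entirely by issuing the equivalent SQL in the kickstart %post section. A sketch assuming MySQL 5.x, that mysqld can be started inside %post, and placeholder database, user, and password names:

        %post
        service mysqld start
        mysql -u root <<'SQL'
        -- what mysql_secure_installation does, minus the prompts
        UPDATE mysql.user SET Password=PASSWORD('CHANGEME') WHERE User='root';
        DELETE FROM mysql.user WHERE User='';
        DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost','127.0.0.1','::1');
        DROP DATABASE IF EXISTS test;
        -- site-specific setup
        CREATE DATABASE mysite;
        GRANT ALL PRIVILEGES ON mysite.* TO 'siteuser'@'localhost' IDENTIFIED BY 'CHANGEME';
        FLUSH PRIVILEGES;
        SQL
        %end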

    Read the article

  • Using sed to Download ComboFix automatically

    - by user901398
    I'm trying to write a shell script to grab the dynamic URL where ComboFix is located at BleepingComputer.com/download/combofix. However, for some reason I can't seem to get my regex to match the "click here" download link that is offered if the automatic download doesn't work. I used a regex tester and it said I matched the link, but when I execute the script the result is empty. Here's my entire script:

        #!/bin/bash
        # Download latest ComboFix from BleepingComputer
        wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv
        downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
        echo "DL Page: $downloadpage"
        secondpage="$downloadpage"
        wget -O Download.html $secondpage -nv
        file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
        echo "File: $file"
        wget -O "ComboFix.exe" "$file" -nv
        rm Listing.html
        rm Download.html
        mkdir Tools
        mv "ComboFix.exe" "Tools/ComboFix.exe" -f

    The first two downloads work, and I end up with http://www.bleepingcomputer.com/download/combofix/dl/12/, but the final sed fails to give me the download link. The code it's supposed to match is:

        <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
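
    One likely culprit (a guess; it depends on the page's actual HTML): sed works line by line, so the ^.* ... .*$ anchors only match if the whole <a> tag sits on one line exactly as the pattern expects; extra attributes or a line break anywhere in the tag make the match fail even though a regex tester accepts the isolated snippet. A looser extraction with grep -o sidesteps that:

        # pull the first ComboFix.exe URL out of the page, regardless of surrounding markup
        file=$(grep -o 'http://download\.bleepingcomputer\.com/dl/[^"]*ComboFix\.exe' Download.html | head -n 1)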

    Read the article

  • Do background processes get a SIGHUP when logging off?

    - by Massimo
    This is a followup to this question. I've run some more tests; it really doesn't matter whether this is done at the physical console or via SSH, and it doesn't happen only with scp; I also tested it with cat /dev/zero > /dev/null. The behaviour is exactly the same:

    1. Start a process in the background using & (or put it in the background after it's started using CTRL-Z and bg); this is done without nohup.
    2. Log off.
    3. Log on again.

    The process is still there, running happily, and is now a direct child of init. I can confirm both scp and cat quit immediately if sent a SIGHUP; I tested this using kill -HUP. So it really looks like SIGHUP is not sent upon logoff, at least to background processes (I can't test with a foreground one for obvious reasons). This happened to me initially with the service console of VMware ESX 3.5 (which is based on RedHat), but I was able to replicate it exactly on CentOS 5.4. The question is, again: shouldn't a SIGHUP be sent to processes, even if they're running in the background, upon logging off? Why is this not happening?

    Edit: I checked with strace, as per Kyle's answer. As I was expecting, the process doesn't get any signal when logging off from the shell where it was launched. This happens both when using the server's console and via SSH.
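
    A detail that may explain this (standard bash behavior; whether it applies to the poster's exact shell setup is an assumption): bash only sends SIGHUP to its jobs when an interactive login shell exits if the huponexit option is set, and that option is off by default on most distributions. Quick check and toggle:

        shopt huponexit       # show the current setting (usually "off")
        shopt -s huponexit    # make bash SIGHUP its jobs when the login shell exits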

    Read the article

  • Sniff packets using tcpdump

    - by denisk
    I have a completely noob question. I want to see all packets that come to my computer from a particular site (google.com), so I start tcpdump:

        sudo tcpdump -i eth0 host google.com

    Then I enter google.com in a browser and hit Enter, and nothing gets captured. I can't figure out why this happens. What am I doing wrong?

    Edit: it turned out I was listening on the wrong interface. I changed eth0 to any and it worked; it was ppp1 that needed listening on. Thanks for your answers!
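
    For reference, capturing on all interfaces avoids guessing, and -n stops tcpdump from stalling on reverse DNS lookups. One caveat worth knowing: tcpdump resolves "host google.com" to a fixed set of addresses when it starts, so a browser that lands on a different Google IP can still produce nothing.

        sudo tcpdump -i any -n host google.com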

    Read the article

  • Prefork or Worker MPM for an Amazon X-Large server?

    - by Netismine
    I'm trying to determine whether the prefork or the worker Apache MPM would be better for the server I'm working on: an Amazon X-Large instance (15 GB memory, 8 EC2 Compute Units: 4 virtual cores with 2 EC2 Compute Units each) that will run a Magento website with about 50 concurrent users. The site serves a lot of images, about 45 requests per page. Images sometimes hang, so it seems worker would be a better option? Thanks
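
    If worker is the choice (a reasonable call for many small static requests, with the caveat that mod_php is normally run under prefork, so worker usually means serving PHP via FastCGI instead), a starting configuration for a 15 GB box might look like the sketch below. The directives are standard Apache 2.2 worker settings; the numbers are assumptions to tune, not recommendations:

        <IfModule worker.c>
            StartServers          4
            ServerLimit          16
            ThreadsPerChild      25
            MaxClients          400    # ServerLimit * ThreadsPerChild
            MinSpareThreads      50
            MaxSpareThreads     150
            MaxRequestsPerChild   0
        </IfModule>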

    Read the article

  • Switch User in RedHat like XP

    - by rd42
    In our cluster of RedHat 4 & 5 machines, if someone locks the computer and walks away, nobody else can use it. Is there a feature in RedHat 5 (GNOME, KDE, etc.) that would allow switching users at the lock screen, so more than one person can be logged in? Thanks, rd42
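
    A pointer rather than a confirmed fix: on GNOME of that era, user switching is handled by GDM's "flexible server" mechanism, which starts a second greeter on a fresh virtual terminal while the first session stays locked. Whether the RHEL 5 lock-screen dialog exposes a "Switch user" button depends on the gnome-screensaver build, but the mechanism can be invoked directly:

        gdmflexiserver    # start a new login greeter on another virtual terminal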

    Read the article

  • Unknown receiver ESMTP

    - by Morteza Soltanabadiyan
    I found this in my server log:

        sm-mta[11410]: r9BKb6YY021119: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=2+07:24:18, xdelay=00:00:01, mailer=esmtp, pri=29911032, relay=mail1.mkuku.com. [58.22.50.83], dsn=4.0.0, stat=Deferred: Connection refused by mail1.mkuku.com.

    This message is repeated every 10-30 seconds with a different "to" address. What is this? Is my server being used to send spam?
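
    Two quick checks with standard sendmail tooling (which one turns up the source is an open question): the deferred queue shows how much of this mail is stacked up, and the ctladdr field already visible in the log identifies the local user or process that submitted each message.

        mailq | head -n 20                        # same as sendmail -bp: list the queued mail
        grep 'ctladdr=' /var/log/maillog | tail   # which local account is generating it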

    Read the article

  • Root SSH/SFTP Always 777

    - by Fluidbyte
    I have an Ubuntu server that I'm connecting to via SFTP (and also an SSHFS mount locally). When I move a file to the server via the mount, I need it to have permissions set to 777. I've added umask 000 to the .bashrc file at the advice of a friend, but it doesn't appear to be working. Basically I'm working completely in a restricted folder and need root to always leave the permissions open, whether I'm SSH'ed in or moving files to the server.
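
    Probably relevant: SFTP sessions (including SSHFS, which runs over SFTP) never read .bashrc, because no interactive shell is started. With OpenSSH 5.4 or newer, the SFTP server itself can apply a umask; this is a sketch of the sshd_config change, and the OpenSSH version on the server is an assumption:

        # /etc/ssh/sshd_config: use the in-process SFTP server with a permissive umask
        Subsystem sftp internal-sftp -u 0000

    Restart sshd afterwards for the change to take effect.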

    Read the article

  • What virtualization solution should I try next, after hitting problems with VMware Player in a dual-network setup?

    - by Alex R
    I have been using VMware Player 2.5 for a while (Ubuntu guest on Vista host, 32-bit). VMware had worked great until now, but I've hit a brick wall: due to a reorganization of my home network, the host machine now has to use a wireless connection to reach the Internet, while the printer, file server, and other important stuff are attached to a local gigabit hub. I have tried several tricks, such as editing the .vmx file and changing settings in vmnetcfg, but I'm still unable to get the virtual Ubuntu box to connect each of its two virtual NICs to a different network (I did get it to recognize two NICs, but both DHCP'd onto the gigabit LAN). So I'm ready to dump VMware for something with a little more low-level control of network settings. Virtualization is such a crowded space that I could spend months evaluating every product out there. I'm hoping for a shortcut: can anyone recommend the best VM for the situation described above? Thanks
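
    One candidate with exactly the control being asked for (a suggestion, not an endorsement from the thread): VirtualBox lets each virtual NIC be bridged to a specific host adapter from the command line. A sketch with hypothetical names; on a Windows host, the adapter names are the full display names from the network control panel rather than eth0/wlan0:

        # bridge NIC 1 to the wired adapter and NIC 2 to the wireless one
        VBoxManage modifyvm "UbuntuVM" --nic1 bridged --bridgeadapter1 eth0
        VBoxManage modifyvm "UbuntuVM" --nic2 bridged --bridgeadapter2 wlan0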

    Read the article

  • Is there an email client optimized for screen readers and accessibility?

    - by Adolfo Fitoria
    Hi. I'm currently working on a project to help visually impaired people. We're planning to use the Orca screen reader for GNOME. Everything is going great, but there is a problem with web email clients: the most popular ones (Gmail, Yahoo, Hotmail) are not optimized for screen readers. Is there some kind of simple email client optimized for this? It needs to be very simple and straightforward, and it needs to support multiple users too.

    Read the article

  • nginx virtual hosts are not working, all vhosts go to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest go to this first one. This is my first vhost, for host1:

        server {
            listen 80;
            server_name host1;
            access_log /var/log/nginx/host1.log;
            error_log /var/log/nginx/host1.error.log;

            location / {
                root /var/www/vhosts/host1/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    And the second one, for host2:

        server {
            listen 80;
            server_name host2;
            access_log /var/log/nginx/host2.log;
            error_log /var/log/nginx/host2.error.log;

            location / {
                root /var/www/vhosts/host2/;
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }

    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from repos. Any pointers?
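
    Context that may help (standard nginx behavior; whether it's the cause here is a guess): when the Host header matches no server_name, nginx serves the request with the default server for that port, which is the first server block it loaded. "Everything goes to host1" is therefore the classic symptom of a Host/server_name mismatch or of a config that was never reloaded. Three quick checks:

        nginx -t                                  # confirm both vhost files actually parse
        /etc/init.d/nginx reload                  # pick up the new server blocks (Ubuntu 10.10)
        curl -H 'Host: host2' http://127.0.0.1/   # test name-based routing independent of DNS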

    Read the article

  • Nginx Server Block Not Working? Other vhosts run fine, just this one doesn't

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for 5 or so sites. But I've just tried adding another and for some reason it's just not working. By not working I mean in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked. Hosts file (I have multiple subdomains):

        xxx.xxx.xx.xxx *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;

            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the file with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    I've emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
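
    One thing that stands out (a strong suspect, though unconfirmed): /etc/hosts does not support wildcards, so a line like "xxx.xxx.xx.xxx *.example.com" does nothing, and the ping/curl tests above prove little because they ran on the server itself, where name resolution can differ from the client's. Chrome's "could not connect" points at a client-side resolution failure; on the client machine, each name has to be listed explicitly:

        # /etc/hosts on the client (Chrome's machine); wildcards are not honored
        xxx.xxx.xx.xxx  subdomain.example.com www.example.com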

    Read the article

  • rsync without password: none of the Google (Server Fault) tutorials worked

    - by Jake Armstrong
    I need to use rsync for a daily backup operation, and in the past (on different servers) I managed to just use an RSA key, but now none of the Google (Server Fault) tutorials work at all: it keeps asking me for a password. I have Webmin and SSH/root access to both servers. My steps:

    1. Create a key on server 1.
    2. Send key.pub to server 2.
    3. Add key.pub to .ssh/authorized_keys.
    4. chmod 700 .ssh/authorized_keys
    5. Go back to server 1 and try rsync, and it keeps asking for a password.

    The rsync command:

        rsync -avz -e ssh file.txt root@server2:/root

    EDIT: well, I cleaned up everything and this time, instead of giving the key a custom name, I used the standard one on server1, sent the .pub to server2, and it worked like a charm. So the answer is that server1's ssh wasn't even using the right key.
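
    For the record, a custom-named key works too if ssh is told where to find it; on its own it only tries the default names like ~/.ssh/id_rsa. Either of these standard OpenSSH/rsync options does it, with backup_key as a hypothetical key name:

        # point rsync's ssh transport at the custom key
        rsync -avz -e 'ssh -i /root/.ssh/backup_key' file.txt root@server2:/root

        # or persist it in ~/.ssh/config on server1
        Host server2
            IdentityFile ~/.ssh/backup_key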

    Read the article

  • When does `cron.daily` run?

    - by warren
    When do the entries in cron.daily (and .weekly and .hourly) run, and is it configurable? I haven't found a definitive answer to this and am hoping there is one. I'm running RHEL 5 and CentOS 4, but answers for other distros/platforms would be great, too.
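
    On RHEL/CentOS of that vintage the schedule lives in /etc/crontab, where run-parts invokes each directory, so it is configurable by editing these lines. The times shown are the stock defaults from memory and worth verifying against the actual file:

        # /etc/crontab (RHEL/CentOS defaults)
        01 * * * * root run-parts /etc/cron.hourly
        02 4 * * * root run-parts /etc/cron.daily
        22 4 * * 0 root run-parts /etc/cron.weekly
        42 4 1 * * root run-parts /etc/cron.monthly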

    Read the article
