Search Results

Search found 33182 results on 1328 pages for 'linux port'.

Page 481/1328

  • Root SSH/SFTP Always 777

    - by Fluidbyte
    I have an Ubuntu server that I'm connecting to via SFTP (and also an SSHFS mount locally). When I move a file to the server via the mount I need it to have permissions set to 777. I've added umask 000 to the .bashrc file on the advice of a friend, and it doesn't appear to be working. Basically I'm working entirely within a restricted folder and need root to always leave the permissions open, whether I'm SSH'ed in or moving files to the server.
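
    One angle worth sketching: .bashrc is never read by SFTP or SSHFS sessions, so the umask has to live in sshd_config instead. This assumes forcing OpenSSH's in-process SFTP server is acceptable and that the OpenSSH version is 5.4 or newer (when the -u option appeared):

        # /etc/ssh/sshd_config (hypothetical excerpt)
        Subsystem sftp internal-sftp -u 0000
        # apply with: sudo service ssh restart
        # note: a umask can only take bits away from whatever mode the client asks
        # for, so plain files may still arrive without execute bits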

    Read the article

  • How do I install yum on Redhat Enterprise 4?

    - by Bob Cross
    For historical reasons, one of the machines that I manage has a Redhat Enterprise 4 boot disk (among others). Every now and then, we have to boot into RHEL4 to bring up some of the legacy software that we support and connect to. Since it's a fringe system, the Redhat support has long since lapsed and I can't convince myself that it would be worth paying just to get RPMs that I can go and get for myself. That said, the default RHEL tools are heavily biased against letting you do exactly that. I would like to install yum and use that as my package discovery and installation tool. So, is there an installation guide for integrating yum with an older RHEL 4 system?
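
    A rough sketch, assuming the archived CentOS 4 tree is an acceptable stand-in for RHEL 4 packages; the package names, versions and mirror layout below are illustrative and worth verifying first:

        # install yum and its Python dependencies by hand from a CentOS 4 mirror
        rpm -ivh sqlite-*.rpm python-sqlite-*.rpm python-elementtree-*.rpm \
                 python-urlgrabber-*.rpm yum-*.rpm

        # /etc/yum.repos.d/centos4-vault.repo (hypothetical; check the path on the mirror)
        [c4-vault-base]
        name=CentOS-4 vault base
        baseurl=http://vault.centos.org/4.9/os/$basearch/
        enabled=1
        gpgcheck=0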

    Read the article

  • My server keeps sending emails to [email protected]

    - by xtrimsky
    When I type mailq on my server, I get:
        BB523653A62    4025 Wed Jun 4 10:40:07  MAILER-DAEMON
            (delivery temporarily suspended: host p3smtpout.secureserver.net[208.109.80.54] refused to talk to me: 554 p3plsmtpout002.prod.phx3.secureserver.net : DED : AJeb1o0334uf1Y801 : DED : You've reached your daily relay quota - IP.ADDRESS)
            [email protected]
        B33AD653A4A    4025 Wed Jun 4 08:20:07  MAILER-DAEMON
            (delivery temporarily suspended: host p3smtpout.secureserver.net[208.109.80.54] refused to talk to me: 554 p3plsmtpout002.prod.phx3.secureserver.net : DED : AJeb1o0334uf1Y801 : DED : You've reached your daily relay quota - IP.ADDRESS)
            [email protected]
        B77DF653A63    4025 Wed Jun 4 10:50:07  MAILER-DAEMON
            (delivery temporarily suspended: host p3smtpout.secureserver.net[208.109.80.54] refused to talk to me: 554 p3plsmtpout001.prod.phx3.secureserver.net : DED : AJvF1o00L4uf1Y801 : DED : You've reached your daily relay quota - IP.ADDRESS)
            [email protected]
        B943C653A3C    4025 Wed Jun 4 06:40:07  MAILER-DAEMON
            (delivery temporarily suspended: host p3smtpout.secureserver.net[208.109.80.54] refused to talk to me: 554 p3plsmtpout001.prod.phx3.secureserver.net : DED : AKBv1o00P4uf1Y801 : DED : You've reached your daily relay quota - IP.ADDRESS)
            [email protected]
    (There are probably about 50 of these, and I've cleared the queue today.) Do you know where these could be coming from? Is it my server sending some logs to "hostmaster"? I've replaced my actual domain with "MYDOMAIN". How can I find what could be sending these emails? The server has recently been hacked, so I'm also a bit worried. Thanks!
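
    The queue-ID format looks like Postfix, so a small sketch under that assumption (the queue ID below is just one of the examples above, and the log may be /var/log/maillog rather than /var/log/mail.log):

        postcat -q BB523653A62                 # dump the full queued message, headers included
        grep BB523653A62 /var/log/mail.log     # find which client, script or user injected it
        # the Received: headers in the dump often point straight at the generating
        # script or local account, which matters if the box really was compromised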

    Read the article

  • Upgrade CentOS 5.2 to 5.3 -- but not 5.4

    - by Jeff Leyser
    Server currently running CentOS 5.2. Developers tell me they'd like the machine upgraded to CentOS 5.3, but not all the way to CentOS 5.4, as they haven't tested with 5.4 yet. I'm pretty sure a yum upgrade will put me at 5.4, as a yum check-update shows all sorts of 5.4 packages. So how do I move up to 5.3?
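
    One sketch, assuming vault.centos.org still carries the 5.3 tree (the regular mirrors only serve the newest 5.x point release, so the repos have to be pinned by hand; the exact paths are worth double-checking):

        # in /etc/yum.repos.d/CentOS-Base.repo, for the [base] and [updates] sections:
        #   comment out the mirrorlist= line and pin the baseurl instead
        baseurl=http://vault.centos.org/5.3/os/$basearch/        # [base]
        baseurl=http://vault.centos.org/5.3/updates/$basearch/   # [updates]
        # then
        yum clean all
        yum update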

    Read the article

  • nginx virtual hosts are not working, all vhosts goes to the default one

    - by Adirael
    Hello, I just did a clean install of nginx + php-fpm on a VPS running Ubuntu 10.10. nginx is serving and PHP is working fine, but I'm not able to add vhosts to it. Well, I can add them, but only one works; the rest all go to this first one. This is my first vhost, for host1:
        server {
            listen 80;
            server_name host1;
            access_log /var/log/nginx/host1.log;
            error_log /var/log/nginx/host1.error.log;
            location / {
                root /var/www/vhosts/host1/;
                index index.html index.htm index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host1/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }
    And the second one, for host2:
        server {
            listen 80;
            server_name host2;
            access_log /var/log/nginx/host2.log;
            error_log /var/log/nginx/host2.error.log;
            location / {
                root /var/www/vhosts/host2/;
                index index.html index.htm index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_param SCRIPT_FILENAME /var/www/vhosts/host2/$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_index index.php;
            }
        }
    The problem is, when I go to http://host1 everything is fine, but http://host2 just shows host1! I don't have Apache installed and everything comes from repos. Any pointers?
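
    A quick way to check whether the second server block is being loaded at all (common culprits on Ubuntu are a vhost file that was never symlinked into /etc/nginx/sites-enabled, or a missing reload), sketched here:

        sudo nginx -t                              # does the parsed config include both vhosts?
        sudo /etc/init.d/nginx reload              # config changes only take effect after a reload
        curl -H 'Host: host2' http://127.0.0.1/    # test name-based matching without touching DNS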

    Read the article

  • GRUB2 not detecting OS on raid partitions

    - by sleeves
    I have recently added a drive to a system and have successfully RAID'ed (RAID-1) the partitions, with the exception of the boot partition. I have it ready and mirrored, but can't get GRUB2 (update-grub) to find it.
    System: Ubuntu 11.04
    RAID metadata: 1.2
    If I run update-grub, it finds the kernel images on the /dev/sda2 partition (the present root) but not the images on /dev/md127. /dev/md127 is composed of "missing" and "/dev/sdb2". fdisk on /dev/sdb confirms that sdb2 is of type fd (raid autodetect) and is also flagged bootable. I have two things I want to do: make the boot.cfg on /dev/sdb2 have a menu option with the root set to /dev/md127, and install GRUB onto /dev/md127 so the actual boot.cfg from there is the one being used. Thanks!
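
    A rough sketch of the usual sequence, assuming the array is already assembled and that the installed GRUB2 build includes the mdraid1x module (needed to read 1.2-format metadata):

        # make sure the array assembles at every boot, then rebuild the initramfs
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
        sudo update-initramfs -u
        # put GRUB2 in the MBR of both disks so either one can boot alone
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub   # re-scan; a custom md127 menu entry can go in /etc/grub.d/40_custom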

    Read the article

  • How can a driver change the kernel page table?

    - by Naruto
    I am encountering an issue with kernel memory. When my driver finishes running, other processes in the kernel fail to run; for example, if I run ls, the command crashes the kernel with a "Corrupted page table" error at a specific address. I do not know whether the page table of my driver is related to the page tables of other processes. How can my driver change the page table of another process? And how does a driver relate to the kernel page table? As I understand it, when the driver runs, execution switches to kernel context. The kernel has its own page table and the driver has its own one. What is the relation among the kernel page table, my driver's page table, and the page tables of the other processes when they run in kernel context?

    Read the article

  • How do I start mysqld with options

    - by xiankai
    I need to start up mysqld with command-line options, as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively I have tried starting it as a daemon, sudo mysqld_safe --skip-grant-tables & But it seems to terminate too soon:
        131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'.
        131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended
    My last resort seems to be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?
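
    If the init script won't pass arguments through, a temporary my.cnf entry has the same effect; a sketch (remove it again as soon as you're done, since it disables all authentication):

        # /etc/my.cnf
        [mysqld]
        skip-grant-tables
        # then: sudo service mysqld restart
        # the reason mysqld_safe "ended" so quickly is usually spelled out in
        # /var/lib/mysql/vagrant.example.com.err, which is worth reading first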

    Read the article

  • Looking for advice on using dd to back up a dual boot laptop.

    - by AvatarOfChronos
    My question boils down to this. If I do "dd if=/dev/sda of=usbdrive" can anybody confirm that this will get everything, including the MBR, partition information and all four partitions, and create a drive that I can swap with the failing internal drive without losing anything? If this is done while the computer is running, will it still copy everything? At this point I'm afraid to shut down the computer for fear of it never starting again. Secondly, how tolerant is dd of failing drives? Has anybody used it to recover a half-dead drive before who can share any potential pitfalls? Did it get the data OK, or is this going to be a hope-for-the-best kind of situation? And lastly, if the USB drive is larger than the failing internal drive, I'll still be able to expand the partitions later so I'm not losing space? This last part seems silly to ask, but with my current streak of bad luck I'll end up overwriting some magic bit and forever turning a 640GB HDD into a 500GB HDD. Also, if anybody has a better solution to create a complete clone that gets everything, I'm all for hearing about it. PostScript: I had been making periodic backups, however when whatever miasma that killed the laptop struck, it also got the NAS :( Post PostScript: both devices were on a UPS system.
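
    For what it's worth, a sketch of the two usual approaches; /dev/sdX stands in for whatever the USB disk turns out to be, and note that imaging a live, mounted system gives an inconsistent copy, so booting a live CD/USB first is much safer:

        # plain dd: copies MBR, partition table and all partitions; carry on past read
        # errors and pad the unreadable blocks with zeros
        sudo dd if=/dev/sda of=/dev/sdX bs=64K conv=noerror,sync
        # GNU ddrescue is usually a better fit for a half-dead drive: it saves the
        # difficult areas for last and keeps a map file so the copy can be resumed
        sudo ddrescue -f -n /dev/sda /dev/sdX /root/rescue.map   # map file must not live on either disk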

    Read the article

  • MySQL writing to net

    - by seengee
    I have a server that has been running at a high CPU load due to MySQL activity. When I run the command mysqladmin pr I often see a few queries with the state "Writing to net". I had a look around and couldn't find much out about this, other than something I read somewhere saying it shouldn't be expected in usual MySQL activity. Any ideas what this could mean? Running MySQL 5.0.91-community on CentOS 4.8
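
    "Writing to net" means the server is busy sending result rows back to a client, so it tends to point at very large result sets or a slow/saturated client connection. A small sketch for seeing the full statements stuck in that state:

        mysql -e 'SHOW FULL PROCESSLIST\G'      # untruncated statements, with the State column
        mysqladmin --verbose processlist        # the same "mysqladmin pr" view, untruncated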

    Read the article

  • using "touch" to create directories?

    - by user62367
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

    Read the article

  • using "touch" to create directories?

    - by user66732
    1) in the "A" directory: find . -type f a.txt 2) in the "B" directory: cat a.txt | while read FILENAMES; do touch "$FILENAMES"; done 3) Result: the 2) "creates the files" [i mean only with the same filename, but with 0 Byte size] ok. But if there are subdirs in the "A" directory, then the 2) can't create the files in the subdir, because there are no directories in it. Question: is there a way, that "touch" can create directories?

    Read the article

  • yum update with shared cache

    - by Sammitch
    We've got a big batch of RHEL6 machines that are due for patching, and for some reason the process here does not involve a local repo. I'm new here, I've asked why, ["it just didn't work"] and I don't have enough time to make it work before the window that's already scheduled. So the usual method is to install yum-downloadonly and run yum update --downloadonly --downloaddir=/mnt/cifs_share and then yum update /mnt/cifs_share/*.rpm which just does not look right to me since not all of these machines have the same set of installed packages. The method I tried today was mounting the share to /var/cache/yum/x86_64/6Server/rhel-x86_64-server-6/packages/ which worked, but then yum automatically deleted everything once it finished. I've looked over the yum man page, but I don't see any flag I can feed it to stop it from deleting everything, nor a flag like up2date's --tmpdir=/mnt/cifs_share. Can anyone out there help me kludge this together until I can get a local repository working?
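
    A sketch of the knob that stops yum from purging its cache after installs; the shared path below is just the hypothetical CIFS mount from the question:

        # /etc/yum.conf
        [main]
        keepcache=1
        cachedir=/mnt/cifs_share/yum-cache
        # yum creates per-repo subdirectories under cachedir, so each box downloads only
        # what it needs and reuses anything another box has already fetched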

    Read the article

  • Keep Uploaded Files in Sync Across Multiple Servers - LAMP

    - by Dfranc3373
    I have a website that is currently utilizing 2 servers, an application server and a database server; however, the load on the application server is increasing, so we are going to add a second application server. The problem I have is that the website lets users upload files to the server. How do I get the uploaded files onto both of the servers? I do not want to store images directly in a database, as our application is database-intensive already. Is there a way to sync the servers with each other, or is there something else I can do? Any help would be appreciated. Thanks
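
    One common pattern is to move the upload directory onto shared storage rather than syncing copies around; a sketch, where storage01 and the export path are placeholders for whatever box ends up holding the files:

        # /etc/fstab on each application server
        storage01:/exports/uploads   /var/www/uploads   nfs   defaults,noatime   0 0
        # mount -a to pick it up; both app servers then read and write the same files

    Two-way rsync between the app servers is the other common answer, but it gets racy once uploads can land on either box.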

    Read the article

  • ubuntu usb that can be booted on mac and windows?

    - by An Original Alias
    I am a little frustrated at having to use two separate USBs for booting on Mac and Windows. Is there a way that I can have the version for Mac and the version for Windows on one USB? Or, even better, one version that can be booted on both? I have heard of multibootiso, but I'm not entirely sure that it will work on an iMac. If needed, I am willing to use the terminal to make this happen, even if it's a long, complicated process.

    Read the article

  • Setting per-directory umask using ACLs

    - by Yarin
    We want to mimic the behavior of a system-wide 002 umask on a certain directory foo, in order to ensure the following: (1) all sub-directories created underneath foo will have 775 permissions; (2) all files created underneath foo and its subdirectories will have 664 permissions; and both (1) and (2) will happen for files/dirs created by all users, including root, and by all daemons. Assuming that ACLs are enabled on our partition, this is the command we've come up with: setfacl -R -d -m mask:002 foo This seems to be working; I'm basically just looking for confirmation. Is this the most effective way to apply a per-directory umask with an ACL?
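
    For comparison, a sketch of the form that sets explicit default owner/group/other entries rather than only the mask (verify against getfacl output before trusting it):

        # default ACL entries only exist on directories; they're inherited by
        # everything created beneath them
        find foo -type d -exec setfacl -m d:u::rwx,d:g::rwx,d:o::rx {} +
        getfacl foo    # the "default:" lines should now show rwx/rwx/r-x
        # caveat: an ACL can only mask the mode an application asks for, so a daemon
        # that explicitly creates files 0600 will still get 0600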

    Read the article

  • rsync without a password: none of the Google (Server Fault) tutorials worked

    - by Jake Armstrong
    I need to use rsync for a daily backup operation, and in the past (on different servers) I managed to just use an RSA key etc., but now none of the Google (Server Fault) tutorials work at all. It keeps asking me for a password. I have webmin and ssh/root access to both servers. My steps:
    - create a key on server 1
    - send key.pub to server 2
    - add key.pub to .ssh/authorized_keys
    - chmod 700 .ssh/authorized_keys
    - go back to server 1 and try rsync, and it keeps asking for a password...
    rsync command: rsync -avz -e ssh file.txt root@server2:/root
    EDIT: well, I cleaned up everything and this time, instead of giving the key a custom name, I used the standard one on server1, sent the .pub to server2, and it worked like a charm... So the answer is that server1's ssh wasn't even using the right key...
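
    For the record, a short sketch of the sequence that sidesteps the custom-key-name pitfall entirely: ssh-copy-id places the key and sets the permissions sshd expects, and -i names the key explicitly on the rsync side:

        ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''        # default name, no passphrase
        ssh-copy-id root@server2                        # appends the key to server2's authorized_keys
        rsync -avz -e 'ssh -i ~/.ssh/id_rsa' file.txt root@server2:/root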

    Read the article

  • Locale misconfiguration on Debian

    - by JakeTheFish
    perl -e 'print "Hello\n";' perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LC_CTYPE = "UTF-8", LANG = "en_US.UTF-8" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). Hello I'v tried to do export LC_ALL=en_US.UTF-8 export LANGUAGE=en_US.UTF-8 And it workis, till I log out. Is there any permanent solution?

    Read the article

  • Any good resources on setting up an ubuntu virtual machine for web development?

    - by Relequestual
    I'm currently on my placement year at uni, with 4 months left. Before working at my current place I had not used a *nix environment for web development; I had used WAMP. Over the past year I have found some very interesting new tech that needs a bit more than my shared hosting even to play with (e.g. node.js, RoR 3). At work we use a virtual machine for development, but that's all been set up and configured to match the live servers, and is managed with a Puppet server. Are there any really good resources for setting up and configuring an Ubuntu VM as a web server? Work currently uses Ubuntu, so I would assume this is a good OS to use. I do of course know how to use Google, but the noise is just too high, so I thought I'd ask here, as I know many of you will have a ton of bookmarks. Cheers.

    Read the article

  • NSD reply from unexpected source

    - by Ximik
    I have a server with NSD. It has two IPs, MAIN_IP and ADD_IP. When I try to get the IP of my site from the server itself I get the right output:
        dig @localhost my_site.com
    But when I try the same from my PC, I get:
        dig @my_ns_server.com my_site.com
        ;; reply from unexpected source: MAIN_IP#53, expected ADD_IP#53
    (ADD_IP is the IP of my_ns_server.com.) What should I do?
    UPD: My interfaces conf:
        auto eth2
        allow-hotplug eth2
        iface eth2 inet static
            address xxx.xxx.xxx.234
            netmask 255.255.255.252
            network xxx.xxx.xxx.232
            broadcast xxx.xxx.xxx.235
            gateway xxx.xxx.xxx.233
            dns-nameservers MY_ISP_IP
            dns-search MY_ISP_DOMAIN
        auto eth2:0
        iface eth2:0 inet static
            address xxx.xxx.xxx.124
            netmask 255.255.255.0
    (xxx.xxx.xxx is the same for all IPs)
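
    When NSD listens on the wildcard address, the kernel is free to source UDP replies from the primary address (MAIN_IP), which is exactly what the dig warning describes. A sketch of binding NSD to the service address instead; the address below stands in for ADD_IP and the config path may differ by NSD version:

        # /etc/nsd/nsd.conf
        server:
            ip-address: xxx.xxx.xxx.124
        # restart NSD afterwards so the sockets are re-bound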

    Read the article

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server set up behind my cable modem (Bresnan Communications). I want this box to be able to send out email (not receive) for notifications and whatnot.
    What I have already done: I have installed and configured sendmail, and I have added mail.bresnan.net as my SMART_HOST directive.
    What I believe the problem is: when I attempt to send an email I get the following in my mail log:
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost
        Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1]
        Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery)
        Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net.
    You can see where the message is accepted for delivery by my sendmail server, and then where it attempts to hand off to Bresnan's server and times out. This is where my question is. Astute readers will notice that pmx0.bresnan.net is not what I have my SMART_HOST directive set as. This is the (outside?) MX server for the bresnan.com/net domain. Apparently Bresnan has their network configured so that you cannot access this server from within their own network; instead you must use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain.
    What I have tried: setting a hosts entry so that the pmx0 server points to the mail IP address. This doesn't work, which makes sense: sendmail is doing an MX query to find the server, and that query already returns an IP, so there is never a need for a 'normal' DNS lookup and the hosts file never gets involved.
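
    Sendmail treats a SMART_HOST wrapped in square brackets as a literal host and skips the MX lookup, which sounds like exactly the behavior needed here. A sketch of the sendmail.mc change (rebuild sendmail.cf however your distribution normally does it):

        dnl in sendmail.mc: the brackets suppress MX resolution of the smart host
        define(`SMART_HOST', `[mail.bresnan.net]')dnl
        # then, on Red Hat-style systems:
        #   make -C /etc/mail && service sendmail restart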

    Read the article

  • Cygwin file and directory user and group

    - by dvanaria
    I use Cygwin as my main development environment on both my home and work computers. In order to share files between the two computers, I use Dropbox, which is installed in the following folder on both: c:\cygwin\home\dvanaria\dropbox Everything works great, except for one thing. When I'm working on my home computer and do an ls -l on any directory, all the files show up as owned by dvanaria, group Users. But when I work from my work computer, an ls -l shows all files as being owned by Administrators, group Domain Users. I know Cygwin uses the /etc/passwd file for some kind of mapping between Windows users and permissions. But to be honest I have no idea how this file works or how the mapping is done under Cygwin. Could anyone help figure this out? The main problem is that I can't edit any files when using my work computer, only read them.
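
    A sketch of how that mapping is usually regenerated on the work (domain-joined) machine, so the domain account gets a proper /etc/passwd entry instead of falling through to Administrators; restricting the domain pass to a single user keeps the files small on a big domain:

        # run from a Cygwin shell on the work computer
        mkpasswd -l > /etc/passwd                  # local accounts
        mkpasswd -d -u dvanaria >> /etc/passwd     # just your own domain account
        mkgroup  -l > /etc/group
        mkgroup  -d >> /etc/group                  # domain groups (can be slow on large domains)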

    Read the article

  • Software RAID 1 broken, how do I fix this?

    - by Edward
    I'm running CentOS 6 x86_64. There is a software RAID 1 being used on the two internal 80GB drives. I got the following e-mail sent to me:
        A DegradedArray event had been detected on md device /dev/md1.
        Faithfully yours, etc.
        P.S. The /proc/mdstat file currently contains the following:
        Personalities : [raid1]
        md0 : active raid1 sda1[0]
              511988 blocks super 1.0 [2/1] [U_]
        md1 : active raid1 sda2[0]
              8190968 blocks super 1.1 [2/1] [U_]
              bitmap: 1/1 pages [4KB], 65536KB chunk
        md4 : active raid1 sdc1[0] sdb1[1]
              1953512400 blocks super 1.2 [2/2] [UU]
        md3 : active raid1 sdd5[1] sda5[0]
              61224892 blocks super 1.1 [2/2] [UU]
              bitmap: 1/1 pages [4KB], 65536KB chunk
        md2 : active raid1 sdd3[1] sda3[0]
              8190968 blocks super 1.1 [2/2] [UU]
        unused devices: <none>
    The system appears to have booted fine and is working. The two drives' content did not change at all. I only removed and reinstalled them while I was booted on the CentOS Live DVD. How do I get the array working again?
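
    A sketch of the usual recovery, with one big assumption flagged: since md2 and md3 already pair sda with sdd, the missing halves of md0 and md1 are probably /dev/sdd1 and /dev/sdd2, but confirm that with mdadm --examine before adding anything:

        mdadm --examine /dev/sdd1 /dev/sdd2        # check these really belong to md0/md1
        mdadm --manage /dev/md0 --add /dev/sdd1
        mdadm --manage /dev/md1 --add /dev/sdd2
        watch cat /proc/mdstat                     # follow the resync until both show [UU] again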

    Read the article
