Search Results

Search found 13128 results on 526 pages for 'square root'.

  • Why's SMC failing on startup?

    - by Brian Knoblauch
    Trying to remove a user from one of our servers, but I seem to be thwarted at every turn... SMC refuses to load the user list (failing with a NoClassDefFoundError in the listAll method of UserContent). vipw just returns "vipw: /etc/passwd file busy". I'm the only user on the system at the moment (it's our backup SRSS box), and both of these fail even right after a reboot. Unfortunately I don't have console access at the moment either (or I would try single-user mode). Of course, even if init state S worked and let me do this one task, it wouldn't solve the root problem. Ideas?
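
    A hedged diagnostic sketch (not from the post; the lock-file paths are assumptions): "file busy" from vipw usually means something still holds the passwd lock, so identifying the holder, or a stale lock left behind by a crashed editor session, may be enough:

        # which process (if any) still has the password databases open?
        fuser /etc/passwd /etc/shadow
        # a crashed vipw/passwd run can leave a stale temp/lock file behind
        ls -l /etc/ptmp /etc/stmp 2>/dev/null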

  • How to start a service at boot time in ubuntu 12.04, run as a different user?

    - by Alex
    I have a server, ClueReleaseManager, which I have installed on an Ubuntu 12.04 system under a separate user (named pypi), and I want this server to start at boot. I already tried a simple bash script with some commands (log in as user pypi, activate a virtual Python environment, start the server), but it does not work properly: either the terminal crashes, or when I query the status of the service it reports started but leaves me logged in as user pypi...? So, here is the question: what steps do I take so that the ClueReleaseManager service starts properly at boot time and can be controlled (start/stop/...) at runtime, while running as the user pypi? Additional information and constraints: I want to do this as simply as possible, without installing any other packages/programs. I am not familiar with the Ubuntu 12.04 init structure, and the information I found on the web is sparse, confusing, incorrect, or does not cover running a service as a user other than root.
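
    One possible approach, sketched under the assumption that Ubuntu 12.04's stock Upstart (1.5) counts as "nothing extra to install"; the job name, paths, and virtualenv location are hypothetical:

        # /etc/init/cluereleasemanager.conf
        description "ClueReleaseManager for user pypi"
        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn
        # Upstart 1.4+ can drop privileges itself, no su/sudo gymnastics needed
        setuid pypi
        # activate the virtual Python environment, then hand over to the server
        exec /bin/sh -c '. /home/pypi/venv/bin/activate && exec cluereleasemanager'

    With that file in place, sudo start cluereleasemanager and sudo stop cluereleasemanager give runtime control.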

  • Only allow the POST method for a specific file in a directory

    - by Dave Chen
    I have one file that should only be accessible via the POST method: /var/www/folder/index.php. The document root is /var/www/ and index.php is nested inside a folder. My configuration is as follows:

        <Directory "/var/www/folder">
            <Files "index.php">
                order deny,allow
                Allow from all
                <LimitExcept POST>
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>

    I visit my server at 127.0.0.1/folder but I can GET and POST the file just like normal. I've also tried reversing the order, order allow,deny, require, limitexcept and limit. How can I only allow POST requests to be processed by one file in a folder?
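
    A hedged sketch of the likely fix for Apache 2.2: with Order deny,allow, the file-wide Allow from all is evaluated after the Deny inside LimitExcept and wins for every method; keeping the access control entirely inside LimitExcept (with the order flipped) removes that interaction:

        <Directory "/var/www/folder">
            <Files "index.php">
                <LimitExcept POST>
                    Order allow,deny
                    Deny from all
                </LimitExcept>
            </Files>
        </Directory>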

  • Can't add Samba users in Ubuntu

    - by petersohn
    I am using (K)Ubuntu 10.10 and I'm trying to set up Samba shares. When I add a Samba user in the KDE Samba configuration, exit the configuration dialog, then enter it again, I see that the user has not been added. Then I tried it from the command line (running as root):

        smbpasswd -a peet

    'peet' is my normal user name. It asks for a password, then does something on my hard drive, but I can see no password file created in /etc/samba, and neither does the date of my smb.conf change. I also don't see the Samba user when I open the Samba configuration dialog.
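
    A hedged check, assuming the Ubuntu default of passdb backend = tdbsam: smbpasswd then writes to a .tdb database rather than any file under /etc/samba, so the absence of a new file there proves nothing. Something like:

        # which users does Samba actually know about?
        sudo pdbedit -L -v
        # the default tdbsam database normally lives here on Ubuntu
        ls -l /var/lib/samba/passdb.tdb
        # confirm which backend smb.conf selects (-v prints defaults too)
        testparm -sv 2>/dev/null | grep -i "passdb backend"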

  • Why are default spamassassin rules not being applied to emails we generate?

    - by Chance
    My company uses a standalone SpamAssassin install to test marketing emails; however, mail originating from us does not seem to run the full gamut of tests. For example, SpamAssassin has a default rule that flags messages containing the phrase "Dear [Something]", and it properly flags spam that I feed it. It does not, however, apply that same rule to in-house email I send it. Is it possible that SpamAssassin has white-listed us somehow, perhaps because the mail originates in the same domain as the server or receiver? I believe most of the recent SpamAssassin questions have been mine, so thanks for bearing with me as I figure this out! Chance

    EDIT: Details on our SA setup: we are piping the emails in on the command line with

        spamc -R < test_email.eml

    Identical results testing as root or a normal user; no user_prefs file.
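
    A hedged guess at the mechanism (not confirmed by the post): SpamAssassin treats mail from its trusted_networks as internal, which suppresses several rule classes and can add score-lowering hits like ALL_TRUSTED. A way to check, with the usual config path as an assumption:

        # watch how SA classifies the relay path for the in-house message
        spamassassin -t -D < test_email.eml 2>&1 | grep -i trust
        # if the originating relay is auto-trusted, pin the list explicitly in
        # /etc/mail/spamassassin/local.cf (the network below is a placeholder):
        #   trusted_networks 203.0.113.0/24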

  • Determine process using a port, without sudo

    - by pat
    I'd like to find out which process (in particular, the process id) is using a given port. The one catch is, I don't want to use sudo, nor am I logged in as root. The processes I want this to work for are run by the same user that is asking, so I would have thought this was simple. Both lsof and netstat won't tell me the process id unless I run them with sudo, though they will tell me that the port is in use. As some extra context: I have various apps all connecting via SSH to a server I manage and creating reverse port forwards. Once those are set up, my server does some processing using the forwarded port, and then the connection can be killed. If I can map specific ports (each app has its own) to processes, this is a simple script. Any suggestions? This is on an Ubuntu box, by the way, but I'm guessing any solution will be standard across most Linux distros.
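
    A hedged sketch: for sockets owned by the invoking user, iproute2's ss can report the owning PID without root, and fuser can do the port-to-PID mapping too (the port number is hypothetical):

        # -p works unprivileged for your own processes
        ss -ltnp | grep ':8022 '
        # alternative: fuser resolves a TCP port to any PID it can inspect
        fuser -n tcp 8022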

  • Trying to install SawMill and getting the following error:

    - by Itai Ganot
    Here is the error:

        [root@sawmill sawmill]# ./sawmill
        ./sawmill: error while loading shared libraries: libldap-2.3.so.0: cannot open shared object file: No such file or directory

    Using "yum provides libldap_r-2.3.so.0" I found that the package which includes this file is compat-openldap-2.3.43-2.el6.i686. After installing it I still get the error. If I use locate, I can find the file in /usr/lib, so I tried creating a symbolic link to it from /usr/lib to /usr/lib64, but I still get the same error. I also tried setting LD_LIBRARY_PATH=/usr/lib and LD_LIBRARY_PATH=/usr/lib64, but neither lets me run the sawmill installation script. Does anyone know how to solve this issue?
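
    A hedged guess (not from the post): the loader ignores 32-bit libraries when the binary is 64-bit and vice versa, so the i686 compat package may simply be the wrong ELF class, and no symlink between /usr/lib and /usr/lib64 will fix that. To check:

        # which ELF class is the binary?
        file ./sawmill
        # which libraries are still unresolved?
        ldd ./sawmill | grep "not found"
        # if sawmill is 64-bit, install the matching compat package instead
        yum install compat-openldap.x86_64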

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I want to put Varnish in front of an Apache and a Tomcat on the same server; depending on the IP requested, it goes to a different backend. This works. For most of the sites the default Varnish logic will work just fine, but for some specific sites I wish to use custom VCL code. I can test for the host name and include config files for the specific domains, but this only works inside the individual subroutines (recv etc.). Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
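
    A hedged sketch of one common pattern: VCL includes are only legal at the top level, but a single included file can carry named subroutines for every phase of one domain, leaving only the host tests in default.vcl (file and domain names are hypothetical):

        # default.vcl
        include "sites/example-com.vcl";

        sub vcl_recv {
            if (req.http.host ~ "(^|\.)example\.com$") { call example_com_recv; }
        }
        sub vcl_fetch {
            if (req.http.host ~ "(^|\.)example\.com$") { call example_com_fetch; }
        }

        # sites/example-com.vcl -- all custom logic for one domain, in one file
        sub example_com_recv  { /* custom recv logic */ }
        sub example_com_fetch { /* custom fetch logic */ }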

  • How to efficiently dump a huge MySQL innodb database?

    - by Jagbir
    I have an Ubuntu 10.04 production MySQL database server where the total size of the databases is 260 GB, while the root partition (where the data is stored) is itself only 300 GB; essentially, around 96% of / is full and there's no space left for storing a dump/backup. No other disk is attached to the server as of now. My task is to migrate this database to another server sitting in a different datacenter. How do I do that efficiently, with minimum downtime? I'm thinking along these lines:

    1. Request an extra drive for the server and take a dump onto that drive.
    2. Transfer the dump to the new server, restore it, and make the new server a slave of the existing one to keep the data in sync.
    3. When the cut-over is due, break replication, update the slave config to accept read/write requests, make the old server read-only so it won't entertain any write requests, and tell the app developers to update their config with the new IP address for the DB.

    What are your suggestions to improve this, or any better alternative approach for this task?
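
    A hedged refinement of step 1 (host names are placeholders; --master-data assumes binary logging is enabled on the source, which the slave step needs anyway): the dump never has to touch the full root partition if it is streamed to the new server over SSH, and --master-data=2 records the binlog position for attaching the new server as a slave afterwards:

        mysqldump -u root -p --single-transaction --master-data=2 --quick \
            --all-databases \
          | gzip -c \
          | ssh user@new-server 'cat > /data/full-dump.sql.gz'

    --single-transaction gives a consistent InnoDB snapshot without locking the tables for the whole dump, which fits the minimum-downtime goal.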

  • OpenVPN - client-to-client traffic working in one direction but not the other

    - by Pawz
    I have the following VPN configuration:

        +------------+                +------------+                +------------+
        |  outpost   |----------------|    kino    |----------------|  guchuko   |
        +------------+                +------------+                +------------+
        OS: FreeBSD 6.2               OS: Gentoo 2.6.32             OS: Gentoo 2.6.33.3
        Keyname: client3              Keyname: server               Keyname: client1
        eth0: 10.0.1.254              eth0: 203.x.x.x               eth0: 192.168.0.6
        tun0: 192.168.150.18          tun0: 192.168.150.1           tun0: 192.168.150.10
        P-t-P: 192.168.150.17         P-t-P: 192.168.150.2          P-t-P: 192.168.150.9

    Kino is the server and has client-to-client enabled. I am using "fragment 1400" and "mssfix" on all three machines, and an MTU test on both connections is successful. All three machines have IP forwarding enabled, via this on the Gentoo boxes:

        net.ipv4.conf.all.forwarding = 1

    and this on the FreeBSD box:

        net.inet.ip.forwarding: 1

    The server's "ccd" directory contains these files:

        client1:  iroute 192.168.0.0 255.255.255.0
        client3:  iroute 10.0.1.0 255.255.255.0

    The server config has these routes configured:

        push "route 192.168.0.0 255.255.255.0"
        push "route 10.0.1.0 255.255.255.0"
        route 192.168.0.0 255.255.255.0
        route 10.0.1.0 255.255.255.0

    Kino's routing table looks like this:

        192.168.150.0   192.168.150.2   255.255.255.0    UG   0 0 0  tun0
        10.0.1.0        192.168.150.2   255.255.255.0    UG   0 0 0  tun0
        192.168.0.0     192.168.150.2   255.255.255.0    UG   0 0 0  tun0
        192.168.150.2   0.0.0.0         255.255.255.255  UH   0 0 0  tun0

    Outpost's like this:

        192.168.150     192.168.150.17  UGS  0  17  tun0
        192.168.0       192.168.150.17  UGS  0   2  tun0
        192.168.150.17  192.168.150.18  UH   3   0  tun0

    And Guchuko's like this:

        192.168.150.0   192.168.150.9   255.255.255.0    UG   0 0 0  tun0
        10.0.1.0        192.168.150.9   255.255.255.0    UG   0 0 0  tun0
        192.168.150.9   0.0.0.0         255.255.255.255  UH   0 0 0  tun0

    Now, the tests. Pings from Guchuko to Outpost's LAN IP work OK, as does the reverse (pings from Outpost to Guchuko's LAN IP). However... pings from Outpost to a machine on Guchuko's LAN work fine:

        .(( root@outpost )). (( 06:39 PM )) :: ~ ::
        # ping 192.168.0.3
        PING 192.168.0.3 (192.168.0.3): 56 data bytes
        64 bytes from 192.168.0.3: icmp_seq=0 ttl=63 time=462.641 ms
        64 bytes from 192.168.0.3: icmp_seq=1 ttl=63 time=557.909 ms

    But a ping from Guchuko to a machine on Outpost's LAN does not:

        .(( root@guchuko )). (( 06:43 PM )) :: ~ ::
        # ping 10.0.1.253
        PING 10.0.1.253 (10.0.1.253) 56(84) bytes of data.
        --- 10.0.1.253 ping statistics ---
        3 packets transmitted, 0 received, 100% packet loss, time 2000ms

    Guchuko's tcpdump of tun0 shows:

        18:46:27.716931 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 1, length 64
        18:46:28.716715 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 2, length 64
        18:46:29.716714 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64

    Outpost's tcpdump on tun0 shows:

        18:44:00.333341 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 3, length 64
        18:44:01.334073 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 4, length 64
        18:44:02.331849 IP 192.168.150.10 > 10.0.1.253: ICMP echo request, id 63009, seq 5, length 64

    So Outpost is receiving the ICMP requests destined for the machine on its subnet, but appears not to be forwarding them. Outpost has gateway_enable="YES" in its rc.conf, which correctly sets net.inet.ip.forwarding to 1 as mentioned earlier. As far as I know, that's all that's required to make a FreeBSD box forward packets between interfaces. Is there something else I could be forgetting? FWIW, pinging 10.0.1.253 from Kino has the same result: the traffic does not get forwarded.
    UPDATE: I've found that I can only ping certain IPs on Guchuko's LAN from Outpost. From Outpost I can ping 192.168.0.3 and 192.168.0.2, but 192.168.0.99 and 192.168.0.4 are unreachable. The same tcpdump behaviour can be seen. I think this means the problem can't be due to IP forwarding or routing, because Outpost can reach SOME hosts on Guchuko's LAN but not others, and likewise Guchuko can reach two hosts on Outpost's LAN but not others. This baffles me.
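
    A hedged thought prompted by the partial reachability (an assumption, not a diagnosis): LAN hosts whose default gateway is not the local OpenVPN box will receive the echo request yet send the reply to the wrong place, which looks exactly like this. On one of the unreachable hosts:

        # does this host have any route back to the tunnel network?
        netstat -rn
        # if not, add one via the local OpenVPN endpoint (address from the diagram)
        route add -net 192.168.150.0 netmask 255.255.255.0 gw 10.0.1.254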

  • How to back up MySQL (mysqldump) when Memcached is installed?

    - by cewebugil
    The server OS is CentOS, with Memcached installed. Before Memcached was installed, I used:

        mysqldump -u root -p --lock-tables --add-locks --disable-keys --skip-extended-insert --quick wcraze > /var/backup/backup.sql

    But now Memcached has been installed. According to Wikipedia: "When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order." This means a new data entry is not saved directly in MySQL but in Memcached instead; until limit_maxbytes is full, only the least-accessed data gets saved to MySQL. So some data lives in Memcached rather than in MySQL, and when I back up, the newest entries are missing from the dump. What is the right way to back up?

  • How do I access files inside a Wubi virtual ext4 Ubuntu partition from within Windows?

    - by aalaap
    I just installed Ubuntu 10.04 using Wubi on a PC that has Windows XP and Windows 7 installed. I was working in it for a while and everything is just fine. However, when I booted back into Windows 7, I couldn't figure out a way to access the files I had created or downloaded inside the Ubuntu partition. They're in a virtual disk called root.disk in C:\ubuntu\disks. Is there a way I can mount this virtual disk in Windows, or at least browse its contents and extract what I need?

  • httpd (no pid file) not running while restarting apache

    - by user59503
    I am working on Ubuntu. I get these error messages when I try to restart Apache:

        root@XXX:/etc/init.d# sudo /etc/init.d/apache2 restart
        * Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using xxx.xxx.xx.xxx for ServerName
        httpd (no pid file) not running
        apache2: Could not reliably determine the server's fully qualified domain name, using xxx.xxx.xx.xxx for ServerName
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs

    And I get the following when I run netstat -pant:

        tcp    0   0  0.0.0.0:80         0.0.0.0:*             LISTEN      0  32748  9950/httpd
        tcp  429   0  xxx.xxx.xx.xxx:80  xxx.xxx.xx.xxx:xxxxx  CLOSE_WAIT  0  0
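
    A hedged reading of the output above: PID 9950 is an httpd instance the apache2 init script did not start (hence "no pid file"), and it already owns port 80, so apache2 cannot bind. Stopping the stray process first should clear both messages:

        # confirm what owns port 80
        sudo netstat -pant | grep ':80 '
        # stop the stray httpd (PID from the netstat output), then start apache2
        sudo kill 9950
        sudo /etc/init.d/apache2 start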

  • vncviewer connection refused (61)

    - by coure2011
    I have root access to a VPS (CentOS 6). I installed VNCServer using this guide: http://tournasdimitrios1.wordpress.com/2011/02/02/how-to-setup-vnc-server-on-centos-5-x-fedora-11/ Everything went fine and the server is running (started via the terminal). Now I am trying to connect to that server via vncviewer (Mac OS), but it gives me the error: Connection refused (61). I am providing only the IP address of the VPS; maybe I also need a port? How do I configure the port on vncserver? Or is it something else?
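
    A hedged note (the display number is assumed to be :1): vncserver listens on TCP 5900 plus the display number, so display :1 means port 5901, and both the viewer address and the VPS firewall need to reflect that:

        # from the Mac, name the display/port explicitly
        vncviewer xxx.xxx.xxx.xxx:5901
        # on the VPS, confirm the listener and open the port
        netstat -tlnp | grep Xvnc
        iptables -I INPUT -p tcp --dport 5901 -j ACCEPT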

  • How to determine if a file has been backed up?

    - by Console
    I'm trying to consolidate old drives onto new ones of larger capacity. Sometimes files have been renamed but are otherwise identical. Sometimes an old directory has just a few more files in it than a newer directory with the same name. Sometimes a file has the same name but the size differs. So I often find myself asking: are there any files on this old drive or directory that I haven't already copied to the new drive? I just want to know that I have the files; I don't want to sync anything automatically (syncing tools tend to just sync, creating duplicate folder structures and other problems, so I prefer to do it by hand). Basically, if an old drive has a file called "foo.bar" ten directories deep, and my new big drive has an identical file called "oldstuff.zip" in the root, I just want a "yes, you have it" or "no, unique files exist". Is there a free tool, a script, or a quick and easy method (Mac/Unix or Windows) to get the answer?
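
    A hedged by-hand sketch (the mount paths are placeholders; GNU md5sum assumed, with md5 -r as the Mac near-equivalent): content hashes ignore names and locations entirely, so diffing the two hash sets answers exactly "do unique files exist?":

        find /mnt/old -type f -exec md5sum {} + | awk '{print $1}' | sort -u > old.sums
        find /mnt/new -type f -exec md5sum {} + | awk '{print $1}' | sort -u > new.sums
        # hashes that exist on the old drive but nowhere on the new one
        comm -23 old.sums new.sums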

  • How to make nginx only respond to one domain?

    - by larryzhao
    I am pretty new to nginx; I host my Rails application on nginx + Passenger. I want my website to be accessible via only one domain, so I set my nginx conf like this:

        server {
            listen 80;
            server_name mydomain.com www.mydomain.com;
            root /var/deploy/myapp/current/public;
            passenger_enabled on;

            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 1y;
                add_header Cache-Control public;
            }
        }

    I specify the server_name directive, but it still answers any request that points at this IP, and I can see in access.log that it responds to other domain names. Am I doing anything wrong?
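
    A hedged sketch of the usual fix: nginx hands requests with an unmatched Host header to the default (or simply the first) server block, so the site block above keeps answering for everything until a catch-all exists:

        # catch-all for any Host not matched by another server block
        server {
            listen 80 default_server;
            server_name _;
            return 444;   # nginx-specific: close the connection with no response
        }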

  • Install Eclipse / StatET on Debian server for all users.

    - by Joris Meys
    I've manually downloaded, unpacked and installed the latest Eclipse (3.6.1) on a Debian server (2.6.26-2-amd64). Eclipse can now be run by all users in our group, but when I tried to install the StatET plugin, I quickly found out that it was only visible and usable for me. I have a sudo password on my account and a root password. I wondered if sudo eclipse was all I needed, but as I'm very new to the whole sysadmin thing (our old one is on "prolonged leave" and currently working in Spain) I'd rather check before blowing up the server. Any help on how to configure Eclipse for all users simultaneously is very much appreciated.
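
    A hedged sketch (the repository URL and feature id are assumptions to verify against the StatET site): installing from an ordinary user session lands in that user's ~/.eclipse, while running the p2 director as the owner of the shared install writes the plugin where everyone sees it:

        sudo ./eclipse -nosplash \
          -application org.eclipse.equinox.p2.director \
          -repository http://download.walware.de/eclipse-3.6 \
          -installIU de.walware.statet.feature.group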

  • I need a few minutes of dedicated server a week, but not for hosting, just to convert ogg etc

    - by talkingnews
    I'm completely happy with my web hosting; it's just that I need to do one little thing they won't allow, and that's run an instance of SoX to convert about 30 MP3s to Ogg files, in various directories, a couple of times a week, done automatically in response to the detected upload of an MP3. I'm probably looking at a minute of server time over the whole week. I've had unhelpful suggestions on other forums like "why not leave your home PC on 24 hours a day and use all your ISP bandwidth to do this", which doesn't work for me. I know that I can host files on, say, Amazon S3, but is there something similar for my needs? All it would need to do is: wget/FTP the MP3 files, convert them to Ogg, FTP the files back to my hosting. Of course, none of this would be needed if there were such a thing as a compiled binary of SoX (or any MP3-to-Ogg converter) for CentOS which I could upload without needing root access, but I've given up asking for that one; always open to suggestions, though!
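
    A hedged sketch of the build-it-yourself route, assuming a local machine (or VM) matching the host's CentOS release and architecture: SoX installs happily into a home directory, and the resulting tree can be uploaded as-is:

        # on the matching CentOS box; the version number is hypothetical
        tar xzf sox-14.3.2.tar.gz && cd sox-14.3.2
        ./configure --prefix=$HOME/sox
        make && make install
        # upload ~/sox to the host, then call ~/sox/bin/sox from the watch script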

  • Weird behaviour from a cron job

    - by The DOCTOR from TARDIS
    I have set the crontab like this:

        */5 0 * * * /www/permitChat.sh

    and /www/permitChat.sh is this:

        # We set the name of the file in the
        # variable, along with the complete path.
        sFilePath=`date +\/www\/ChatLogs\/%Y\/%m/%d_%m_%Y.txt`

        # First we set its permissions to readable
        # by all users, then modify them so the
        # file is writable only by root.
        chmod a=r $sFilePath
        chmod u+w $sFilePath
        ls -lh $sFilePath

    The trouble I am facing is that the job executes after 12:00 PM every day, instead of running every 5 minutes between 12:00 AM and 1:00 AM. What could be wrong? All my system variables appear to be synced.
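
    A hedged first check (not from the post): cron fires according to the daemon's local time, so a clean 12-hour skew usually points at the system clock or timezone rather than the crontab line:

        # what time and zone does the box believe it is?
        date
        cat /etc/timezone     # Debian/Ubuntu; RHEL keeps it in /etc/sysconfig/clock
        # cron only notices a timezone change after a restart
        sudo service cron restart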

  • Adding user to chroot environment

    - by Neo
    I've created a chroot system on my Ubuntu machine using schroot and debootstrap, based on minimal Ubuntu. However, I can't seem to add a new user to this chroot environment. Here is what happens: I enter the schroot as root and add a new user (tried both the adduser and useradd commands). The user shows up in /etc/passwd and I can 'su' into the new account. So far so good. But when I log out of the schroot and re-enter it, the user I created has vanished! There is no mention of that user in /etc/passwd either. How do I make the new user permanent?
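
    A hedged explanation sketch: schroot's default session setup re-copies the NSS databases, /etc/passwd included, from the host into the chroot, silently reverting users added inside. Either create the user on the host too, or stop copying passwd (paths per the schroot defaults; they can differ by version):

        # /etc/schroot/default/nssdatabases -- comment out whichever databases
        # the chroot should keep its own copy of:
        #   passwd
        #   shadow
        #   group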

  • Server location moved - how can I move the files?

    - by Bernhard
    Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server, which is accessible by SSH of course :-) I need to move all the data from the old space, but it is many GB of files. Is there a way to fetch all the files directly from the old FTP onto the new server's storage, and not via a third station (my local machine)? I've tried it with ftp but without success; I think I used the wrong commands. Is there a way to set something like this up, including all files and directories? Thank you in advance, Bernhard
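
    A hedged sketch of pulling the data directly (host, credentials and paths are placeholders): an FTP client run on the new root server itself keeps the workstation out of the path, and lftp's mirror survives interruptions:

        # run on the new server
        lftp -u olduser,oldpass ftp.old-host.example \
          -e 'mirror --verbose --continue / /srv/migrated-data; quit'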

  • "killed" message from cron.daily, but not when run from command line

    - by Dan Stahlke
    On Fedora 17, I put a file into /etc/cron.daily with the following contents:

        cd /
        su dstahlke /home/dstahlke/bin/anacron-daily.sh
        exit 0

    For some reason, I get a mail every day that just says "/etc/cron.daily/dstahlke-daily: ...killed". I tried with and without the exit 0 line above (I noticed that some system scripts have it and others don't; I'm not sure of its purpose). Running /etc/cron.daily/dstahlke-daily from the command line as root produces no "...killed" message. Other than the message, everything seems to work fine. Putting set -x in the above script, as well as in /home/dstahlke/bin/anacron-daily.sh, shows that the "...killed" message appears just after the latter script terminates (or perhaps just after the su command finishes). What causes the "...killed" message? Or is there a more acceptable way to have anacron run a user script daily? I figured that putting this in /etc/cron.daily would let the system coordinate all of the daily tasks, rather than potentially running my task concurrently with the system tasks.
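
    A hedged rewrite of the wrapper (an assumption, not a confirmed fix): giving su an explicit shell and a -c command, instead of passing the script path as a positional argument to the login shell, is the conventional form and sidesteps one common source of odd session-teardown messages:

        #!/bin/sh
        cd /
        exec su -s /bin/sh -c /home/dstahlke/bin/anacron-daily.sh dstahlke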

  • SugarCRM CE Won't Install on Ubuntu 10.10

    - by Trenton Scott
    I have a fresh copy of Ubuntu 10.10 server with a working LAMP installation. I downloaded SugarCRM and browsed to its directory to open the installer (via Firefox). The installer appears fine, I accept the license agreement, and it proceeds to check file permissions. It advises that several directories need looser permissions (chmod 766), and I adjust them accordingly. After making the changes, I click "recheck" and the page just reloads as blank (white). There are no errors visible, nothing in the server logs (Apache/PHP), and installation cannot continue. I'm able to get back to the installation tool by readjusting the permissions to my defaults (0755 for directories, 0644 for files). All files/folders are owned by root and the www-data group. Any idea what's wrong?
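
    A hedged guess at the blank page: chmod 766 strips the execute (search) bit that directories must keep, so once applied, PHP cannot even traverse them; 775 with web-server ownership is the usual prescription (the path is hypothetical):

        cd /var/www/sugarcrm
        sudo chown -R www-data:www-data .
        sudo find . -type d -exec chmod 775 {} \;
        sudo find . -type f -exec chmod 664 {} \;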

  • How to set up automatic redirection in Tomcat

    - by Registered User
    I have a site, http://social.openitup.in; right now what you are seeing there is the default Tomcat6 page. I am using mod_ajp as a front end, and the Apache vhost configuration is:

        <VirtualHost *:80>
            ServerName social.openitup.in
            ServerAdmin webmaster@localhost
            ProxyRequests off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPass / ajp://192.168.1.19:8009/
            ProxyPassReverse / ajp://192.168.1.19:8009/
        </VirtualHost>

    However, I have an application running on it at http://social.openitup.in/olat. What I want is that when someone opens http://social.openitup.in, rather than seeing the Tomcat6 home page from /var/lib/tomcat6/webapps/ROOT/index.html, the person is redirected to the OLAT application in /var/lib/tomcat6/webapps/olat. How can this be achieved? The above vhost configuration is on a machine separate from where OLAT is running.
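
    A hedged sketch (placement inside the vhost above is an assumption): because mod_rewrite's rules run before the ProxyPass mapping, an external redirect of just the bare root sends browsers to /olat/ while everything else still flows through AJP:

        RewriteEngine On
        # redirect only the bare root; all other paths remain untouched
        RewriteRule ^/$ /olat/ [R,L]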

  • Problems connecting CentOS on VMware to the network using a bridged connection

    - by Sakin
    Hi, I installed CentOS on VMware running on Windows XP. When trying to configure it to connect to the internet with a bridged configuration, I get an error when bringing up the network interface:

        [root@VMLinux ~]# /etc/init.d/network start
        Bringing up loopback interface:                 [  OK  ]
        Bringing up interface eth0:
        Determining IP information for eth0... failed   [FAILED]

    The VM is running on a machine that has access to the network; I tried it on two different networks that have DHCP enabled. Everything works fine when using a NAT connection through my host. How can I make the bridge work for me? Thanks.
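
    A hedged checklist (standard CentOS paths assumed): DHCP failing only in bridged mode is usually either the guest's interface config or the VMware bridge bound to the wrong physical NIC:

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- minimal DHCP setup
        DEVICE=eth0
        BOOTPROTO=dhcp
        ONBOOT=yes

    Then run service network restart in the guest; and on the XP host, open the Virtual Network Editor and bridge VMnet0 explicitly to the physical adapter that is actually connected, rather than leaving it on "Automatic".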
