Search Results

Search found 13128 results on 526 pages for 'square root'.


  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and to use something like mount --bind on several other OpenVZ guests to trick them into using the environment installed by the master guest? The point of this would be that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest and greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase: I would like several OpenVZ guests (developers' guests, for example) whose /bin, /usr (and so on) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by this whole group of guests. For what it's worth, we're running Debian 6.

    Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)   - Container's root is already mounted
        vzquota : (error)   - there are opened files inside Container's private area
        vzquota : (error)   - your current working directory is inside Container's
        vzquota : (error)     private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes, start the guest, and then mount them after the guest has started, the guest never sees them mounted.
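
    One approach worth trying, sketched under the assumption that vzctl's per-container action scripts behave as documented on the OpenVZ wiki: binds made from an /etc/vz/conf/$VEID.mount hook are set up as part of mounting the container itself, so they are torn down with it and don't trip vzquota's "already mounted" check the way pre-existing binds do. The container ID and master path below are hypothetical.

        #!/bin/bash
        # /etc/vz/conf/1102.mount -- run on the host each time CT 1102 is mounted
        source /etc/vz/vz.conf           # global OpenVZ defaults
        source ${VE_CONFFILE}            # this container's config; provides $VE_ROOT
        MASTER=/var/lib/vz/private/1000  # hypothetical "master" guest's private area
        for d in bin sbin lib usr; do
            mount -n --bind ${MASTER}/${d} ${VE_ROOT}/${d}
        done
        # note: a truly read-only bind may need a second "mount -o remount,ro"
        # pass on older kernels

    The script must be executable (chmod +x) for vzctl to run it.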


  • What is the uninstall procedure for software installed via "make install" on CentOS 6.2?

    - by gkdsp
    I installed OCILIB on my CentOS 6.2 server some time ago, and now I want to install a newer version. The vendor requires an uninstall but doesn't provide instructions (http://orclib.sourceforge.net/doc/html/group_g_install.html); I'm guessing that's because it's trivial for people with a Linux background. I installed this software using:

        step 1: # ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        step 2: # make
        step 3: # su root
        step 4: # make install
        step 5: # gcc -g -DOCI_IMPORT_LINKAGE -DOCI_CHARSET_ANSI -L/usr/lib/oracle/11.2/client64/lib -lclntsh -L/usr/local/lib -locilib conn.c -o conn

    How would I go about uninstalling this? I tried following http://www.cyberciti.biz/faq/delete-uninstall-software-linux-commands/ but nothing was found on my disk using rpm -qa *oci* or yum list *oci*. Maybe since it wasn't installed with yum or rpm, I shouldn't expect either of those to find it. Are there generic instructions for uninstalling software on Linux that I could use, or do the instructions really depend on the specific software? Any help much appreciated.
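
    A sketch of the usual answer, assuming OCILIB's build system is standard autotools (its configure script suggests it is): the same build tree that ran "make install" can usually run "make uninstall", and if that tree is gone, a scratch install reveals the file list to delete by hand.

        # from the original (or re-extracted) source tree, configured the same way:
        ./configure --with-oracle-headers-path=/usr/include/oracle/11.2/client64 \
                    --with-oracle-lib-path=/usr/lib/oracle/11.2/client64/lib
        make uninstall                   # removes exactly what "make install" copied

        # if the original tree is gone: install into a scratch root to list files
        make install DESTDIR=/tmp/ocilib-root
        find /tmp/ocilib-root -type f    # remove the same paths under / manually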


  • Kickstart installation from USB -- Kickstart location

    - by dooffas
    After managing to get a Fedora ISO to rebuild successfully (for a USB stick) after adding a kickstart file (http://serverfault.com/questions/548405/), I now have an issue with locating the kickstart file on the USB media. When this is done from a CD-ROM, you can simply kickstart by adding this parameter to boot:

        linux ks=cdrom

    This will kickstart, providing the kickstart file is named ks.cfg and is in the root of the disk. Now, obviously this will be different for the USB drive, so from my research I assumed that this line would do the job:

        linux ks=hd:sdb1:/ks.cfg

    Evidently this does not work: I get an error informing me the drive is already mounted and cannot be remounted.

    Edit: actual error message:

        mount: /dev/sdb1 is already mounted or /run/install/tmpmnt0 busy
        Warning: Can't get kickstart from /dev/sdb1:/ks.cfg

    To test that the syntax was correct, I placed the kickstart file on another USB stick and used the same command style to grab ks.cfg from the new location:

        linux ks=hd:sdc1:/ks.cfg

    This does work (providing the USB sticks are mounted in order: boot - sdb1, kickstart - sdc1). The install will kickstart and complete with no issue. Obviously, having to use two pen drives is somewhat frustrating and unreliable. Is there a way around this?
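
    A sketch of one way around the enumeration problem, assuming a reasonably recent Anaconda (it accepts kickstart device specs by filesystem label): label the boot stick once, then refer to it by label so it no longer matters whether it comes up as sdb or sdc.

        # label the stick's filesystem (FAT shown; use e2label for ext2/3/4)
        dosfslabel /dev/sdb1 KSMEDIA

        # then boot with:
        #   linux ks=hd:LABEL=KSMEDIA:/ks.cfg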


  • Unable to start tomcat6: Java error (Exception in thread "main")

    - by Nyxynyx
    After installing tomcat6 on CentOS 6.3, I am unable to start the tomcat6 server.

        root@host [/var/log/tomcat6]# service tomcat6 start
        Starting tomcat6:                                          [  OK  ]

    Although it says OK, I can't access http://mydomain.com:8080. catalina.out contains:

        Exception in thread "main" java.lang.NullPointerException
           at java.lang.VMClassLoader.defineClass(libgcj.so.10)
           at java.lang.ClassLoader.defineClass(libgcj.so.10)
           at java.security.SecureClassLoader.defineClass(libgcj.so.10)
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Tomcat6 was installed using yum:

        yum -y install java tomcat6 tomcat6-webapps tomcat6-admin-webapps

    When I tried to find the version:

        tomcat6 version
        Exception in thread "main" java.lang.NoClassDefFoundError: org.apache.catalina.util.ServerInfo
           at gnu.java.lang.MainThread.run(libgcj.so.10)
        Caused by: java.lang.ClassNotFoundException: org.apache.catalina.util.ServerInfo not found in gnu.gcj.runtime.SystemClassLoader{urls=[], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
           at java.net.URLClassLoader.findClass(libgcj.so.10)
           at gnu.gcj.runtime.SystemClassLoader.findClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at java.lang.ClassLoader.loadClass(libgcj.so.10)
           at gnu.java.lang.MainThread.run(libgcj.so.10)

    Any idea what I should do? Thanks!
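
    A sketch of one likely diagnosis (an assumption, but the libgcj.so.10 frames in both traces point at it): "yum install java" can resolve to the old GCJ runtime, which cannot run Tomcat. Installing OpenJDK and switching the java alternative is a quick way to test this theory.

        yum -y install java-1.6.0-openjdk
        alternatives --config java     # pick the /usr/lib/jvm/...openjdk entry
        service tomcat6 restart
        tomcat6 version                # should now print real version info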


  • A separate user for each task?

    - by Mark Tomlin
    I just got a VPS server the other day. I'm new to server administration, but not that new to Ubuntu (11.04): I use it in my living room as the HTPC, and I had a previous VPS that I used on and off for a TeamSpeak server. This one I'm setting up for long-term use, so I would like to know the best practice when it comes to websites and tasks that I have the server performing. I understand that it could be beneficial to separate each website into its own user group or under its own username. I would set up nginx so that it could read all of the users' directories (and thus each website) but could not touch anything else. The same with TeamSpeak: should I make a user for TeamSpeak so that it operates within its own confined area, or is this overkill? I do have access to root on the server, and my current plan is to run about 4 websites and a TeamSpeak server. My stack is Linux (Ubuntu 11.04 LTS), nginx, and PHP 5.4.3 (using the PDO SQLite 3 built-in driver for the database). Should PHP have its own user group, or is it OK to place it in with nginx?
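
    If PHP runs through PHP-FPM (the post doesn't say; this sketch assumes it, with hypothetical names and Ubuntu's php5-fpm paths), a common isolation pattern is one Unix user and one FPM pool per site, with nginx running as its own user that can traverse the sites' directories but owns none of them:

        adduser --system --group --home /srv/site1 site1
        cat > /etc/php5/fpm/pool.d/site1.conf <<'EOF'
        [site1]
        user = site1
        group = site1
        listen = /var/run/php5-fpm-site1.sock
        listen.owner = www-data
        listen.group = www-data
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2
        EOF

    A PHP bug in one site then can't read another site's files, and the same logic extends naturally to a dedicated TeamSpeak user.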


  • supervisord launches with wrong setuid

    - by friendzis
    I am trying to test a pilot system with nginx connecting to a uWSGI-served application controlled by supervisord, running on Ubuntu Server. The application is written in Python with Flask in a virtualenv, although I'm not sure if that is relevant. To test the system I have created a simple hello world with Flask. I want nginx and uWSGI both to run as the www-data user. If I launch uWSGI "manually" from a root shell, I can see the uWSGI processes running as the appropriate user (www-data). However, if I let supervisor launch the application, something strange happens: the uWSGI processes run under my user (friendzis). Consequently, the socket file gets created under the wrong user and nginx cannot communicate with my application. Note: the Linux server runs as a Hyper-V VM under Windows Server 2008.

    Relevant configuration:

        [uwsgi]
        socket = /var/www/sockets/cowsay.sock
        chmod-socket = 666
        abstract-socket = false
        master = true
        workers = 2
        uid = www-data
        gid = www-data
        chdir = /var/www/cowsay/cowsay
        pp = /var/www/cowsay/cowsay
        pyhome = /var/www/cowsay
        module = cowsay
        callable = app

    supervisor:

        [program:cowsay]
        command = /var/www/cowsay/bin/uwsgi -s /var/www/sockets/cowsay.sock -w cowsay:app
        directory = /var/www/cowsay/cowsay
        user = www-data
        autostart = true
        autorestart = true
        stdout_logfile = /var/www/cowsay/log/supervisor.log
        redirect_stderr = true
        stopsignal = QUIT

    I'm sure I'm missing some minor detail, but I'm unable to spot it. I would appreciate any suggestions.
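
    Two things worth checking, sketched as assumptions rather than a diagnosis from the post: supervisord can only switch a program to user = www-data if supervisord itself runs as root (children of a supervisord started from a login shell keep that shell's uid), and the command line above never loads the [uwsgi] ini, so its uid/gid settings are simply not in effect for supervisor-launched workers.

        # is supervisord itself running as root?
        ps -o user= -C supervisord

        # pointing the program at the ini keeps both launch paths identical
        # (/etc/uwsgi/cowsay.ini is a hypothetical location for the [uwsgi]
        # section shown above):
        #   command = /var/www/cowsay/bin/uwsgi --ini /etc/uwsgi/cowsay.ini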


  • Wireless USB keyboard and mouse can wake system, but then receiver is inactive

    - by BlueMonkMN
    I have a Microsoft-brand USB device that acts as a receiver for a wireless Microsoft keyboard and a wireless mouse. When it's operating normally, there are LEDs on the device indicating Caps Lock, Num Lock and Function Lock, of which the latter two are usually lit. It is plugged into a Dell Inspiron 531 with Windows 7 32-bit running on an AMD Athlon 64 X2 Dual Core 5000+ processor. When the computer goes to sleep (the power indicator on the main box is flashing), I can wake it by moving the mouse. So far all is good.

    However, something changed in, I think, the past couple of weeks (I suspect due to a Microsoft driver update problem). Before the change, after waking the computer, everything would operate normally as far as I could tell. Now, after waking the computer, the receiver has no lights on, and the keyboard and mouse are completely unresponsive (which is odd, considering the mouse woke up the computer). There is a button on the receiver that's supposed to reset the wireless connection and flash the lights while it does so, but it has no effect in this state. It's like the receiver doesn't have power (but how would the system know I moved the mouse unless the power was on until it woke up?).

    I have checked the BIOS/CMOS settings, and did not see anything related to USB in the power management section. I have checked Windows 7 Device Manager and ensured that all the USB Root Hub devices have the setting unchecked for allowing USB power to be turned off. Like I said, this was working before, and the only thing I can think of that's changed is applying Windows Updates.


  • mysql is not connecting after data directory change

    - by user123827
    I've changed the data directory in /etc/my.cnf:

        datadir=/data/mysql
        socket=/data/mysql/mysql.sock

    I also moved the mysql folder from /var/lib/mysql/ to /data/mysql. Now when I connect to mysql I get the following error:

        [root@youradstats-copy mysql]# mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

    Also, when I look in /var/log/mysqld.log I see the following messages:

        InnoDB: Setting log file /data/mysql/ib_logfile0 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        120704  7:43:31  InnoDB: Log file /data/mysql/ib_logfile1 did not exist: new to be created
        InnoDB: Setting log file /data/mysql/ib_logfile1 size to 512 MB
        InnoDB: Database physically writes the file full: wait...
        InnoDB: Progress in MB: 100 200 300 400 500
        InnoDB: Cannot initialize created log files because
        InnoDB: data files are corrupt, or new data files were
        InnoDB: created when the database was started previous
        InnoDB: time but the database was not shut down
        InnoDB: normally after that.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' init function returned error.
        120704  7:43:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.

    I shut down mysql properly before making these changes and then started it properly, but I don't know why I'm getting these messages. Please help me solve this: I have changed the socket path in my.cnf, but the client is still pointing to the old path.
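
    A sketch of two separate fixes suggested by the symptoms (both are assumptions to verify): the mysql command-line client takes its socket path from a [client] section, not from [mysqld], which explains why it still looks under /var/lib/mysql; and the InnoDB complaint is typical of redo logs that no longer match the data files, which InnoDB recreates cleanly if they are moved aside while mysqld is stopped.

        # 1. give clients the new socket path
        cat >> /etc/my.cnf <<'EOF'
        [client]
        socket=/data/mysql/mysql.sock
        EOF

        # 2. with mysqld stopped, move the redo logs aside and restart
        service mysqld stop
        mkdir -p /root/innodb-log-backup
        mv /data/mysql/ib_logfile0 /data/mysql/ib_logfile1 /root/innodb-log-backup/
        service mysqld start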


  • CentOS: installing SVN complains I don't have Perl 1.17, but I have 5.8 installed

    - by Emerson
    I'm trying to install SVN on a CentOS virtual machine. I used the command that the CentOS wiki recommends (http://wiki.centos.org/HowTos/Subversion):

        yum install mod_dav_svn subversion

    It gives me a few errors:

        --> Finished Dependency Resolution
        mod_dav_svn-1.4.2-4.el5_3.1.i386 from base has depsolving problems
          --> Missing Dependency: httpd-mmn = 20051115 is needed by package mod_dav_svn-1.4.2-4.el5_3.1.i386 (base)
        subversion-1.4.2-4.el5_3.1.i386 from base has depsolving problems
          --> Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.i386 (base)
        Error: Missing Dependency: perl(URI) >= 1.17 is needed by package subversion-1.4.2-4.el5_3.1.i386 (base)
        Error: Missing Dependency: httpd-mmn = 20051115 is needed by package mod_dav_svn-1.4.2-4.el5_3.1.i386 (base)

    The thing is that I have Perl 5.8 installed:

        root@server [~]# rpm -q perl
        perl-5.8.8-32.el5_5.2

    I also don't know why it says httpd-mmn isn't installed; I definitely have Apache installed. From what I read at www.sitepoint.com/forums/showthread.php?t=485683, it seems I would need to recompile Apache. Any ideas?

    Update: I also tried to install Subversion via WHM (11.28.35) and it gives me the same error. By the way, WHM reports: CENTOS 5.5 i686 virtuozzo on server
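
    A sketch of the likely explanation (hedged, but consistent with the errors): "perl(URI) >= 1.17" is an RPM capability provided by the separate perl-URI package, not by the core perl package, and "httpd-mmn" is Apache's module magic number, which a cPanel/WHM machine's source-built Apache never registers with RPM -- which would also explain why WHM fails identically.

        # does anything provide perl(URI)? if not, install the module package
        rpm -q --whatprovides 'perl(URI)' || yum install perl-URI

        # check whether the RPM database knows an httpd that exports httpd-mmn
        rpm -q --provides httpd | grep httpd-mmn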


  • "ImportError: No module named flask" - Trouble with nginx + uWSGI + Flask in a virtualenv setup

    - by vjk2005
    I got nginx + uWSGI running on localhost inside a virtualenv with a simple hello world program, but I get this error when I replace the hello world with a simple Flask app:

        File "./wsgi_configuration_module.py", line 1, in <module>
          from flask import Flask
        ImportError: No module named flask
        unable to load app
        mountpoint

    Here's the Flask app (wsgi_configuration_module.py):

        from flask import Flask

        application = Flask(__name__)

        @application.route("/")
        def hello():
            return "hello world"

        if __name__ == "__main__":
            application.run()

    uWSGI config (app_conf.xml):

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <no-site>true</no-site>
        </uwsgi>

    nginx config:

        server {
            listen 80;
            server_name localhost;
            access_log /srv/www/labs/logs/access.log;
            error_log /srv/www/labs/logs/error.log;
            location / {
                include uwsgi_params;
                uwsgi_pass 127.0.0.1:9001;
            }
            location /static {
                root /srv/www/labs/public_html/static/;
                index index.html index.htm;
            }
        }

    The virtualenv is stored in ~/virtual_env, with Python 2.7 + nginx + uWSGI + Flask installed in an env called basic. Things I've tried to solve this: setting the --home (-H) option to my virtualenv folder ~/virtual_env while running uWSGI. Other info: I have the same setup working outside of a virtualenv; things go wrong only when I try to replicate it inside one. Where have I gone wrong?
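
    A sketch of two candidate culprits, both assumptions read off the config above: <no-site>true</no-site> suppresses Python's site-packages processing, which is exactly the mechanism that puts a virtualenv's packages (Flask included) on sys.path; and the virtualenv home must name the environment itself, not its parent directory. Something like:

        cat > app_conf.xml <<'EOF'
        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <!-- hypothetical absolute path to the env named "basic";
                 note that <no-site> is removed -->
            <home>/home/youruser/virtual_env/basic</home>
        </uwsgi>
        EOF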


  • Zero sized tar.gz file found inside a tar.gz file

    - by PavanM
    My current directory contains a single file, like this:

        $ ls -l
        -rw-r--r-- 1 root staff 8 May 28 09:10 pavan

    Now, I want to tar and gzip this file, like:

        $ tar -cvf - * 2>/dev/null | gzip -vf9 > pavan.tar.gz 2>/dev/null

    (I am aware I am creating the zipped file in the same directory as the original file.) When I run the above tar/gzip commands around 20 times, a few times I observe that the final tarred and zipped pavan.tar.gz contains a ZERO-sized pavan.tar.gz file. I am not sure where this zero-sized file is coming into the archive from.

    Note: I am NOT running tar/gzip commands on an already existing tar.gz file. I always make sure that the directory has only one file before running the commands.

    On googling, as described here, I suspected that the tar.gz being created was also part of the file being archived. But in my case, gzip is the one that's creating the final file, and by the time gzip runs, tar should be done tarring. This is happening on AIX, but I've used the Linux tag too to draw more attention, as I guess the problem is platform-independent.
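
    A sketch of a plausible mechanism (an assumption, but it fits the intermittency): the shell opens pavan.tar.gz -- creating it at size zero -- when it sets up the pipeline, and tar runs concurrently with gzip rather than before it, so tar's * glob can occasionally pick up the freshly created, still-empty output file. Two workarounds:

        # 1. exclude the output file explicitly (GNU tar syntax; AIX's native
        #    tar has no --exclude, in which case use workaround 2)
        tar -cvf - --exclude='pavan.tar.gz' . 2>/dev/null | gzip -9 > pavan.tar.gz

        # 2. write the archive outside the directory being archived
        tar -cvf - * 2>/dev/null | gzip -9 > /tmp/pavan.tar.gz && mv /tmp/pavan.tar.gz .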


  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    Hi. I'm attempting to set up an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file:

        service vnc
        {
            protocol        = tcp
            socket_type     = stream
            wait            = no
            user            = nobody
            server          = /usr/bin/Xvnc
            server_args     = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None
        }

    This kind of works... I can start multiple sessions, all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know that when you run vncserver from the command line it looks in your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstartup file. However, which xstartup file? The running user is "nobody", who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file, and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4, looked through all the "open" lines, and didn't get a clue as to what file it was trying to read for xstartup. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
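
    A sketch of the approach that usually fits this goal (assumptions flagged below): an Xvnc started from xinetd never reads any ~/.vnc/xstartup -- that file belongs to the vncserver wrapper script -- so instead of starting a session yourself, let the display manager provide one via XDMCP and have Xvnc query it.

        # enable XDMCP in gdm (file location as on Ubuntu 10.10; an assumption)
        cat >> /etc/gdm/custom.conf <<'EOF'
        [xdmcp]
        Enable=true
        EOF

        # in the xinetd entry, replace -broadcast with a query of the local gdm:
        #   server_args = -geometry 1000x700 -depth 24 -inetd -once -query localhost -securitytypes None

        service gdm restart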


  • SSL with nginx on subdomain not working

    - by peppergrower
    I'm using nginx to serve three sites: example1.com (which redirects to www.example1.com), example2.com (which redirects to www.example2.com), and a subdomain of example2.com, call it sub.example2.com. This all works fine without SSL. I recently got SSL certificates (from StartSSL): one for www.example1.com, one for www.example2.com, and one for sub.example2.com. I got them set up and everything seems to work (I'm using SNI to make all this work on a single IP address), except for sub.example2.com. I can still access it fine over non-SSL, but over SSL I just get a timeout. If I go directly to my server's IP address, I get served the SSL certificate for sub.example2.com, so I know nginx is loading the certificate properly... but somehow it doesn't seem to be listening for sub.example2.com on port 443, even though I told it to. I'm running nginx 1.4.2 on Debian 6 (squeeze); here's my config for sub.example2.com (the other domains have similar configs):

        server {
            server_name sub.example2.com;
            listen 80;
            listen 443 ssl;
            ssl_certificate /etc/nginx/ssl/sub.example2.com/server-unified.crt;
            ssl_certificate_key /etc/nginx/ssl/sub.example2.com/server.key;
            root /srv/www/sub.example2.com;
        }

    Does anything look amiss? What am I missing? I don't know if it matters, but StartSSL lists the base domain as a subject alternative name (SAN); I'm not sure if that would somehow pose problems if both subdomains list the same SAN.
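
    A diagnostic sketch rather than a fix (the IP below is a placeholder): a timeout, as opposed to a certificate warning, usually means the TCP connection or TLS handshake never completes for that name, which points away from the certificate and toward DNS or a middlebox. Comparing what each hostname resolves to, and handshaking against the raw IP with an explicit SNI name, separates the two.

        dig +short www.example2.com sub.example2.com   # do both point at this server?
        openssl s_client -connect 203.0.113.10:443 -servername sub.example2.com </dev/null
        netstat -tlnp | grep ':443'                    # on the server: is nginx bound?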


  • Different nginx rules based on referrer

    - by juana
    I'm using WordPress with WP Super Cache. I want visitors who come from Google (that includes all country-specific referrers like google.co.in, google.co.uk, etc.) to see uncached content. These are my nginx rules, which are not working the way I want:

        server {
            server_name website.com;
            location / {
                root /var/www/html/website.com;
                index index.php;
                if ($http_referer ~* (www.google.com|www.google.co) ) {
                    rewrite . /index.php break;
                }
                if (-f $request_filename) {
                    break;
                }
                set $supercache_file '';
                set $supercache_uri $request_uri;
                if ($request_method = POST) {
                    set $supercache_uri '';
                }
                if ($query_string) {
                    set $supercache_uri '';
                }
                if ($http_cookie ~* "comment_author_|wordpress|wp-postpass_" ) {
                    set $supercache_uri '';
                }
                if ($supercache_uri ~ ^(.+)$) {
                    set $supercache_file /wp-content/cache/supercache/$http_host/$1index.html;
                }
                if (-f $document_root$supercache_file) {
                    rewrite ^(.*)$ $supercache_file break;
                }
                if (!-e $request_filename) {
                    rewrite . /index.php last;
                }
            }
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /var/www/html/website.com$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    What should I do to achieve my goal?
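
    A sketch of one possible repair, based on an assumption about the failure mode: "rewrite . /index.php break;" stops rewriting inside location /, so the request is served as the literal file /index.php from the document root instead of reaching the PHP location block. Rather than rewriting early, let Google-referred visitors fall through the existing logic with the cache lookup disabled:

        # replace the first if-block with a referer check on $supercache_uri,
        # placed after "set $supercache_uri $request_uri;":
        #
        #     if ($http_referer ~* "www\.google\.(com|co)") {
        #         set $supercache_uri '';
        #     }
        #
        # with $supercache_uri emptied, the cached-file rewrite never fires and
        # the final "rewrite . /index.php last;" hands the request to WordPress
        # uncached via the PHP location.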


  • Postfix rewrite sender: why doesn't this work

    - by Nick Coleman
    I have server A with an IP address only and a dummy FQDN (on the basis that all machines should have a FQDN): pants.net.invalid. All mail is relayed through another server elsewhere, which works fine. On server A, Postfix rewrites the sender address with smtp_generic_maps = hash:/etc/postfix/generic. According to the rewriting manual at http://www.postfix.org/ADDRESS_REWRITING_README.html#remote, this should rewrite all outgoing external mail's sender address:

        $ cat /etc/postfix/generic
        @pants.net.invalid    [email protected]

    but it does not: postmap -q [email protected] returns nothing. This works:

        [email protected]    [email protected]

    It seems as though it is doing regex matching even though I specify type hash:. Clearly I am misunderstanding the manual. I don't want to use regex or pcre expressions, because there are only a couple of users (root and two others) and I don't want the overhead; I can specify the users exactly and it works. But I would like to know what I am misunderstanding, for future reference. Thanks.
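
    A sketch of the probable misunderstanding (hedged, but it matches the generic(5) man page): postmap -q performs a single literal key lookup, while Postfix's own generic rewriting tries several keys in turn -- user@domain, then user, then @domain -- so a catch-all @domain entry is honored by the daemon even though a full-address postmap query returns nothing.

        postmap -q "[email protected]" hash:/etc/postfix/generic     # literal key: no output
        postmap -q "@pants.net.invalid" hash:/etc/postfix/generic  # prints the mapping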


  • I get a Segmentation fault when doing apt-get install util-linux

    - by Adam
    I've found that a lot of upgrade commands and Apache on my system are failing with segmentation faults. I don't know if this is the main one, but a lot of packages depend on util-linux:

        root@myUbuntuHardyHeronServer:~# apt-get install util-linux
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages will be upgraded:
          util-linux
        1 upgraded, 0 newly installed, 0 to remove and 72 not upgraded.
        20 not fully installed or removed.
        Need to get 0B/441kB of archives.
        After this operation, 0B of additional disk space will be used.
        (Reading database ... 20547 files and directories currently installed.)
        Preparing to replace util-linux 2.13.1-5ubuntu2 (using .../util-linux_2.13.1-5ubuntu3.1_i386.deb) ...
        Unpacking replacement util-linux ...
        Segmentation fault
        dpkg: warning - old post-removal script returned error exit status 139
        dpkg - trying script from the new package instead ...
        Segmentation fault
        dpkg: error processing /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb (--unpack):
         subprocess new post-removal script returned error exit status 139
        Segmentation fault
        dpkg: error while cleaning up:
         subprocess post-removal script returned error exit status 139
        Errors were encountered while processing:
         /var/cache/apt/archives/util-linux_2.13.1-5ubuntu3.1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
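
    A diagnostic sketch (assumptions, not a diagnosis): segfaults across unrelated programs -- maintainer scripts here, Apache elsewhere -- tend to sit below the packages themselves, in bad RAM or corrupted binaries and libraries. Two cheap checks before fighting dpkg further:

        dmesg | grep -i segfault                 # kernel log names the faulting binary
        apt-get install debsums && debsums -s    # report files failing package checksums
        # and schedule a memtest86+ pass on the next reboot to rule out memory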


  • OSX: Mimic Ubuntu IP Masquerading via iptables with ipfw

    - by Dogbert
    Good day, I am attempting to replicate a setup I have between a router and an Ubuntu PC, and to have the same setup working on my MacBook (10.6, Snow Leopard). First, I have a router that has a USB port. When I plug it into my Ubuntu PC, it creates an RNDIS connection, allowing me to connect to the router over the USB cable via an IP connection. When I plug it in via USB, it gets assigned an IP address of 172.16.84.1, and a new adapter appears when I type ifconfig. I can then SSH into the device via ssh [email protected]. When I log in to the device, I flush the routes, then create the default route:

        admin@localhost> route -f
        admin@localhost> route add default 172.16.84.2

    Now, on my Ubuntu machine, I use iptables to enable IP masquerading:

        root@Valhalla> sudo iptables -t nat -A POSTROUTING -s 172.16.84.2 -j MASQUERADE

    Once this is all done, the router has internet access over the USB connection to my PC. I am trying to replicate this exact setup on my MacBook (Snow Leopard), but iptables does not exist for OS X; not even a MacPorts version exists. I have scoured other questions on Stack Overflow that cover the usage of the ipfw command, which apparently works as a drop-in replacement for iptables. However, the syntax is significantly different, and I'm pretty much lost. Does anyone with some experience with ipfw have suggestions on how I could accomplish this and create a NAT connection via IP masquerading, like I could with my Ubuntu PC? Thank you for your assistance.
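
    A sketch of the classic Snow Leopard equivalent (the interface name is an assumption; use the internet-facing one from ifconfig): ipfw itself only filters, while the masquerading is done by natd, fed by an ipfw divert rule, with IP forwarding switched on.

        sudo sysctl -w net.inet.ip.forwarding=1              # let the Mac route packets
        sudo natd -interface en1                             # NAT out the uplink (e.g. Wi-Fi)
        sudo ipfw add divert natd ip from any to any via en1 # push traffic through natd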


  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions at the FAQ many times: http://www.itefix.no/i2/node/37. It does not do what the title claims... It allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access, and I need my users to be sandboxed into only their home directories. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

        net localgroup sftp_users /ADD

    (Create a user group to hold all my SFTP users.)

        cacls c:\ /c /e /t /d sftp_users

    (For that group, deny access at the top level and all levels below.)

        cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users

    (Allow my user group access to the CopSSH installation directory and its subdirectories.)

    For each SFTP user, I create a new Windows user account, then:

        net localgroup sftp_users sftp_user_1 /add

    (Add my user to the group I've created.) Then I open the Activate User wizard for CopSSH, choosing the user and "/bin/sftponly", and leaving "Remove copssh home directory if it exists", "Create keys for public key authentication" and "Create link to user's real home directory" checked.

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying all users access to the home directory:

        cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, those permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
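
    A different tack, sketched on the assumption that your CopSSH release bundles OpenSSH 4.8 or newer (which supports ChrootDirectory): jail the whole group at the sshd level instead of fencing the filesystem with ACLs. The config path is a guess at CopSSH's layout.

        # append to CopSSH's etc/sshd_config (path is an assumption):
        cat >> "/cygdrive/c/Program Files/CopSSH/etc/sshd_config" <<'EOF'
        Match Group sftp_users
            ChrootDirectory %h
            ForceCommand internal-sftp
            AllowTcpForwarding no
        EOF
        # caveat: sshd requires %h and every directory above it to be owned by
        # root/SYSTEM and not group- or world-writable, or the chroot is refused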


  • Apache2 with lighttpd as proxy

    - by andrzejp
    Hi, I am using apache2 as my web server, and I would like to add lighttpd alongside it as a proxy target for static content. Unfortunately I cannot get lighttpd and apache2 set up correctly. (OS: Debian)

    Important parts of lighttpd.conf:

        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_proxy",
            "mod_status",
        )
        server.document-root = "/www/"
        server.port          = 82
        server.bind          = "localhost"
        $HTTP["remoteip"] =~ "127.0.0.1" {
            alias.url += (
                "/doc/"    => "/usr/share/doc/",
                "/images/" => "/usr/share/images/"
            )
            $HTTP["url"] =~ "^/doc/|^/images/" {
                dir-listing.activate = "enable"
            }
        }

    I would like to use lighttpd for only one site, operating as a virtual directory on apache2. Configuration of this virtual directory:

        ProxyRequests Off
        ProxyPreserveHost On
        ProxyPass /images http://0.0.0.0:82/
        ProxyPass /imagehosting http://0.0.0.0:82/
        ProxyPass /pictures http://0.0.0.0:82/
        ProxyPassReverse / http://0.0.0.0:82/
        ServerName MY_VALUES
        ServerAlias www.MY_VALUES
        UseCanonicalName Off
        DocumentRoot /www/MYAPP/forum
        <Directory "/www/MYAPP/forum">
            DirectoryIndex index.htm index.php
            AllowOverride None
        ...

    As you can see (or not ;)) my site is physically located at /www/MYAPP/forum, and I would like lighttpd to handle the folders:

        /www/MYAPP/forum/images
        /www/MYAPP/forum/imagehosting
        /www/MYAPP/forum/pictures

    and leave the rest (PHP scripts) to apache. After starting, lighttpd and apache2 both run, but no images from these locations show up. What is wrong?
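
    A sketch of one likely mismatch, read off the configs above (an inference, not a confirmed fix): every ProxyPass maps its folder to lighttpd's root "/", which is /www/, so a request for /images looks in /www/ instead of /www/MYAPP/forum/images. Carrying the full path through, and proxying to 127.0.0.1 rather than the non-routable 0.0.0.0, lines the two servers up:

        # in the Apache virtual directory:
        #   ProxyPass /images       http://127.0.0.1:82/MYAPP/forum/images/
        #   ProxyPass /imagehosting http://127.0.0.1:82/MYAPP/forum/imagehosting/
        #   ProxyPass /pictures     http://127.0.0.1:82/MYAPP/forum/pictures/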


  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
           Here is an example of a repository with a groups file. Note that the
           groups file should be in the same directory as the rpm packages
           (i.e. /path/to/rpms/comps.xml).

           createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    There is lots of output but no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot
        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
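
    One thing worth ruling out, sketched under an assumption about mrepo's layout (verify against your tree): clients fetch repodata from the generated repositories under wwwdir (/var/www/mrepo), not from srcdir, so a hand-run createrepo against /var/mrepo/.../Packages may never be seen by yum. Checking which repomd.xml the client downloads, and regenerating that served tree with the groups file, separates the two cases.

        # does the repomd the client uses mention group data at all?
        curl -s http://YOUR-MREPO-HOST/mrepo/SL6-x86_64/RPMS.os/repodata/repomd.xml | grep -c 'group'

        # if not, regenerate the served tree (path is a guess) and refresh a client
        createrepo -g /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml /var/www/mrepo/SL6-x86_64/RPMS.os/
        yum clean all && yum grouplist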


  • Windows 2008 R2 forgets static IP configuration after reboot

    - by Andrew
    I've got an issue where a Windows 2008 R2 Standard (SP1) server loses its static IP configuration upon a reboot. It's a sysprep'd image. The following steps reproduce the problem:

        1. Using the SAC, set the IP using 'i'
        2. Use the Win32 EnableStatic() method to set an IP (and then SetGateways()) through PowerShell
        3. Reboot

    The machine boots up with the following configuration:

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix  . :
           Link-local IPv6 Address . . . . . : [...]
           Autoconfiguration IPv4 Address. . : 169.254.152.31   (incorrect)
           Subnet Mask . . . . . . . . . . . : 255.255.0.0      (incorrect, was set to /24)
           Default Gateway . . . . . . . . . : 1.1.1.1          (correct)

    Occasionally, the gateway is also incorrect (0.0.0.0). The images have a script that runs 'netsh int ip reset' after sysprep finishes (before the reboot), so it appears that does not solve the issue (the problem also happens without this step). After the reboot, using 'i' on the SAC resolves the issue permanently. (But I'd like to know the root cause, as having to run 'i' again isn't ideal.)


  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old folders on my C: drive and a dual-boot option on boot up. I gave up, booted the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders and answered yes. All seemed OK until I booted up this morning and was given the choice of two versions of Vista: the first is the one that failed to install correctly, and the second is the old version. How can I get rid of the failed installation? I got rid of the bad boot entry via MSCONFIG. Here is my current situation: several hard drives installed; C: is my boot drive; a much larger drive (H:) stores most of my files. I found a subfolder named windows inside my C:\windows folder. Upon inspection I determined it to be older than the C:\windows folder, and therefore it must be the older, working version of the boot. I renamed the C:\windows folder to C:\windows.bad and moved the windows subfolder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy. How can I change it back to the C: copy, and can I delete the C:\windows.bad file set?


  • Why does my Internet slow to a crawl unless I reboot my router every few days?

    - by Lord Torgamus
    A few weeks ago, I noticed that my Internet connection had slowed down to a crawl. I waited a few days hoping it would go away on its own, but it didn't get better, so I asked this question about how to make it faster. The problem went away after I updated to the latest firmware, so I didn't follow up too carefully.

    But every few days since then, my Internet has slowed down again. Unlike before, all I have to do to fix it is open the router administration page and press the "Reboot" button. Nothing else seems to work, though I'm sure there are options I haven't tried. If it makes a difference, my girlfriend and I both transfer large amounts of data fairly routinely for school (videoconferencing, downloading entire recorded lectures).

    The router is a Cisco/Linksys 160N V3 that's about a year old. Most of the time, it deals with just two standard Windows 7 laptops. The only thing I came across while searching for answers/dupes was this question, which seems similar superficially, but probably doesn't have the same root issue. Anyways, it's not resolved. What could be causing these slowdowns, and how can I get rid of them?


  • ssh timeout issue connecting to an EC2 instance on OS X

    - by mamusr
    I am new to AWS, and not a networking expert, but curious to learn more. I created a VPC with a public subnet only. Then I created an EC2 instance using an Ubuntu 14.04 64-bit PV AMI (ami-e84d8480), as well as generating the key pair needed to connect to it through ssh. I followed Amazon's instructions to connect to an EC2 instance via ssh, which did not work. Here is my attempted input and debug log, running on OS X 10.9.4:

        user$ ssh -vvv -i key.pem [email protected]
        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 102: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to xxx.xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: connect to address xxx.xxx.xxx.xxx port 22: Operation timed out
        ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out

    To attempt to resolve the issue:

        1. I enabled the SSH port.
        2. I tried usernames other than ubuntu, like ec2-user and root.
        3. I initially set an inbound ssh rule in the security group to allow only my IP address; when that did not work, I changed it to allow any IP.

    But those actions did not fix the problem. Here are my guesses as to what I am missing:

        1. My /etc/ssh_config file may be preventing the connection.
        2. I may have missed an important networking detail when setting up the VPC.
        3. I do not have a public IP address specified for the instance; I am connecting through the private IP address.

    My questions for the community: am I going about it the wrong way by connecting to the instance through the private IP address? If so, do I need to specify a public IP address for it to connect, or is there some other method?
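
    A sketch confirming guess 3, since it is the most common cause (the IDs below are placeholders): a VPC instance's private address is not reachable from the internet at all. Reaching it from outside needs an internet gateway on the VPC, a default route to that gateway, and a public (Elastic) IP on the instance. Roughly, with the AWS CLI:

        aws ec2 create-internet-gateway                                    # note the igw id
        aws ec2 attach-internet-gateway --internet-gateway-id igw-XXXX --vpc-id vpc-XXXX
        aws ec2 create-route --route-table-id rtb-XXXX \
            --destination-cidr-block 0.0.0.0/0 --gateway-id igw-XXXX
        aws ec2 allocate-address --domain vpc                              # note the eipalloc id
        aws ec2 associate-address --instance-id i-XXXX --allocation-id eipalloc-XXXX

        # then connect to the new public address:
        ssh -i key.pem ubuntu@ELASTIC-IP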


  • How to enable synergy 24800 (or some other port) through firewalld

    - by ndasusers
    After upgrading to Fedora 18, Synergy, the keyboard-sharing system, was blocked by default. The culprit was firewalld, which happily ignored my previous settings made in the Fedora GUI, backed by iptables.

        ~]$ ps aux | grep firewall
        root      3222  0.0  1.2  22364 12336 ?   Ss   18:17   0:00 /usr/bin/python /usr/sbin/firewalld --nofork
        david     3783  0.0  0.0   4788   808 pts/0  S+   20:08   0:00 grep --color=auto firewall
        ~]$

    OK, so how do I get around this? I did sudo killall firewalld for several weeks, but that got annoying every time I rebooted, so it was time to look for some clues. There were several one-liners, but they did not work for me; they kept spitting out the help text. For example:

        ~]$ sudo firewall-cmd --zone=internal --add --port=24800/tcp
        [sudo] password for auser:
        option --add not a unique prefix

    Also, posts that claimed this command worked also stated it was temporary, unable to survive a reboot. I ended up adding a file to the config directory to be loaded on boot. Would anyone be able to have a look at that and see if I missed something? Though Synergy works, when I run the list command, I get no result:

        ~]$ sudo firewall-cmd --zone=internal --list-services
        ipp-client mdns dhcpv6-client ssh samba-client
        ~]$ sudo firewall-cmd --zone=internal --list-ports
        ~]$
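
    For reference, a sketch of the syntax those one-liners were reaching for: --add-port is a single option (no space before "port"), and --permanent is what makes a rule survive reboots, which would also replace the hand-written config file.

        sudo firewall-cmd --zone=internal --add-port=24800/tcp              # immediate, until reload
        sudo firewall-cmd --permanent --zone=internal --add-port=24800/tcp  # written to config
        sudo firewall-cmd --reload                                          # apply permanent config now
        sudo firewall-cmd --zone=internal --list-ports                      # should print 24800/tcp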

