Search Results

Search found 4864 results on 195 pages for 'resolv conf'.

  • Why is my nginx alias not working?

    - by Rob
    I'm trying to set up an alias so that when someone accesses /phpmyadmin/, nginx will serve it from /home/phpmyadmin/ rather than from the usual document root. However, every time I pull up the URL, it gives me a 404 on all items not served through fastcgi. fastcgi seems to be working fine, whereas the rest is not. strace is telling me it's trying to pull everything else from the usual document root, yet I can't figure out why. Can anyone provide some insight? Here is the relevant part of my config:

      location ~ ^/phpmyadmin/(.+\.php)$ {
          include fcgi.conf;
          fastcgi_index index.php;
          fastcgi_pass unix:/tmp/php-cgi.sock;
          fastcgi_param SCRIPT_FILENAME /home$fastcgi_script_name;
      }
      location /phpmyadmin {
          alias /home/phpmyadmin/;
      }
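
    One thing worth checking (a sketch, not a confirmed fix for this exact setup): alias is easy to get wrong when the location prefix and the alias path differ in their trailing slashes, and since /home/phpmyadmin/ simply mirrors the URI path under /home, using root avoids the path-mapping question entirely:

      # assumes the static phpMyAdmin files live in /home/phpmyadmin/
      location /phpmyadmin/ {
          root /home;    # /phpmyadmin/foo.css is looked up as /home/phpmyadmin/foo.css
      }

    The regex .php location can stay as it is, since its SCRIPT_FILENAME already maps to /home$fastcgi_script_name.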

  • I can connect to the Samba server but cannot access shares

    - by jlego
    I'm having trouble getting Samba shares to be accessible. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a Windows 7 PC and a Mac running OS X Snow Leopard. I've set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora and connected them as Samba users (with the same usernames and passwords as on the Windows and Mac machines). The workgroup name is the same as the Windows workgroup, and authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet interface as a trusted port in the firewall.

    Both the Windows and Mac machines can connect to the server and view the shares; however, when trying to access the shares, Windows throws error 0x80070035: "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for a username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines behave similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

      [global]
      workgroup = workgroup
      server string = Server
      log file = /var/log/samba/log.%m
      max log size = 50
      security = user
      load printers = yes
      cups options = raw
      printcap name = lpstat
      printing = cups

      [homes]
      comment = Home Directories
      browseable = no
      writable = yes

      [printers]
      comment = All Printers
      path = /var/spool/samba
      browseable = yes
      printable = yes

      [FileServ]
      comment = FileShare
      path = /media/FileServ
      read only = no
      browseable = yes
      valid users = user1, user2

      [webdev]
      comment = Web development
      path = /var/www/html/webdev
      read only = no
      browseable = yes
      valid users = user1

    How do I get Samba sharing working?

    UPDATE: I figured it out; it was because I was sharing a second hard drive. See the checked answer below.

    Speculation 1: Before this box I had another box with the same version of Fedora (16) installed and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course), and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs, though). Could either one of these be affecting Samba?

    Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine); 2) making a share that only included one of the folders on the disk instead of the entire disk, again with my user as the owner. Both tests failed, giving me the same errors regarding the network address.

    Speculation 3: Whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password. When I enter the correct credentials I get an access denied message. However, I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem.

    Speculation 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first hard drive (the one Fedora is installed on) and I can access that fine from Windows. However, when I try to share any data on the second disk, I am receiving the same error. This I believe is the problem; I think I need to change some things in fstab or fdisk or something.

    Speculation 5: So in fstab I mapped the drive to automount in a folder, which works correctly. I also added the samba_share_t SELinux label to the mountpoint directory, which now allows me to access the shares on the Windows machine; however, I cannot see any of the files in the directory on the Windows machine. (They are there; I can see them in the Fedora file browser locally.)
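
    Since Speculation 5 points at SELinux labelling on the second disk, here is a sketch of how that label is usually applied recursively and made persistent (paths match the FileServ share above; adjust if the mountpoint differs):

      # label the whole mounted volume, not just the mountpoint directory
      semanage fcontext -a -t samba_share_t "/media/FileServ(/.*)?"
      restorecon -R -v /media/FileServ

    Without the recursive restorecon, only the top-level directory carries samba_share_t, which would explain seeing the share but none of the files inside it.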

  • Yum install error (mysql-devel) depsolve

    - by Pasta
    I get the following error on yum install mysql-devel. Can anyone help? I don't have this in my /etc/yum.conf exclude list.

      --> Finished Dependency Resolution
      mysql-server-5.0.45-7.el5.x86_64 from installed has depsolving problems
        --> Missing Dependency: mysql = 5.0.45-7.el5 is needed by package mysql-server-5.0.45-7.el5.x86_64 (installed)
      Error: Missing Dependency: mysql = 5.0.45-7.el5 is needed by package mysql-server-5.0.45-7.el5.x86_64 (installed)
       You could try using --skip-broken to work around the problem
       You could try running: package-cleanup --problems
                               package-cleanup --dupes
                               rpm -Va --nofiles --nodigest

    Please help!
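
    This usually happens when the enabled repos carry a newer mysql than the installed 5.0.45-7.el5, so mysql-devel cannot be pulled in at the old version. Two things commonly tried (a sketch, not a guaranteed fix for this box):

      # update the whole MySQL stack in one transaction so the versions stay matched
      yum update mysql mysql-server mysql-devel

      # or, if the stack must stay at 5.0.45-7.el5, ask for the matching devel package explicitly
      yum install mysql-devel-5.0.45-7.el5

    The second form only works if a repository (or local media) still provides that exact release.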

  • some issues with removing www and redirecting index.html

    - by MariaKeys
    Hello fellas, I am having trouble doing what I want to do with the following setup. I would like to remove all WWW, and also forward index.html to the root of its directory. I would like this to apply to all domains, so I am doing it inside an httpd.conf directory directive. I tried many variations with no success. The latest version is below (domains are inside /var/www/html, in separate directories).

      http://www.example.com/index.html           > http://example.com
      http://www.example.com/someother/index.html > http://example.com/someother/

    Thanks, Maria

      <Directory "/var/www/html/*/">
          RewriteEngine on
          RewriteBase /
          RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
          RewriteRule ^(.*)$ http://%1/$1 [R=301,L]
          #RewriteCond %{REQUEST_URI} /^index\.html/
          RewriteRule ^(.*)index\.html$ / [R=301,L]
          Options ExecCGI Includes FollowSymLinks
          AllowOverride AuthConfig
          AllowOverride All
          Order allow,deny
          Allow from all
      </Directory>
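
    As written, the second rule redirects every .../index.html to the site root and throws the path away. A sketch of the shape that keeps the directory part (same per-directory context as above):

      # capture everything before index.html and redirect to it
      RewriteRule ^(.*)index\.html$ /$1 [R=301,L]

    With RewriteBase /, the captured prefix ("" or "someother/") is re-attached after the slash, which matches the two example URLs above.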

  • asterisk outbound caller id

    - by MCS
    I'm trying to set the caller ID number for an outbound call. My Asterisk .call file looks like this:

      Channel: SIP/flowroute/1234567890
      Context: test
      Extension: 1234567890
      Priority: 1

    Here's my extensions.conf:

      [test]
      exten => _1NXXXXXXXXX,1,Set(CALLERID(num)=8005552222)
      exten => _1NXXXXXXXXX,n,Dial(SIP/${EXTEN}@flowroute)
      exten => _1NXXXXXXXXX,n,Playback(hello-world)

    When I receive the call, the caller ID number is 1-206-445-6979, even though the CDR log has both src and clid set to 8005552222. I'm using Flowroute as my carrier. Is there something wrong on their side?
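
    One detail worth noting: with a .call file, the Channel: leg (SIP/flowroute/1234567890) is dialled first, and only after it answers is it handed to the [test] context, so Set(CALLERID(num)=...) in the dialplan never touches the leg that actually rings the destination. Call files accept a caller ID line of their own; a sketch, reusing the example number from above:

      Channel: SIP/flowroute/1234567890
      Callerid: "My Company" <8005552222>
      Context: test
      Extension: 1234567890
      Priority: 1

    Whether Flowroute honours an arbitrary caller ID is still up to them; many carriers only pass numbers you have verified with them.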

  • One share that includes folders with different permissions

    - by saber
    Hi all. Ubuntu 8.04 / Samba. I want the share \\my_host to open onto a directory that contains subdirectories with different rights (e.g. a user coming from a given IP is allowed to write to only one of the directories). For example:

      \\my_host\folder
        --\folder1   - user_ip1 can write to this folder
        --\folder2   - user_ip2 ...
        --\folder3

    My smb.conf:

      [filials]
      path = /var/filials
      comment = No comment
      ;admin users = nobody
      ;directory mask = 755
      ;read only = no
      available = yes
      browseable = yes
      writable = yes
      guest ok = yes
      public = yes
      printable = no
      share modes = yes
      ;locking = yes

      [filials\user1]
      path = /var/filials/user1
      comment = No comment
      ;admin users = nobody
      ;directory mask = 755
      ;read only = no
      available = yes
      browseable = yes
      writable = yes
      guest ok = yes
      public = yes
      printable = no
      share modes = yes
      ;locking = yes

    I wrote [filials\user1] hoping that the user1 directory would show up inside the filials share -- is that the right way to do it?
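
    A share name like [filials\user1] does not nest under [filials]; Samba treats every [section] as an independent share. A sketch of the usual way to get per-directory, per-host write rights under one parent tree (the IP address is a placeholder):

      [filials]
      path = /var/filials
      browseable = yes
      writable = no          ; parent tree stays read-only
      guest ok = yes

      [folder1]
      path = /var/filials/folder1
      browseable = yes
      writable = yes
      guest ok = yes
      hosts allow = 192.168.1.101   ; only user_ip1's machine may connect to this share

    The filesystem permissions on /var/filials/folder1 still have to allow the write; Samba options only narrow what the filesystem already permits.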

  • Squid3 not working. Access denied.

    - by Nitish
    I installed Squid3 on a Linux machine with two ethernet interfaces (eth0 and eth1). I used the default settings in the squid.conf file and uncommented the two lines "acl localnet src 192.168.0.0/16" and "http_access allow localnet". eth0 is connected to a router, which provides Internet access; it is assigned the IP 192.168.1.2 by the router. I manually configured eth1 to have the IP address 192.168.5.1. It is connected to a switch, and systems with IP addresses 192.168.5.x are connected to this switch. I ran these two commands for NAT:

      iptables -t nat -A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.5.1:3128
      iptables -t nat -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128

    But when I try to access the Internet from a system having IP 192.168.5.2 through the proxy, I get an error that says "Access denied". What is wrong with my configuration?
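
    When port-80 traffic is NATed to Squid like this, Squid has to be told that the port receives intercepted traffic; otherwise it sees what looks like a malformed proxy request and answers with an access-denied page. A sketch for squid.conf (the keyword is "intercept" on Squid 3.1+ and "transparent" on older 3.0 builds):

      http_port 3128              # normal proxy port for explicitly configured browsers
      http_port 3129 intercept    # separate port for NATed traffic

    with the DNAT/REDIRECT rules pointed at 3129 instead of 3128. IP forwarding (net.ipv4.ip_forward=1) also needs to be enabled so the 192.168.5.x clients' non-HTTP traffic can leave through eth0.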

  • Nginx - Address already in use

    - by user2426362
    If I run service nginx restart I get this error:

      root@user /etc/nginx/sites-enabled # service nginx restart
      Restarting nginx: nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
      nginx: [emerg] still could not bind()
      nginx.

    How do I fix it? I also have an Apache configuration running on port 80.
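
    Since the question already mentions Apache on port 80, the bind failure is almost certainly that conflict: only one process can listen on 0.0.0.0:80. A sketch of how to confirm and resolve it (Debian/Ubuntu-style service names assumed):

      # see which process currently owns port 80
      netstat -tlnp | grep ':80 '

      # either stop Apache and let nginx have the port...
      service apache2 stop

      # ...or move one of them, e.g. change the nginx server blocks to
      #   listen 8080;

    Once the port is free, service nginx restart should bind cleanly.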

  • searchd under runit continues writing to runit's log

    - by Eugene
    searchd (Sphinx) run file:

      #!/bin/sh
      set -e
      APP_PATH=/srv/application
      TARGET_USER=user
      exec chpst -u $TARGET_USER /usr/bin/searchd --pidfile --nodetach --config $APP_PATH/current/config/production.sphinx.conf

    tail /var/log/sphinx/current:

      2014-06-07_18:13:56.87885 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:57.13740 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:57.88113 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:57.89167 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:59.75555 precached 9 indexes in 0.497 sec
      2014-06-07_18:13:59.81554 precached 9 indexes in 0.497 sec
      2014-06-07_18:14:00.33466 precached 9 indexes in 0.497 sec
      ... it continues to write the same line until sv stop sphinx ...

    Everything works fine; searchd starts and responds to queries. But how do I make the logs less repetitive? When I start Sphinx manually it prints the "precached 9 indexes" line just once.

  • Setup IPv4 local on IPv6 VPS

    - by A.D.
    I have a dedicated server running multiple IPv6-only OpenVZ containers. I want them to be able to communicate with the IPv4 internet, but I realized that isn't going to be possible with IPv6 only, so they need an IPv4 address as well. I'm not sure a local address will work for it, but it probably should. I added 169.254.1.100 in the container .conf file, but when I try to start it, I get this:

      Adding IP address(es): (the IPv6 address) 169.254.1.100
      arpsend: 169.254.1.100 is detected on another computer : 00:04:9b:f2:b0:00
      vps-net_add WARNING: arpsend -c 1 -w 1 -D -e 169.254.1.100 eth0 FAILED

    I did a lot of research, and searched Server Fault before posting this, but found nothing relating to this.
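
    169.254.x.x is the link-local range, and here it collides with something else on the LAN, which is exactly what the arpsend check is complaining about. The more common pattern for giving IPv6-only containers a path to the IPv4 internet is a private RFC 1918 address on the container plus NAT on the hardware node; a sketch (container ID 101 and the 192.168.100.0/24 range are placeholders):

      # on the hardware node
      vzctl set 101 --ipadd 192.168.100.10 --save
      iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
      echo 1 > /proc/sys/net/ipv4/ip_forward

    The container then reaches IPv4 hosts through the node's public IPv4 address, without needing a routable IPv4 address of its own.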

  • Wamp server won't run [closed]

    - by Alegro
    Win XP SP3. I installed WAMP 2.2 and after starting it was always orange and offline. Clicking on "Put Online" I got the error: "wampserver aestan tray menu - could not execute menu item (internal error)...". Somewhere I found the advice to change the httpd.conf file (Listen 80 to Listen 8080). Now, hovering over the tray icon shows that the server is online, but it is still orange, and clicking on localhost shows: "Firefox can't establish a connection to the server at localhost." Skype is not running, and in Skype's options "Use port 80 and 443 as alternatives for incoming connections" is unchecked. A couple of months ago I was able to run WAMP normally. Could someone help, please?
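
    An orange tray icon usually means one of WAMP's services (typically Apache) failed to start, most often because something else already owns the port. A sketch of how that is usually tracked down on Windows XP (run in cmd.exe; the PID comes from the first command):

      netstat -ano | findstr :80
      tasklist /FI "PID eq 1234"

    Common culprits are IIS, another web server, or a half-started earlier Apache. Note that after changing httpd.conf to Listen 8080, plain "localhost" will not respond; the test URL becomes http://localhost:8080/.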

  • htaccess not found

    - by clarkk
    I have installed an Apache 2 server (from Webmin) on Debian 6. I have set up a virtual host db.domain.com on the server, which works fine, but .htaccess doesn't work if you access the server by IP address, and the directory is listed if no index.php is found.

      db.domain.com   -> 403 forbidden
      xxx.xxx.xxx.xxx -> gets access to the server

    Why is .htaccess ignored when you access the server by its IP address?

    httpd.conf:

      <Directory *>
          Options -Indexes FollowSymLinks
      </Directory>

      <VirtualHost *:80>
          ServerName db.domain.com
          DocumentRoot /var/www
      </VirtualHost>

    .htaccess:

      order deny,allow
      deny from all
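
    Two things commonly behind this (a sketch, since the full Apache config is not shown): requests that arrive by IP do not match ServerName db.domain.com, so they fall through to whichever virtual host is defined first, which may use a different DocumentRoot with its own Options and AllowOverride; and .htaccess files are only consulted where AllowOverride permits it, which Debian's stock site config sets to None for /var/www. Something along these lines usually makes the override apply everywhere under the document root:

      <Directory /var/www>
          Options -Indexes +FollowSymLinks
          AllowOverride All
      </Directory>

    A catch-all virtual host listed first (or the default site pointed at the same rules) would then handle bare-IP requests consistently.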

  • Fresh install of nginx causes browser to download index.html instead of opening it

    - by 010110110101
    When I view http://localhost:90 in Chrome, the file is downloaded instead of displayed. This question has been asked a lot of times on SO, but about index.php files; my problem is a plain-jane HTML file, not a PHP file, and that hasn't been asked yet. I was hoping the solution would be similar, but I haven't been able to figure it out. Here's my example.com.conf:

      server {
          server_name localhost;
          listen 90;
          root /var/www/example.com/html
          index index.html
          location / {
              try_file $uri $uri/ =404;
          }
      }

    My index.html file contains only two words, no markup:

      Hello World

    I think it's the mime.types. The mime.types file has the entry for html in it. This is a fresh nginx install, and nginx -t reports "test is successful".
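
    A few things stand out in that server block (a sketch of how it is usually written, not a guarantee this is the whole story): the root and index lines are missing their trailing semicolons, the directive is try_files (plural), and mime.types only takes effect if it is actually included; otherwise nginx falls back to default_type, commonly application/octet-stream, which makes browsers download the file:

      server {
          listen 90;
          server_name localhost;
          root /var/www/example.com/html;
          index index.html;

          include /etc/nginx/mime.types;   # usually already pulled in from the http block in nginx.conf
          default_type text/html;          # fallback for responses without a known extension

          location / {
              try_files $uri $uri/ =404;
          }
      }

    If nginx -t passes while the semicolons are missing, the file being edited is probably not the one nginx actually loads; it is worth checking which conf files nginx.conf includes.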

  • Synergy configuration with multiple X screens

    - by Rob Drimmie
    I'm having a problem figuring out how to configure Synergy to behave on a system with multiple X screens. On my desktop I am running Ubuntu 10.04 LTS. I have two monitors, set up as separate X screens, by preference as well as to enable me to rotate the left-hand monitor. I also have a laptop, which is on the desk in front of me, lower than the other two monitors. I have a very simple synergy.conf:

      section: screens
          desktop:
          laptop.local:
      end

      section: links
          desktop:
              down = laptop
          laptop:
              up = desktop
      end

    It works, but on the desktop only on whichever screen I run synergys from in the terminal (I haven't set it up to run at startup yet because I've been playing with the configuration). I can't find any information on how to reference multiple screens on one system, and would appreciate any help.

  • Setting up Django application on lighttpd behind apache reverse proxy

    - by ml256
    I have a Django app at http://some_other_example.com (it will be behind a firewall) running on a lighttpd server with FastCGI. I need to make it available under http://example.com/myapp. It works fine except for redirects: when I log in from http://example.com/myapp/login it redirects me to http://example.com instead of http://example.com/myapp. When logging in from http://some_other_example.com/login it is OK. My configuration:

    apache2.conf at example.com:

      ProxyPass /myapp http://some_other_example.com
      ProxyPassReverse /myapp http://some_other_example.com
      ProxyHTMLURLMap http://some_other_example.com /myapp

      <Location /myapp>
          SetOutputFilter proxy-html
          ProxyHTMLExtended On
          ProxyHTMLURLMap / /myapp/
      </Location>

    In settings.py I added USE_X_FORWARDED_HOST = True but it didn't help.
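
    Django itself has no way of knowing about the /myapp prefix that only exists on the Apache side, so redirects it generates point at the bare root. One setting that is usually involved here (a sketch; the value must match the ProxyPass prefix above):

      # settings.py
      FORCE_SCRIPT_NAME = '/myapp'

    so that URLs Django builds carry the prefix. ProxyPassReverse also only rewrites Location headers that match the given backend URL exactly, so redirects pointing at any other host pass through untouched.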

  • Right solution for /etc/hosts file reset on reboot

    - by user846226
    I've just installed Funtoo, and after setting the FQDN in /etc/conf.d/hostname I noticed that when I set a list of aliases in the /etc/hosts file, it gets overwritten on each reboot. Someone suggested setting the aliases on the 127.0.0.2 IP address, but that's not a valid solution for me. Could someone point me to the file where I should place entries like

      127.0.0.1 local.foo
      127.0.0.1 local.bar

    in order to make them persist in /etc/hosts after rebooting? Thanks! PS: I think openresolv could be the one that is overwriting the file.

  • How to allow Hudson build URL through Nginx auth_basic?

    - by rodreegez
    Hi, I have Hudson running and made available to the world via nginx. I have protected Hudson with nginx's auth_basic and that works great. The trouble is, I want to allow unauthenticated requests to the build URL, i.e. /job/<job_name>/build. Currently I have this in my nginx conf:

      upstream hudson {
          server 127.0.0.1:8888;
      }

      server {
          server_name ci.myurl.com;
          root /var/lib/hudson;

          location / {
              proxy_pass http://hudson/;
              auth_basic "Super secret stuff";
              auth_basic_user_file /var/opt/hudson/htpasswd;
          }

          location ~ \/build {
              auth_basic off;
          }
      }

    I can't get that second location to allow unauthenticated requests. I have tried various combinations of

      location ~ /job/(.*)/biuld { }
      location ^~ \/build { }
      location ~ \/job\/(.*)\/build { }

    etc... Maddening! Can anyone point me in the right direction? Thanks, Ad.
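
    One detail that bites here: each location block is self-contained, so the ~ \/build block above turns auth off but has no proxy_pass of its own, which means matching requests are served from root /var/lib/hudson instead of being forwarded to Hudson. A sketch of the shape that usually works (regex and upstream name taken from the config above):

      location ~ ^/job/[^/]+/build$ {
          auth_basic off;
          proxy_pass http://hudson;
      }

    A matching regex location is chosen ahead of the plain prefix location /, so this should catch the build URL and still proxy it, just without the password prompt.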

  • Rewrite the Base URL with mod_rewrite

    - by rotespferd
    My domain example.com points to the directory public_html. In the directory public_html/php is my index.php file. Now I want the URL example.com to point to public_html/php/index.php. I must do this with mod_rewrite, because I have no access to httpd.conf to do something with Alias or DocumentRoot. In the directory public_html is my .htaccess file with the following content:

      RewriteEngine on
      RewriteCond %{HTTP_HOST} exaple.com$
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ /php/index.php [L,QSA]

    This does half of the job: when I enter something like example.com/s in my browser it points to public_html/php/index.php, as I want it to. But when I just enter example.com it points to public_html. What can I do to fix this?
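
    The bare URL example.com maps to the public_html directory itself, so the !-d condition stops the rule from firing and Apache serves the directory (or its own index) directly. Two common ways around it from a .htaccess file, sketched rather than tested against this exact setup:

      # option 1: let mod_dir pick the nested index
      DirectoryIndex /php/index.php

      # option 2: add a rule that only matches the empty (root) path
      RewriteRule ^$ /php/index.php [L,QSA]

    Option 2 keeps everything inside mod_rewrite, which fits the constraint of having no httpd.conf access.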

  • Can't get samba to see other PCs in Kubuntu 10.04 (Lucid)

    - by MaurizioPz
    I'm new to networking. I'm trying to share a folder between two computers (both have Kubuntu 10.04 installed). I'm able to share a folder with Samba and can see that folder through Samba on the same computer, but if I go to the other PC I can't see the first one. Both PCs are in the "workgroup" workgroup. I've tried disabling the firewall with Firestarter. Can somebody help me? Thanks. Update: here's my samba.conf: http://pastebin.com/SpuES468

  • Proftpd: How to set default root to a user's home directory without jailing the user?

    - by sacamano
    Hi there. I've installed ProFTPD on my Debian box but I'm having some trouble with the configuration. In my proftpd.conf I've added:

      DefaultRoot ~ !ftp_special

    This works fine in that all users except members of ftp_special are unable to navigate outside of their home folder. However, I want users that are members of ftp_special to land in a special home folder when logging on to the FTP server, but at the same time I want them to be able to navigate the entire server. Right now, if a user that is a member of ftp_special logs on, their entry point is the root ( / ). Thanks in advance.
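
    ProFTPD separates "where the user is jailed" (DefaultRoot) from "where the session starts" (DefaultChdir), and the latter also takes a group expression, so a sketch along these lines may do what is wanted (the directory path is a placeholder):

      DefaultRoot  ~ !ftp_special
      DefaultChdir /srv/ftp-special-home ftp_special

    Members of ftp_special then start in /srv/ftp-special-home but, not being chrooted, can still cd anywhere the filesystem permissions allow.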

  • I cannot cd into "LaunchAgents" on my MacBook

    - by why
    After installing MongoDB on my MacBook Pro, it tells me:

      If this is your first install, automatically load on login with:
        cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents
        launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
      If this is an upgrade and you already have the org.mongodb.mongod.plist loaded:
        launchctl unload -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
        cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents
        launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
      Or start it manually:
        mongod run --config /usr/local/Cellar/mongodb/1.6.3-x86_64/mongod.conf

    But after I copy org.mongodb.mongod.plist to ~/Library/LaunchAgents, it tells me:

      launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
      launchctl: Couldn't stat("/Users/liuqiang/Library/LaunchAgents/org.mongodb.mongod.plist"): Not a directory

    Also, I cannot cd into "~/Library/LaunchAgents", but I can ls the directory! "~/Library/LaunchAgents" is a strange directory on a Mac.
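
    "Not a directory" from stat(), together with cd failing while ls works, suggests ~/Library/LaunchAgents exists as a regular file rather than a directory, in which case the earlier cp created a file of that name and ls simply lists it. A way to check and repair, sketched for this path:

      ls -ld ~/Library/LaunchAgents        # a leading '-' instead of 'd' means it is a plain file
      mv ~/Library/LaunchAgents ~/LaunchAgents.bak
      mkdir ~/Library/LaunchAgents
      cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents/
      launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist

    The moved-aside file can be deleted once the plist loads cleanly.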

  • Sudo asks for password twice with LDAP authentication

    - by Gnudiff
    I have an Ubuntu 8.04 LTS machine and a Windows 2003 AD domain. I have successfully set it up so that I can log in with a domain username and password, using the domain prefix, like "domain+username". Upon login to the machine it all works on the first try; however, for some reason when I sudo as my logged-in user, it asks for the password twice every time. It accepts the password the second time, but not the first. Once or twice I might think I just kept entering the wrong password the first time, but this is what happens always. Any ideas what's wrong? pam.conf is empty, pam.d/sudo only includes common-auth and common-account, and common-auth is:

      auth sufficient pam_unix.so nullok_secure
      auth sufficient pam_winbind.so
      auth requisite  pam_deny.so
      auth required   pam_permit.so
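
    A plausible reading of that stack: pam_unix prompts first, fails for a domain account that has no local password, and then pam_winbind prompts again with its own "Password:". The usual arrangement is to let the second module reuse the password already typed; a sketch of common-auth along those lines:

      auth sufficient pam_unix.so nullok_secure
      auth sufficient pam_winbind.so use_first_pass
      auth requisite  pam_deny.so
      auth required   pam_permit.so

    With use_first_pass, pam_winbind takes the password pam_unix collected instead of asking again, so domain users get a single prompt and local users are still handled by pam_unix as before.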

  • How to "debug" a keyboard in Linux? Like pressing a key and seeing a code in a terminal.

    - by Somebody still uses you MS-DOS
    I didn't get an answer to my problem about adding additional keyboards in my Ubuntu 10.04. The question mark is not working on my keyboard; it only works using the Alt Gr key + W. So I don't know if this is a problem with Ubuntu or with VirtualBox itself (I'm running it inside a VM). I would like to debug this problem. The keyboard is plugged in, so when I press a key I believe something is being sent to my operating system, some code, I don't know. I would like to dig into this problem, find some damn key code and find some damn *.conf file, and manually fix my problem. So, does an application like this exist in Linux?
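
    Several standard tools do exactly this, each at a different layer (a quick sketch; evtest may need installing from the repos):

      xev                 # in X: prints the keycode and keysym for every key pressed in its window
      sudo showkey -k     # on a text console: raw kernel keycodes
      sudo evtest         # reads /dev/input/eventN directly, handy for comparing host vs. VM

    Comparing what xev reports inside the VirtualBox guest with what the host sees usually makes it clear whether the guest's X keymap or the hypervisor's key translation is dropping the key.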

  • Yum through http proxy

    - by eodchop
    I have several Fedora 13 servers that have to connect through an HTTP proxy for yum updates. All port 80 traffic has to be routed through this proxy. I have set up the proxy server in the network settings GUI, and I can browse the internet just fine. I have also set up my proxy information in /etc/yum.conf as follows:

      proxy=http:proxy.largecorp.corp/accelerated_pac_base.pac
      proxy_user=user
      proxy_password=password

    I then added export HTTP_PROXY="http:proxy.largecorp.corp/accelerated_pac_base.pac" to /etc/bashrc and sourced the file. When I run yum update:

      Loaded plugins: presto, refresh-packagekit
      Error: Cannot retrieve repository metadata (repomd.xml) for repository: fedora.
      Please verify its path and try again.

    All of the repo URLs are the defaults, as this is a fresh install.
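
    One thing stands out: the proxy= value points at a .pac file. A PAC file is a piece of JavaScript that browsers evaluate to choose a proxy; yum (and HTTP_PROXY) cannot use it and need the actual proxy host and port. A sketch of the usual form, with host and port as placeholders to be read out of the PAC file or obtained from the network team:

      # /etc/yum.conf
      proxy=http://proxy.largecorp.corp:8080
      proxy_username=user
      proxy_password=password

    Note also that yum's option is proxy_username (not proxy_user), and the URL needs the "//" after "http:".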

  • CentOS Can't connect to FTP

    - by Steven
    I'm having trouble connecting to my FTP server. Here's what it says:

      Status:   Connected
      Status:   Retrieving directory listing...
      Command:  PWD
      Response: 257 "/home/sxxxn"
      Command:  TYPE I
      Response: 200 Switching to Binary mode.
      Command:  PASV
      Error:    Connection timed out
      Error:    Failed to retrieve directory listing

    My vsftpd.conf file:

      local_enable=YES
      write_enable=YES
      local_umask=022
      dirmessage_enable=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      ftpd_banner=Welcome to xxxx.com
      xferlog_std_format=NO
      chroot_local_user=NO
      chroot_list_enable=NO
      chroot_list_file=/etc/vsftpd/chroot_list
      listen=YES
      pasv_enable=YES
      pasv_min_port=3000
      pasv_max_port=3050
      pasv_address=64.xx.xx.xxx
      pam_service_name=vsftpd
      userlist_enable=YES
      userlist_deny=NO
      userlist_file=/etc/vsftpd/vsftpd.userlist

    And I've got these two rules in my iptables:

      -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
      -A INPUT -p tcp -m tcp --dport 3000:3050 -j ACCEPT

    I've also disabled SELinux.
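
    The PASV timeout usually means the data-connection ports are reachable on paper but not in practice. Two things worth checking, sketched rather than certain for this box: whether the two ACCEPT rules actually sit above the default REJECT line that CentOS appends to the INPUT chain, and whether the FTP connection-tracking helper is loaded:

      iptables -L INPUT -n --line-numbers   # the ACCEPTs for 21 and 3000:3050 must come before any "REJECT all"
      modprobe ip_conntrack_ftp             # nf_conntrack_ftp on newer kernels

    If the server sits behind NAT, pasv_address also has to be an address the client can actually reach, and the 3000-3050 range must be forwarded on that device.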
