Search Results

Search found 26114 results on 1045 pages for 'zend config xml'.

  • ProFTPD pam_ecryptfs: Error getting passwd

    - by Olirav
    proftpd: pam_ecryptfs: Error getting passwd info for user [USERNAME]. I am getting this error in the syslog nearly every time a user connects via FTP; the user is able to connect and the session seems to continue without a hitch. ProFTPD's own log shows no error; this warning shows up only in the syslog. My VPS is running Ubuntu 11.10 and ProFTPD 1.3.4rc2 from the Ubuntu repo, and I have made only a few changes to the config (no unusual auth methods). This has been going on for quite a while but I can't quite find the cause. Anyone got any ideas?
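
    For context (not from the original post): on Ubuntu, pam_ecryptfs is usually pulled in through the shared PAM includes rather than ProFTPD's own configuration, so the warning tends to fire for any PAM-using service. If encrypted home directories are not in use, one hedged workaround is to comment out the eCryptfs lines in the common stack (paths below are the Ubuntu defaults; back the files up first):

        # /etc/pam.d/common-auth
        # auth    optional    pam_ecryptfs.so unwrap

        # /etc/pam.d/common-session
        # session optional    pam_ecryptfs.so unwrap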

  • local .pac-file URL format that works with IE and Safari (Windows)?

    - by legr3c
    Say I want to use a proxy auto-config file that is stored at C:\proxy.pac. To make Internet Explorer use this configuration, I have to specify the pac-file in the LAN settings in the following way: file://C:/proxy.pac But Safari, which uses the same proxy settings, will ignore it in this case. To make Safari use the pac-file I have to reference it as file:///C:/proxy.pac (three slashes at the beginning), which according to Wikipedia is the correct format. But this way Internet Explorer will ignore it. Opera and Chrome, which also use the same proxy settings, are fine with both forms, but is there another option that will work with Safari and Internet Explorer at the same time?
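
    One workaround worth sketching (an assumption on my part, not from the thread): sidestep the file:// inconsistency entirely by serving the PAC file over HTTP, since both browsers accept an http:// PAC URL. For example, with Python installed:

        cd C:\
        python -m SimpleHTTPServer 8000
        REM on Python 3: python -m http.server 8000
        REM then set the PAC URL in both browsers to: http://localhost:8000/proxy.pac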

  • How to invoke a command using specific proxy server?

    - by Xiè Jìléi
    Some applications support proxies (HTTP or SOCKS), and some do not. For browsers, I can specify a proxy server in the preferences/options dialog, and other applications may be able to take proxy servers from config files. Is there a general-purpose way to invoke a command through a specific proxy? Something like the following:

        $ proxy-exec --type socks5 --server 1.2.3.4:8000 -- wget/ftp ...

    I'm using Ubuntu Maverick. P.S. On win32 it can be implemented by hijacking the socket DLLs; I'm not familiar with Linux programming, but I guess something similar is possible on Linux.
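
    One common answer (a sketch, assuming the SOCKS server from the example above): proxychains wraps arbitrary commands via an LD_PRELOAD shim, essentially the Linux counterpart of the DLL-hijacking trick mentioned in the post:

        sudo apt-get install proxychains

        # in /etc/proxychains.conf, under the [ProxyList] section:
        #   socks5 1.2.3.4 8000

        proxychains wget http://example.com/file.tar.gz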

  • Toshiba Equium A110-252 laptop won't boot Windows

    - by Drew Gibson
    I have a Toshiba Equium A110-252 laptop (XP Pro) which stopped booting a couple of weeks ago. The symptoms are that the laptop displays its DOS Toshiba splash screen, then the Windows (XP) boot screen, then a very brief blue screen flash, then goes back to the boot options screen (offering safe mode, safe mode with networking, last known good config, etc.; none of these work). I assumed a duff HD, so I swapped it for a new one and reinstalled from a TrueImage backup. This worked until I installed Windows updates, I think (the backup was a few months old), and now the PC is doing the same thing. I have installed a third, old HD (60GB) with Ubuntu 11.04 on it, and everything is fine: the PC boots and runs Ubuntu beautifully. I need Windows running on it though! Given these symptoms, could it be a Windows update issue? Or might I still have a hardware fault?
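
    Two generic XP triage steps, offered as standard advice rather than a diagnosis: pressing F8 at boot and choosing "Disable automatic restart on system failure" keeps the blue screen visible long enough to read the STOP code, and the Recovery Console on the XP CD (press R at the setup prompt) can rule out disk and boot-sector damage:

        chkdsk C: /r    (scan the disk and recover readable data from bad sectors)
        fixboot         (rewrite the partition boot sector)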

  • How to whitelist a user agent for nginx?

    - by djb
    I'm trying to figure out how to whitelist a user agent in my nginx conf. All other agents should be shown a password prompt. In my naivety, I tried to put the following before deny all:

        if ($http_user_agent ~* SpecialAgent) {
            allow;
        }

    but I'm told the "allow" directive is not allowed here (!). How can I make it work? A chunk of my config file:

        server {
            server_name site.com;
            root /var/www/site;
            auth_basic "Restricted";
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
            allow 123.456.789.123;
            deny all;
            satisfy any;
            # other stuff...
        }

    Thanks for any help.
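
    One approach that avoids if entirely, sketched under the assumption of a reasonably recent nginx (auth_basic only accepts variables in newer releases): use a map, declared at http level, to switch auth off for the whitelisted agent:

        map $http_user_agent $auth_realm {
            default         "Restricted";
            ~*SpecialAgent  off;
        }

        server {
            server_name site.com;
            root /var/www/site;
            auth_basic $auth_realm;
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        }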

  • How do I prevent internet access to a group of computers on my network?

    - by Kevin Boyd
    Well, I have the following setup: computers A, B, and C are networked. Computer A is connected to the internet; computers B and C are not set up for internet access currently, but I guess that with some kind of settings change they would eventually be able to access the internet, and this is what I would like to prevent. In summary, only A should have internet access, while A, B, and C should all still be on the intranet. Is this kind of config possible? What kind of software, setup, or tools would I need to achieve this?
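
    If A is (or can be made) the gateway for B and C, one sketch is to block forwarding for their addresses with iptables on A (the addresses below are assumptions):

        # on computer A, assuming B=192.168.1.2 and C=192.168.1.3
        iptables -A FORWARD -s 192.168.1.2 -j DROP
        iptables -A FORWARD -s 192.168.1.3 -j DROP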

  • having trouble setting up ganglia on three machines

    - by Pieter Breed
    I am running Ubuntu 11.10. I have one machine with gmetad, gmond, and the Ganglia web interface. When I browse the web interface, this machine picks up the local gmond output. I then added another machine, running only gmond. I didn't really change anything in the config, only the name of the cluster. This machine's output showed up in the web view. Then I tried to add a third machine, set up like the second, but it's not showing up in the web view. I tried looking at syslog and at running gmond as a daemon, but I'm not seeing anything suspicious there. Any tips for troubleshooting this?
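
    Two standard Ganglia checks, as a sketch (8649 is the default gmond port; adjust if yours differs): run gmond on the silent machine in the foreground at a high debug level, and dump the XML that the collecting node actually sees:

        # on the third machine
        gmond -d 10

        # on the gmetad machine: the missing host should appear in this XML
        telnet localhost 8649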

  • Ubuntu from console/command-line/shell

    - by Xolve
    Early Linux distros, though they required a lot of manual work, were quite good to use from the command line. If the X server didn't start, or you just wanted a shell to work in, they all coped: the network was configured by init, sound was up and ready, and newly inserted devices were configured, with their configuration placed in fstab. There were also small scripts I found on many distros which used windows under X but switched to ncurses on the console. But now all of this needs a GUI with a desktop manager (KDE, GNOME), because the new paradigms (NetworkManager, hal, etc.) require one :'-( So on the command line alone you have to be root (apparently they believe only geeky admins need that) and edit config files or type long commands. Is there any way to make this easy in Ubuntu from the shell again?
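
    For the networking piece at least, Ubuntu still honours the classic ifupdown files alongside NetworkManager; a sketch (interface name assumed), giving a console-only DHCP interface that comes up at boot:

        # /etc/network/interfaces
        auto eth0
        iface eth0 inet dhcp

        # bring it up by hand:
        sudo ifup eth0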

  • Allowing outbound traffic with APF/iptables for OpenVZ container

    - by David
    I have APF installed in an OpenVZ container (Proxmox 2.1). The config is pretty much vanilla and things are working: my external services like ssh and http are reachable. My problem is that all outbound traffic on http/https is blocked. How do I allow all outbound traffic for http/https? If I enable egress filtering like this, all inbound and outbound traffic gets blocked:

        EGF="1"
        EG_TCP_CPORTS="21,25,80,443,43,53"
        EG_UDP_CPORTS="20,21,53"
        EG_ICMP_TYPES="all"

    I opened a single outbound rule with the following:

        # /usr/local/sbin/apf -a downloads.wordpress.org

    How do I allow all outbound traffic on http/https without blocking all traffic? And why would I allow all inbound ssh/http traffic but block all outbound traffic?
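
    Not from the post itself, but one thing that commonly bites APF inside OpenVZ containers is kernel detection: on VPS kernels APF usually needs its monolithic-kernel mode, and egress changes only take effect after a reload (option name per the stock conf.apf; verify against your version):

        # /etc/apf/conf.apf
        SET_MONOKERN="1"    # treat the kernel as monolithic; often needed on OpenVZ

        apf -r              # restart APF so the EGF/EG_* changes apply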

  • Make Safari 5's location bar more like Omnibox or AwesomeBar

    - by Lri
    When searching for history or favorites, the search phrase has to be an exact substring of the URL or title; for example, super awesome wouldn't match this page. Can the criteria be made more liberal? When an item that was matched by its title is selected from the suggestion list, the title is filled in in place of the URL, and the filled-in part sometimes starts from the middle of a URL or a title. Can either of these behaviors be changed? Can you redirect unresolved addresses to the default search engine or a custom URL? (In Firefox you can go to about:config and set keyword.URL to http://www.google.com/search?btnI&q=.) Can you remove or hide the web search field? In Camino, Cruz, and Fluid it can be resized to zero width.

  • linux ftp server with virtual users

    - by kjertil
    I know there are already similar questions on this matter, but the answers don't really make much sense to anyone who is not technically comfortable in Linux. I've already tried articles like this one, for example: http://howto.gumph.org/content/setup-virtual-users-and-directories-in-vsftpd/ with the result of accidentally breaking the whole system. The problem is that, while there are several technical possibilities for setting up virtual users with an FTP server, it is not as easy as managing, for instance, a FileZilla server on Windows. I've seen some web-based GUIs, but most of them seem to be out of date. The different flavours of Linux and the large number of popular FTP servers also seem to complicate the matter. I guess my question is: is there a way to set up virtual FTP users on Linux without the hassle of having to manually edit PAM, MySQL, and config files?
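
    For what it's worth, the route with the fewest moving parts is usually vsftpd with pam_userdb, which needs no MySQL at all; a sketch, with paths assumed (they vary between distros):

        # users.txt holds alternating lines: username, then password
        printf 'alice\nsecret1\nbob\nsecret2\n' > users.txt
        db_load -T -t hash -f users.txt /etc/vsftpd/virtual_users.db

        # /etc/pam.d/vsftpd (replacing the existing contents)
        auth    required pam_userdb.so db=/etc/vsftpd/virtual_users
        account required pam_userdb.so db=/etc/vsftpd/virtual_users

        # additions to /etc/vsftpd.conf
        guest_enable=YES
        guest_username=ftp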

  • Windows VPN - NO internet access

    - by sharru
    I host a network of servers behind a Fortigate 200A firewall in the DC, and I connect to those servers via a VPN connection. The problem is that when I connect to the VPN, I lose my internet connection on the local PC (Windows 7). I would like to be connected to the VPN and still surf the web; I guess this means forwarding only a range of IPs to the VPN connection. I've read other answers on Server Fault about unchecking the 'Use default gateway on remote network' option in the Windows 7 PPTP network connection settings. When I do that, I get internet access but no access to the servers in the VPN. Any idea how to get both working? Should I change something in the Fortigate 200A config? Do I need two network cards? Is there a place in Windows to define an IP range for the VPN connection?
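
    The usual split-tunnel recipe is exactly the combination described above: leave 'Use default gateway on remote network' unchecked, then add a static route for the server range via the VPN. A sketch (the subnet and gateway are assumptions; substitute your server network's values):

        :: elevated command prompt, after the VPN is up
        :: assumes the servers live in 10.0.0.0/24 behind VPN gateway 10.0.0.1
        route add 10.0.0.0 MASK 255.255.255.0 10.0.0.1 -p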

  • Where does gcc keep its built-in include directory paths

    - by Charles
    GCC has built-in include directories for certain standard headers; I just need to know where this list lives. My newly compiled GCC will not compile my little test C++ program because it cannot find the standard headers. I think it fails because of some configure options I used to make my file system more organized: I set bindir and libdir, which I think might have messed up the built-in include paths. Program (dummy.c):

        #include <iostream>
        int main() {}

    Command:

        g++ dummy.c

    Error:

        dummy.c:1:20: fatal error: iostream: No such file or directory
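
    The standard way to see the built-in list (generic GCC usage, not specific to this particular build) is to run the preprocessor in verbose mode and read the block between "#include <...> search starts here:" and "End of search list.":

        echo | g++ -E -x c++ - -v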

  • How to set up a server to accept PEM (private RSA key) login w/o password, like EC2?

    - by Chandler.Huang
    I manage a group of VMs, and I need every VM to create an SSH tunnel to a specific host A. One way to do this is to append each VM's public key to host A's authorized_keys, but I guess I would have to do the append each time I create a VM. So I am trying to configure host A to accept PEM or private-key login without a password, just like EC2, where the client can use "ssh -i PEM" to log in to host A. But I have tried in vain for hours: I created an RSA public/private key pair and let the VMs use the private key to log in, yet no matter what I do, host A still asks for a password. Is there anything I missed? Thanks.
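
    EC2 itself uses the same authorized_keys mechanism, just baked into the image, so a single shared key pair avoids the per-VM append. A sketch (filenames assumed), including the permission check whose failure most often causes a silent fallback to password prompts:

        # once: generate a shared key pair for the VM template
        ssh-keygen -t rsa -f vm_key -N ''

        # on host A: authorize the public half, and tighten permissions,
        # since sshd ignores authorized_keys files that are too open
        cat vm_key.pub >> ~/.ssh/authorized_keys
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

        # from each VM (vm_key baked into the template):
        ssh -i vm_key user@hostA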

  • Proper way to partition filesystem with Xen

    - by luckytaxi
    I'm coming from a VMware environment and want to play with Xen. I have a server with 2 x 500GB SATA drives (no hardware RAID available, so I have to use software RAID1). My partitions are all RAID1 except for swap. I left a little over 400GB for my VMs, and I would like to use LVM for the disk images. For a domU's swap, should I allocate that from the 400GB, or should it come from dom0's partition? I ask because I've seen example configs that do it either way.
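
    The common pattern is to carve both the guest's disk and its swap out of the LVM pool, leaving dom0's own partitions alone; a sketch, with the volume group and guest names assumed:

        lvcreate -L 20G -n vm1-disk vg0
        lvcreate -L 2G  -n vm1-swap vg0

        # in the domU config file:
        disk = ['phy:/dev/vg0/vm1-disk,xvda1,w',
                'phy:/dev/vg0/vm1-swap,xvda2,w']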

  • What is the cleanest way to upgrade Fedora and also my individual installs while keeping /home?

    - by Don
    I am a professional programmer, using Fedora 10 (and a host of other individually installed packages). I use my system to telecommute. Every year or so I go through the ritual dance, usually with a second computer and a KVM switch since I don't have office space for two monitors, of building the next version of Fedora and installing all my favorite apps. Is there a better way? At least a nice way to keep track of what I need to 'add on', so that I don't have to manually reinstall my app collection? Also, I keep /home on a separate RAIDed drive set, so I also fall prey to 'old-config-file-itis'.
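
    One low-tech aid for the 'add on' list (ordinary RPM/yum usage, nothing Fedora-specific assumed): snapshot the installed package names before the rebuild and replay them afterwards:

        # before the upgrade
        rpm -qa --qf '%{NAME}\n' | sort > packages.txt

        # after the fresh install
        yum -y install $(cat packages.txt)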

  • Nagios remote monitoring: NRPE Vs. SSH

    - by sam
    We use Nagios to monitor quite a few (~130) servers. We monitor CPU, disk, RAM, and a few other things on each server. I've always used SSH to run the remote commands, purely because it requires little to no additional config on the remote server: just install nagios-plugins, create the nagios user, and add the SSH key, all of which I've automated into a shell script. I've never actually considered the performance implications of using SSH over NRPE. I'm not too bothered about the load hit on the Nagios server (it's probably over-specced for what it does; it's never been over 10% CPU), but we run each remote check every 30 seconds, and each server has 5 different checks performed. I assume SSH requires more resources per check, but is there a huge difference? (i.e., enough of a difference to warrant the switch to NRPE?) If it's any help, we monitor a mix of physical servers (normally with 8, 12 or 16 physical cores) and Amazon EC2 medium/large instances.
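
    For comparison, the NRPE side is small: a command definition on each remote host plus a check_nrpe call from the Nagios server (paths and thresholds below are assumptions):

        # on the remote host, in nrpe.cfg:
        command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /

        # from the Nagios server:
        /usr/lib/nagios/plugins/check_nrpe -H remotehost -c check_disk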

  • CentOS-Like configuration of Vim on Ubuntu

    - by matejkramny
    Whenever I use a CentOS system, there's always a pleasant configuration of vim which mainly does the following: it remembers the cursor position in closed files, it has colour mode (!!!), and bash has colours; there's lots more, just nothing I can recall on the spot. Then I go to Ubuntu and it's all black and white, with no nice vim config, etc. I have to use Ubuntu, and I hate Ubuntu because of this. I know I can configure it all myself, so my question is: how can I configure an Ubuntu system to behave (aesthetically) like CentOS? PS to future self: I will be stoned to death for asking such a question.
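
    The two vim behaviours named above can be reproduced in a few lines of ~/.vimrc on Ubuntu (the autocmd is the standard last-position recipe from the vim documentation), plus the usual alias for coloured ls in ~/.bashrc:

        " ~/.vimrc
        syntax on
        " jump to the last cursor position when reopening a file
        au BufReadPost * if line("'\"") > 1 && line("'\"") <= line("$") | exe "normal! g'\"" | endif

        # ~/.bashrc
        alias ls='ls --color=auto'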

  • Is there a way to reliably backup and restore a complex network configuration on Windows XP?

    - by djangofan
    I have some Windows XP laptops (10+) that host an ad-hoc WiFi network connection to wireless PDA devices. The laptop itself is connected via a 3rd-party VPN radio network, and the radio network itself seems to be reliable. If one small thing goes wrong with the network configuration, the PDA loses connectivity, so I need a way to back up the networking config, either via a script or a 3rd-party program, so that I can restore a working network configuration if something goes wrong. Is this possible? Does anyone have any ideas?
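
    Windows XP can dump and replay the interface side of its configuration with netsh, which scripts easily (the file name is a placeholder; note this covers IP/interface settings, not the ad-hoc wireless profile itself):

        rem save the current configuration
        netsh -c interface dump > C:\netcfg-backup.txt

        rem restore it later
        netsh -f C:\netcfg-backup.txt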

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
    Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve the old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and redirect them to the new static ones, e.g.: http://www.example.org/log/?p=1234 or http://www.example.org/log/index.php?p=1234 should redirect to http://www.example.org/log/archives/1234.html I've tried adding the following to the VirtualHost config for example.org, but to no effect -- I just get the PHP page:

        RewriteCond %{REQUEST_URI} /log/
        RewriteCond %{QUERY_STRING} p=([^&;]*)
        RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

    I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?
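
    Two things stand out, offered as a sketch rather than a definitive fix: in VirtualHost context the RewriteRule pattern is matched against the full URL path, so ^/$ only ever matches the site root, never /log/; and without a trailing ? on the substitution, mod_rewrite re-appends the original query string to the redirect target. A version that addresses both (the pattern is inferred from the URLs above):

        RewriteCond %{QUERY_STRING} (?:^|&)p=([^&;]+)
        RewriteRule ^/log/(?:index\.php)?$ http://%{SERVER_NAME}/log/archives/%1.html? [R=301,L]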

  • SSH only works after intentionally failed password

    - by pyraz
    So, I'm having a rather weird problem. I have a server, that when I try to SSH into, immediately closes the connection if I type in the correct password on the first attempt. However, if I purposefully enter a wrong password on the first attempt, and then enter a correct password at the second or third prompt, it successfully logs me into the computer. Similarly, when I try to use public key authentication, I get an immediate closed connection. If, however, I enter a wrong password for my key file, followed by another wrong password once it reverts to password authentication, I can successfully log in as long as I provide the correct password at the second or third prompt. The machine is running Red Hat Enterprise Linux Server release 6.2 (Santiago), and is using LDAP and PAM for authentication. Any ideas on where to start debugging this one? Let me know what config files I need to provide and I'll be happy to do so.
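
    A standard OpenSSH debugging setup (generic advice, not specific to this bug): run a second sshd in debug mode on a spare port and connect with a verbose client, then compare the PAM conversation of the failing first attempt with the succeeding later one:

        # on the server (foreground, maximum verbosity, throwaway port)
        /usr/sbin/sshd -d -d -d -p 2222

        # on the client
        ssh -vvv -p 2222 user@server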

  • Double try_files to solve nginx's "No input file specified" issue

    - by Howard
    I am following the nginx wiki (http://wiki.nginx.org/WordPress) to set up my WordPress:

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

    With the above lines, when a static file is not found the request falls back to WordPress's index.php, which is fine, but... Problem: when I request a non-existent PHP script, e.g. http://www.example.com/foo.php, nginx gives me No input file specified I want nginx to return 404 instead of the above message, so in the main fcgi config I added the second try_files:

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include /etc/nginx/fastcgi_params;
            ...
        }

    And this works, but I am wondering if there is a better way to handle it?
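
    For what it's worth, try_files $uri =404 in the PHP location is the commonly recommended pattern, and it also stops requests for non-existent scripts from reaching PHP at all. One hedged alternative, if the second try_files feels redundant: let nginx intercept the error status that the FastCGI backend returns along with the "No input file specified" body:

        location ~ \.php$ {
            fastcgi_intercept_errors on;   # nginx serves its own error page instead
            error_page 404 /404.html;
            # ... fastcgi_* directives as before ...
        }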

  • Proxy HTTPS requests to an HTTP backend with NGINX

    - by Mike
    I have nginx configured to be my externally visible webserver, which talks to a backend over HTTP. The scenario I want to achieve is:

        1. The client makes an HTTPS request to nginx.
        2. nginx proxies the request over HTTP to the backend.
        3. nginx receives the response from the backend over HTTP.
        4. nginx passes this back to the client over HTTPS.

    My current config (where backend is configured correctly) is:

        server {
            listen 80;
            server_name localhost;
            location ~ .* {
                proxy_pass http://backend;
                proxy_redirect http://backend https://$host;
                proxy_set_header Host $host;
            }
        }

    My problem is that the response to the client (step 4) is sent over HTTP, not HTTPS. Any ideas?
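
    The config above only listens on plain port 80, so nginx never terminates TLS at all and step 4 cannot happen. A sketch of an SSL-terminating version (the certificate paths are placeholders):

        server {
            listen 443 ssl;
            server_name localhost;
            ssl_certificate     /etc/nginx/ssl/server.crt;
            ssl_certificate_key /etc/nginx/ssl/server.key;

            location / {
                proxy_pass http://backend;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Proto https;
            }
        }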

  • sudo suddenly stopped working on debian

    - by chovy
    I've been using sudo since I set up my server about a week ago. It suddenly stopped working with no explanation. I am in the 'sudo' group, so no config change to /etc/sudoers should have been required:

        $ sudo apt-get install tsocks
        [sudo] password for me:
        me is not in the sudoers file.

        root@host:/etc# groups me
        me : me sudo

    The only thing it could possibly be related to is that I added the following line to sshd_config: PermitRootLogin without-password But I have since changed that back to PermitRootLogin yes Permissions on the file are 400:

        ls -l /etc/sudoers
        -r--r----- 1 root root 491 Sep 28 21:52 /etc/sudoers

    No idea why it stopped working, or how to fix it.
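
    Two generic Debian checks, as a sketch rather than a diagnosis: confirm the group rule is still present and syntactically valid, and remember that group membership is only picked up at login, so an old session can lag behind what groups reports:

        # as root: validate sudoers and look for the group rule
        visudo -c
        grep '^%sudo' /etc/sudoers    # expect something like: %sudo ALL=(ALL:ALL) ALL

        # compare the running session's groups with the user's recorded groups
        groups        # current session
        id -Gn me     # on record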

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc.

    We have used the "dd" command, but we assume this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (we don't know if it's HW or SW RAID). Most only have 100MB used. The servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back up (but could). We assume that standard UNIX commands like tar, cpio, rsync, scp, etc. are of no use, as they only copy files, not partitions, attributes, groups, etc.; i.e. they do not produce a result which can simply be re-imaged onto a new HD to bring a server back from the dead.

    We have a large SAN, a spare Windows box, and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses (the company has negative amounts of money). We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore).

    Googling returns some applications that do this: Clonezilla looks difficult to install and invasive; Mondo only seems to support backup if you are local to the machine; Amanda might be an option, but looks like days or weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning, and configuring a set of backup tools? Any ideas? This must be a pretty standard problem to which googling doesn't give an obvious answer.
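
    One correction to the assumption above: dd does compose with ssh, so a remote low-level image can be pulled from a single backup box. A sketch (device and host names assumed; note that imaging a mounted, live disk risks an inconsistent copy, so stopping MySQL etc. first, or imaging from a rescue boot, is safer):

        # from the backup server: pull a compressed raw image of server1's first disk
        ssh root@server1 'dd if=/dev/sda bs=4M | gzip -c' > server1-sda.img.gz

        # restore: boot server1 from a live/rescue CD, then push the image back
        gunzip -c server1-sda.img.gz | ssh root@server1 'dd of=/dev/sda bs=4M'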
