Search Results

Search found 41582 results on 1664 pages for 'fault tolerance'.

  • haproxy and tomcat intermittent hangs

    - by user7347
    I am trying to run haproxy in front of Tomcat on a Solaris x86 box, but I am getting intermittent failures. At seemingly random intervals, the request just hangs until haproxy times out the connection. I thought maybe it was my app, but I've been able to reproduce it with the Tomcat manager app, and hitting Tomcat directly there are no problems at all. Hitting it repeatedly with curl will cause the error within 10-15 tries:

        curl -ikL http://admin:admin@<my server>:81/manager/status

    haproxy is running on port 81, Tomcat on port 7000. haproxy returns a 504 gateway timeout to the client and puts this into the log file:

        Sep 7 21:39:53 localhost haproxy[16887]: xxx.xxx.xxx.xxx:65168 [07/Sep/2009:21:39:23.005] http_proxy http_proxy/tomcat7000 5/0/0/-1/30014 504 194 - - sHNN 0/0/0/0/0 0/0 "GET /manager/status HTTP/1.1"

    Tomcat shows nothing: no error in the logs and no indication that the request ever makes it to the Tomcat server. The request count is not incremented, and the manager app shows activity on only one thread (the one serving up the manager app itself). Here are my haproxy and Tomcat connector settings. I've been playing with both a good deal trying to chase down the issue, so they may not be ideal, but they definitely don't seem like they should cause this error.

    server.xml:

        <Connector port="7000" protocol="HTTP/1.1" enableLookups="false"
                   maxKeepAliveRequests="1" connectionLinger="10" />

    haproxy config:

        global
            log loghost local0
            chroot /var/haproxy

        listen http_proxy :81
            mode http
            log global
            option httplog
            option httpclose
            clitimeout 150000
            srvtimeout 30000
            contimeout 3000
            balance roundrobin
            cookie SERVERID insert
            server tomcat7000 127.0.0.1:7000 cookie server00 check inter 2000
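
    A minimal sketch for reproducing this kind of intermittent hang in a loop (assuming a shell with GNU seq available; the URL and credentials are the ones from the question):

        #!/bin/sh
        # Hit the manager status page repeatedly through haproxy and
        # report any response that is not HTTP 200.
        for i in $(seq 1 50); do
            code=$(curl -s -o /dev/null -w '%{http_code}' \
                -u admin:admin "http://<my server>:81/manager/status")
            [ "$code" != "200" ] && echo "request $i failed with HTTP $code"
        done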

    Read the article

  • Configuring OpenLDAP as a Active Directory Proxy

    - by vadensumbra
    We are trying to set up an Active Directory server for company-wide authentication. Some of the servers that should authenticate against the AD are placed in a DMZ, so we thought of using an LDAP server as a proxy, so that only one server in the DMZ has to connect to the LAN where the AD server is placed. With some googling it was no problem to configure slapd (see slapd.conf below), and it seemed to work when using the ldapsearch tool, so we tried to use it in an apache2 htaccess to authenticate users over the LDAP proxy. And here comes the problem: we found out the username in the AD is stored in the attribute 'sAMAccountName', so we configured it in .htaccess (see below), but the login didn't work. In the syslog we found out that the filter for the ldapsearch was not (as it should be) '(&(objectClass=*)(sAMAccountName=authtest01))' but '(&(objectClass=*)(?=undefined))', which we found out is slapd's way of showing that the attribute does not exist or the value is syntactically wrong for that attribute. We thought of a missing schema and found the microsoft.schema (and the .std / .ext variants of it) and tried to include them in slapd.conf, which did not work. We found no working schemata, so we just picked out the part about sAMAccountName and built a microsoft.minimal.schema (see below) that we included. Now we get a more precise log in the syslog:

        Jun 16 13:32:04 breauthsrv01 slapd[21229]: get_ava: illegal value for attributeType sAMAccountName
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH base="ou=oraise,dc=int,dc=oraise,dc=de" scope=2 deref=3 filter="(&(objectClass=\*)(?sAMAccountName=authtest01))"
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SRCH attr=sAMAccountName
        Jun 16 13:32:04 breauthsrv01 slapd[21229]: conn=0 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=

    Using our Apache htaccess directly with the AD via LDAP works, though. Anyone got a working setup? Thanks for any help in advance.

    slapd.conf:

        allow bind_v2
        include /etc/ldap/schema/core.schema
        ...
        include /etc/ldap/schema/microsoft.minimal.schema
        ...
        backend ldap
        database ldap
        suffix "ou=xxx,dc=int,dc=xxx,dc=de"
        uri "ldap://80.156.177.161:389"
        acl-bind bindmethod=simple
            binddn="CN=authtest01,ou=GPO-Test,ou=xxx,dc=int,dc=xxx,dc=de"
            credentials=xxxxx

    .htaccess:

        AuthBasicProvider ldap
        AuthType basic
        AuthName "AuthTest"
        AuthLDAPURL "ldap://breauthsrv01.xxx.de:389/OU=xxx,DC=int,DC=xxx,DC=de?sAMAccountName?sub"
        AuthzLDAPAuthoritative On
        AuthLDAPGroupAttribute member
        AuthLDAPBindDN CN=authtest02,OU=GPO-Test,OU=xxx,DC=int,DC=xxx,DC=de
        AuthLDAPBindPassword test123
        Require valid-user

    microsoft.minimal.schema:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )
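
    One detail worth checking in a hand-rolled schema like this (an assumption, not a verified fix for this setup): an attribute type declared without an EQUALITY matching rule cannot be used in an equality filter, and slapd rejects such filters as illegal values. A sketch of the same attribute definition with case-insensitive matching rules added:

        attributetype ( 1.2.840.113556.1.4.221
            NAME 'sAMAccountName'
            EQUALITY caseIgnoreMatch
            SUBSTR caseIgnoreSubstringsMatch
            SYNTAX '1.3.6.1.4.1.1466.115.121.1.15'
            SINGLE-VALUE )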

    Read the article

  • How to achieve zero downtime

    - by Hiral Lakdavala
    For an application we want to achieve zero database and application downtime using an active-active configuration. Our DB is Oracle. My questions are:

    1. How can we achieve an active-active configuration in Oracle?
    2. Would introducing Cassandra/HBase (or any other NoSQL DB) help achieve zero downtime, or are those only for fast retrieval of data from a large DB?
    3. Any other options?

    Thanks and Regards, Hiral

    Read the article

  • Network cabling with multiple patch panels?

    - by dannymcc
    I am in the very early stages of planning a network cabling upgrade in our office, mainly to upgrade the old cables from Cat5 to either 5e or 6. I am also planning on upgrading all of our 10/100 switches to 10/100/1000 switches. I would like to have three small wall-mounted cabinets spread around the building, each with a patch panel and switch, all leading back to our server room. The question is: should I have two patch panels in each wall cabinet, one with 24 or 48 ports connected to a matching patch panel in the server room, and a second patch panel linking to each device in that cabinet's area? Then I wouldn't put a switch in the small cabinets at all; all switching would be done in the server room. Or should I have one main cable from the server room to each of the cabinets, plugged straight into a switch, with the patch panel serving the devices in the cabinet's area? I hope that makes sense!

    Read the article

  • Make my git user and apache user have read/write/delete access

    - by Mr A
    I am having permission problems on my server. I use the user developer to pull my git repository onto the server, while Apache uses its own apache user to write files and execute code. I always have problems when the app wants to write something in a directory (i.e. log files and cache) and when a cron job runs with my developer rights and wants to add something to folders that were written by apache. My question is: how can my developer user have the same write/delete access as apache, avoiding permission conflicts between the two? I am not fluent in Linux commands, so it would help if you could provide links or simple examples. Thanks.
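
    A common pattern for letting two users share write access is a shared group plus setgid directories; a minimal sketch, assuming the web root is /var/www/app and the group name webdev is unused (both are placeholders):

        # Create a shared group and add both users to it
        groupadd webdev
        usermod -aG webdev developer
        usermod -aG webdev apache

        # Hand the tree to the group and grant group write access
        chown -R developer:webdev /var/www/app
        chmod -R g+rw /var/www/app

        # setgid on directories so files created later inherit the group
        find /var/www/app -type d -exec chmod g+s {} +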

    Read the article

  • nginx does not use variables set in /etc/environment on system reboot, but does when restarted from shell

    - by Dave Nolan
    I have a Rails app running on nginx/Passenger. It restarts happily in a shell using sudo /etc/init.d/nginx stop|start|restart, but Passenger throws an error when the system is rebooted: "Missing the Rails #{version} gem". GEM_HOME and GEM_PATH are both set in /etc/environment, so surely they would be available to all processes during reboot?

    /etc/environment:

        PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/games"
        GEM_HOME=/var/lib/gems/1.8
        GEM_PATH=/var/lib/gems/1.8

    /etc/init.d/nginx:

        #! /bin/sh

        ### BEGIN INIT INFO
        # Provides:          nginx
        # Required-Start:    $all
        # Required-Stop:     $all
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: starts the nginx web server
        # Description:       starts nginx using start-stop-daemon
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/opt/nginx/sbin/nginx
        NAME=nginx
        DESC=nginx

        test -x $DAEMON || exit 0

        # Include nginx defaults if available
        if [ -f /etc/default/nginx ] ; then
            . /etc/default/nginx
        fi

        set -e

        case "$1" in
          start)
            echo -n "Starting $DESC: "
            start-stop-daemon --start --quiet --pidfile /var/log/nginx/$NAME.pid \
                --exec $DAEMON -- $DAEMON_OPTS
            echo "$NAME."
            ;;
          stop)
            echo -n "Stopping $DESC: "
            start-stop-daemon --stop --quiet --pidfile /var/log/nginx/$NAME.pid \
                --exec $DAEMON
            echo "$NAME."
            ;;
          restart|force-reload)
            echo -n "Restarting $DESC: "
            start-stop-daemon --stop --quiet --pidfile \
                /var/log/nginx/$NAME.pid --exec $DAEMON
            sleep 1
            start-stop-daemon --start --quiet --pidfile \
                /var/log/nginx/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS
            echo "$NAME."
            ;;
          reload)
            echo -n "Reloading $DESC configuration: "
            start-stop-daemon --stop --signal HUP --quiet --pidfile /var/log/nginx/$NAME.pid \
                --exec $DAEMON
            echo "$NAME."
            ;;
          *)
            N=/etc/init.d/$NAME
            echo "Usage: $N {start|stop|restart|force-reload}" >&2
            exit 1
            ;;
        esac

        exit 0

    Version info:

        $ /opt/nginx/sbin/nginx -v
        nginx version: nginx/0.7.67

    This is on Ubuntu Lucid.
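
    For background: /etc/environment is read by PAM (pam_env) when a login session starts, so variables set there are generally not visible to init scripts run at boot. A minimal sketch of one workaround, exporting the variables in the init script itself (paths taken from the question):

        # near the top of /etc/init.d/nginx, before start-stop-daemon is invoked
        GEM_HOME=/var/lib/gems/1.8
        GEM_PATH=/var/lib/gems/1.8
        export GEM_HOME GEM_PATH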

    Read the article

  • cPanel Virtfs won't umount

    - by JPerkSter
    Anyone have any experience with virtfs on cPanel servers? I can't seem to get the mounts to unmount, as umount says they are already unmounted:

        [root@Server ~]# cat /proc/mounts | grep user
        /dev/root /home/virtfs/user/lib ext3 rw,errors=continue,data=ordered 0 0
        /dev/root /home/virtfs/user/opt ext3 rw,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/lib ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/sbin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/share ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/bin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/man ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/X11R6 ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/kerberos ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/libexec ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/bin ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/share ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/Zend ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/IonCube ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/include ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda3 /home/virtfs/user/usr/local/lib ext3 rw,nodev,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/spool ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/lib ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/cpanel ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/run ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda2 /home/virtfs/user/var/log ext3 rw,nodev,noatime,nodiratime,errors=continue,data=ordered 0 0
        /dev/sda6 /home/virtfs/user/tmp ext3 rw,nosuid,nodev,noexec,noatime,errors=continue,data=ordered 0 0
        /dev/root /home/virtfs/user/bin ext3 rw,errors=continue,data=ordered 0 0

        [root@Server ~]# for i in $(cat /proc/mounts | grep virtfs | grep user | awk '{print $2}'); do umount $i; done
        umount: /home/virtfs/user/lib: not mounted
        umount: /home/virtfs/user/opt: not mounted
        umount: /home/virtfs/user/usr/lib: not mounted
        umount: /home/virtfs/user/usr/sbin: not mounted
        umount: /home/virtfs/user/usr/share: not mounted
        umount: /home/virtfs/user/usr/bin: not mounted
        umount: /home/virtfs/user/usr/man: not mounted
        umount: /home/virtfs/user/usr/X11R6: not mounted
        umount: /home/virtfs/user/usr/kerberos: not mounted
        umount: /home/virtfs/user/usr/libexec: not mounted
        umount: /home/virtfs/user/usr/local/bin: not mounted
        umount: /home/virtfs/user/usr/local/share: not mounted
        umount: /home/virtfs/user/usr/local/Zend: not mounted
        umount: /home/virtfs/user/usr/local/IonCube: not mounted
        umount: /home/virtfs/user/usr/include: not mounted
        umount: /home/virtfs/user/usr/local/lib: not mounted
        umount: /home/virtfs/user/var/spool: not mounted
        umount: /home/virtfs/user/var/lib: not mounted
        umount: /home/virtfs/user/var/cpanel: not mounted
        umount: /home/virtfs/user/var/run: not mounted
        umount: /home/virtfs/user/var/log: not mounted
        umount: /home/virtfs/user/tmp: not mounted
        umount: /home/virtfs/user/bin: not mounted
        umount: /home/virtfs/user/dev: not mounted
        umount: /home/virtfs/user/proc: not mounted
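
    One thing that produces exactly this "not mounted" symptom (an assumption, not cPanel-specific advice): umount consults /etc/mtab, which can fall out of sync with the kernel's view in /proc/mounts. A lazy detach driven directly off the kernel's table sidesteps the stale entries; a sketch over the same mount list:

        # Read mount points from the kernel's own table and lazily detach each one
        grep "virtfs/user" /proc/mounts | awk '{print $2}' | while read -r mnt; do
            umount -l "$mnt"
        done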

    Read the article

  • TinyDNS and proper settings for SPF records

    - by Teddy
    I've inherited a TinyDNS configuration that has the following entries for SPF:

        @domain.com:x.x.x.3:a::86400
        @domain.com:x.x.x.103:c:10:86400
        =domain.com:x.x.x.3:86400
        =mail.domain.com:x.x.x.3:86400
        =mail.domain.com:x.x.x.103:86400
        'domain.com:v=spf1 ip4\072x.x.x.3 ip4\07231.130.96.103 ptr\072mail.domain.com +mx a -all:3600
        'mail.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600
        'a.mx.domain.com:v=spf1 ip4\072x.x.x.3 ip4\072x.x.x.103 ptr\072mail.domain.com +mx a -all:3600

    This is the result from http://www.kitterman.com/spf/validate.html:

        SPF record lookup and validation for: domain.com

        SPF records are primarily published in DNS as TXT records. The TXT records found for your domain are:
        v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all

        SPF records should also be published in DNS as type SPF records.
        No type SPF records found.

        Checking to see if there is a valid SPF record.
        Found v=spf1 record for domain.com:
        v=spf1 ip4:x.x.x.3 ip4:x.x.x.103 ptr:mail.domain.com +mx a -all
        evaluating...
        SPF record passed validation test with pySPF (Python SPF library)!

    I've been struggling with this since yesterday and can't figure out why the validator returns "No type SPF records found." I see that in BIND we can define an SPF-type record with example.com. IN SPF "v=spf1 a -all", but in TinyDNS we only have TXT records that we set for SPF; maybe this is the problem?
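
    For what it's worth, tinydns-data has a generic-record syntax, :fqdn:n:rdata:ttl, that can publish arbitrary record types, and the SPF record type the validator is looking for is type 99. A hedged sketch (the rdata of an SPF record is a length-prefixed character string, so the leading octal escape must equal the byte length of the text that follows, and colons inside the data still have to be written as \072):

        # "v=spf1 a -all" is 13 bytes, hence the \015 (octal 15 = decimal 13) length prefix
        :domain.com:99:\015v=spf1 a -all:3600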

    Read the article

  • Dell printer goes offline after second print job

    - by Ac0ua
    A Dell printer goes offline (loses its server connection) after the second print job, although the printer's display says it is ready. If you turn the printer off and on, you can send one job, and it goes back offline on the next print job. We have multiple Dell 2330dn printers installed through a print server; only one of the printers is experiencing this problem. Two different users, two different machines, two different operating systems (Win7 and Vista). The computers have been reset. The Dell printers have a web interface, if that helps (through the IP address). Thanks for any help!

    Read the article

  • Perforce Restore From Multiple Checkpoint Files?

    - by AJ
    Hi all, I am working with a very large (~11 GB) checkpoint file and trying to do a -jr (journal restore) operation. About half way through the file, I'm hitting an entry which causes an error to occur. I'm unable to come up with a conventional way to print, edit, and save changes to the offending line, so right now I'm splitting the checkpoint into files of 500k lines each, up to 47 files and counting. My question is, once I have these separate files:

    1. Can I run journal restore on each one separately to check for errors?
    2. Once fixed, is it necessary to merge them back together again to do my full journal restore?

    Any other ideas on how to tackle this problem would be appreciated. Thanks in advance, -aj
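
    For inspecting and patching a single line in a file too big for an editor, sed can address the line directly without loading the whole file into memory; a sketch assuming GNU sed (the line number, file name, and replacement text are placeholders):

        # Print only the suspect line to see the offending entry
        sed -n '5500000p' checkpoint.ckp

        # Rewrite just that line in place, keeping a .bak copy of the original
        sed -i.bak '5500000s/bad-value/good-value/' checkpoint.ckp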

    Read the article

  • Two hosting providers running simultaneously... possible / not possible? good practice / unnecessary?

    - by user29600
    For the sake of their reputations, I won't mention names; I'll just use "ABC Web Dev" for the business I worked for previously and "XYZ Hosting" for the hosting company they used. I recently found out that XYZ Hosting had some sort of incident where they ended up losing a lot of their clients' data, including ABC Web Dev's. ABC Web Dev was able to recover some of their customers' websites by pulling them from their local development computers and putting them up on another hosting provider, but they lost a lot of clients because of it and their reputation was ruined.

    I'm starting my own web dev company and I don't want to run into this same issue. I'm planning on using Rackspace, but although they are a great company, according to Wikipedia they have had downtime in the past. I thought it might be a good idea to run two providers at once, to ensure that if anything happened to one, the websites would still be live on the other. I know the websites would have to be pulling from one server at all times, but if there's a way to redirect requests to the second server when the first one is down, that would solve my issue. As a note, we will have a staging environment set up locally, which allows quick recovery if a provider has any issues; however, I'd like to avoid any downtime at all if possible. So my questions are:

    1. Has anyone tried running two providers simultaneously?
    2. Would this be considered good practice, or am I going too far?
    3. Is there really any way to run two simultaneously, where one server acts as a backup?
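
    The simplest mechanism in this direction is DNS-based failover: keep the record's TTL short so a repoint propagates quickly when the primary dies. A sketch in zone-file notation (the addresses are documentation placeholders, and this still needs something, manual or monitored, to actually flip the record):

        ; short TTL so clients re-resolve within about a minute
        www    60    IN    A    203.0.113.10    ; primary provider
        ; after a failure, repoint to the standby provider:
        ; www  60    IN    A    198.51.100.20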

    Read the article

  • Copy File Contiguously to Disk from OSX/Unix/Linux to FAT32 FS?

    - by alharaka
    So the Sysinternals guys have that cool contig.exe utility that lets me ensure a file is contiguous. I need to copy over ISO files to a FAT32 USB flash key. Grub4DOS requires the files to be contiguous, but I do not have Windows access at the moment. Is there a way to copy a file so it is contiguous on the target drive, or a tool like the aforementioned one that will make an existing file contiguous? Again, I need it on FAT32, and there lies the rub.
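
    One property worth exploiting here (an assumption about FAT allocation behavior, not a guarantee): files copied one at a time onto a freshly formatted, empty volume tend to be allocated contiguously. On Linux, filefrag from e2fsprogs can verify the result, provided the FAT driver supports the block-mapping ioctl it falls back on; a sketch with placeholder device and mount point:

        # Start from an empty, freshly formatted key, then copy the ISOs one by one
        mkfs.vfat /dev/sdX1
        mount /dev/sdX1 /mnt/usb
        cp boot.iso /mnt/usb/

        # "1 extent found" means the file is contiguous
        filefrag /mnt/usb/boot.iso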

    Read the article

  • XDEBUG/PHP doesn't dump profile even when set up properly?

    - by John D.
    I installed xdebug from source, but also tried my package manager (separately); both load correctly (verified by restarting Apache and seeing the xdebug copyright info in phpinfo()), but they do not dump profiling information. Out of some 40 different configuration attempts it logged once or twice, but I lost what I did. I first tried only loading the module in php.ini with no settings, and it didn't log to /tmp/. I have tried many different settings, but my current ones are:

        xdebug.profiler_enable = Off
        xdebug.profiler_enable_trigger = 1
        xdebug.profiler_output_dir = "/tmp/"
        xdebug.profiler_output_name = "profiler.%t"

    Of course I call my script through 127.0.0.1/test.php?XDEBUG_PROFILE, which is what enable_trigger is for. Do you know why it would not dump profiler information? nobody (Arch Linux) can write to /tmp/ as it could before, so I'm sure it is not a permissions error. Apache's error_log does not tell me anything about xdebug either, as it has loaded correctly. It just does not work!

    EDIT: I made a subfolder "xdebug_profiles" in /tmp/, chown'ed it to nobody, and now it works flawlessly. I'm not sure why it couldn't write before; I guess it's just a caveat with nobody on Arch. I answered my own question, but I don't have enough points to post it as an answer or comment, so consider this answered.
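
    A sketch of the fix described in the edit, for anyone landing here with the same symptom (directory name as in the edit; adjust the owner to whatever user PHP runs as):

        mkdir /tmp/xdebug_profiles
        chown nobody /tmp/xdebug_profiles

    with php.ini pointed at the new directory:

        xdebug.profiler_output_dir = "/tmp/xdebug_profiles/"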

    Read the article

  • Connectivity issues with dual NIC machine in EC2

    - by Matt Sieker
    I'm trying to get some servers set up in EC2 in a Virtual Private Cloud. To do this, I have two subnets:

        10.0.42.0/24 - Public subnet
        10.0.83.0/24 - Private subnet

    To bridge these two, I have a Funtoo instance with a pair of NICs:

        eth0  10.0.42.10
        eth1  10.0.83.10

    which has the following routing table:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.0.83.0       *               255.255.255.0   U     0      0        0 eth1
        10.0.83.0       *               255.255.255.0   U     203    0        0 eth1
        10.0.42.0       *               255.255.255.0   U     202    0        0 eth0
        loopback        *               255.0.0.0       U     0      0        0 lo
        default         10.0.42.1       0.0.0.0         UG    0      0        0 eth0
        default         10.0.42.1       0.0.0.0         UG    202    0        0 eth0

    An elastic IP is attached to the eth0 interface, and I can connect to it fine remotely. However, I cannot ping anything in the 10.0.83.0 subnet. For now iptables is not set up on the box, so there are no rules that could get in the way. (Eventually this will be managed by Shorewall, but I should get basic connectivity working first.)

    Subnet details from the VPC interface:

        CIDR: 10.0.83.0/24

        Destination   Target
        10.0.0.0/16   local
        0.0.0.0/0     [ID of eth1 on NAT box]

        Network ACL: Default
        Inbound:
        Rule #   Port (Service)   Protocol   Source        Allow/Deny
        100      ALL              ALL        0.0.0.0/0     ALLOW
        *        ALL              ALL        0.0.0.0/0     DENY
        Outbound:
        Rule #   Port (Service)   Protocol   Destination   Allow/Deny
        100      ALL              ALL        0.0.0.0/0     ALLOW
        *        ALL              ALL        0.0.0.0/0     DENY

        CIDR: 10.0.83.0/24
        VPC:
        Destination   Target
        10.0.0.0/16   local
        0.0.0.0/0     [Internet Gateway ID]

        Network ACL: Default (replace)
        Inbound:
        Rule #   Port (Service)   Protocol   Source        Allow/Deny
        100      ALL              ALL        0.0.0.0/0     ALLOW
        *        ALL              ALL        0.0.0.0/0     DENY
        Outbound:
        Rule #   Port (Service)   Protocol   Destination   Allow/Deny
        100      ALL              ALL        0.0.0.0/0     ALLOW
        *        ALL              ALL        0.0.0.0/0     DENY

    I've been trying to work this out most of the evening, but I'm just stuck. I'm either missing something obvious or doing something very wrong. I would think I'd be able to ping from either interface on this box without issue. Hopefully some more pairs of eyes on this configuration will help.

    EDIT: I am an idiot. After I bothered to install nmap to run some more tests, I discovered that I can see the ports and connect to them; pings are just being blocked.

    Read the article

  • Invalidating unused ssh keys

    - by JH
    I am using one ssh account for all my Subversion users. They send me their public keys and I put them in .ssh/authorized_keys of the svn account; then they can check out code from Subversion over an ssh tunnel. So far everything works fine. The problem, though, is that I want to invalidate keys that have not been used for some time (say, one month). Does anyone know a way to make sshd log the public key used when a user signs in? Thanks.
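
    sshd will log which key authenticated each login if the log level is raised; a sketch of the two pieces (the exact log wording varies by OpenSSH version, but it includes the key's fingerprint):

        # /etc/ssh/sshd_config
        LogLevel VERBOSE

    Each line of authorized_keys can then be matched against the logged fingerprints with:

        ssh-keygen -lf /home/svn/.ssh/authorized_keys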

    Read the article

  • Website pages very slow to load on first access

    - by Merianos Nikos
    I have a production web server where sites are very slow when loaded for the first time; after the first request, all connections are normal and fast. A load test of a random site on the server shows a long "wait" time on the first request and a dramatically faster result when the same page is requested a second time (screenshots of the two test runs and the page load time graph omitted). My problem is that I don't really know anything about Red Hat Linux servers, and I don't know where to start. Can somebody help me? I'd like to find the cause of the long "wait" in the connection. I know that my question is very minimal, but you can always ask me for more information about the server.

    Read the article

  • Amazon ec2 - WildCard Sub-Domain

    - by Sharanc25
    I'm running an EC2 instance on Ubuntu with a LAMP stack. I configured my httpd.conf file to support wildcard subdomains, but it didn't work.

    My httpd.conf file:

        NameVirtualHost *

        <VirtualHost *>
            DocumentRoot /www/example
            ServerName example.com
            ServerAlias *.example.com
        </VirtualHost>

    I tried all the solutions I could find, but they didn't work. Finally I used Amazon Route 53 to set up a wildcard DNS record pointing all of *.example.com to example.com. My questions are: Is it okay if I use Route 53 instead of the httpd.conf file for wildcard subdomains? Is there an error in my httpd.conf file? (Note: I used the same httpd.conf settings with another hosting provider and it worked perfectly there.)

    Additional information:

        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:80    is a NameVirtualHost
            default server example.com (/etc/apache2/httpd.conf:1)
            port 80 namevhost example.com (/etc/apache2/httpd.conf:1)
            port 80 namevhost ip-xx-xxx-xx-xxx.ec2.internal (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
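
    Worth noting when reading this: name-based virtual hosting and DNS are independent layers, so both pieces are needed. ServerAlias *.example.com only tells Apache which Host headers this vhost answers for; something still has to resolve foo.example.com to the instance's address, which is what the Route 53 wildcard record does (the previous host likely had wildcard DNS preconfigured). A sketch of the pair, with a placeholder address:

        ; DNS (e.g. in Route 53): wildcard record pointing every subdomain at the server
        *.example.com.   300   IN   A   203.0.113.10

        # Apache: accept any Host under example.com
        <VirtualHost *:80>
            DocumentRoot /www/example
            ServerName example.com
            ServerAlias *.example.com
        </VirtualHost>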

    Read the article

  • Can't install KB 980368: "The update is not applicable to your computer"

    - by JK01
    I'm trying to install KB 980368 ("A update is available that enables certain IIS 7.0 or IIS 7.5 handlers to handle requests whose URLs do not end with a period") on a new Windows 2008 R2 server, but no matter which of the download packages I try, they all say "The update is not applicable to your computer". I have Windows 2008 R2 Standard on an Intel Xeon E5520. I need that KB to get extensionless URLs working in ASP.NET MVC 2. How can I fix this?

    Read the article

  • Create restricted user on Debian server

    - by James Willson
    I want to create a user account for each of the key programs installed on my Debian server, for example for the following programs:

    - Tomcat
    - Nginx
    - Supervisor
    - PostgreSQL

    This seems to be recommended based on my reading online. However, I want to restrict these user accounts as much as possible, so that they don't have a shell login, don't have access to the other programs, and are as limited as possible while still functional. Would anyone mind telling me how this could be achieved? My reading so far suggests this:

        echo "/usr/sbin/nologin" >> /etc/shells
        useradd -s /usr/sbin/nologin tomcat

    But I think there may be a more complete way of doing it.

    EDIT: I'm using Debian Squeeze.
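
    A sketch of a slightly tighter variant using a system account (standard useradd flags on Debian; the home directory path is an assumption and should be whichever directory the service actually needs to own):

        # -r: system account, -M: no skeleton home created,
        # -d: working/home directory, -s: shell that refuses logins
        useradd -r -M -d /var/lib/tomcat -s /usr/sbin/nologin tomcat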

    Read the article

  • Apache and mod_authn_dbd DBDPrepareSQL error

    - by David Brown
    I am running Apache 2.2.17 on SUSE Linux 11. I have installed the mod_authn_dbd module to allow authentication against a MySQL database. Simple digest authentication works absolutely fine using the following directive:

        AuthDBDUserRealmQuery "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s"

    However, I would like to both generalize this statement and prepare it, for performance and security reasons. Thus I used the following directives instead:

        DBDPrepareSQL "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s" digestLogin
        AuthDBDUserRealmQuery digestLogin

    These directives generate the following errors:

        [...] [error] (20014)Internal error: DBD: failed to prepare SQL statements: [...]
        [...] [error] (20014)Internal error: DBD: failed to initialise

    Why does my SQL statement work when used directly, but not when I try to prepare it? (Note I have tried using '?' and '%%' in place of '%', to no effect.)

    Read the article

  • Apache 2.2: present RSS HTTP 410 pages as application/rss+xml content type

    - by Mark Bakker
    I have a problem sending HTTP 410 for very old RSS feeds. Functionally this can happen in two cases:

    1. Very old RSS feeds whose content is not updated anymore / whose subject could not move to another feed
    2. Migration from a 3rd-party site to our site, where the RSS feed is no longer functionally supported

    I tried several things in my site config, see below:

        <VirtualHost *:80>
            DocumentRoot /opt/tomcat/webapps/ROOT/

            ErrorDocument 500 /error/static/error-500.html
            ErrorDocument 503 /error/static/error-500.html
            ErrorDocument 404 /error/static/rss/error-404.html
            ErrorDocument 410 /error/static/rss/error-410.html

            # When error pages need to be served by apache,
            # exclude the files to serve as below (in comment)
            SetEnvIf Request_URI "/error/static/*" no-jk

            # force all files to be image/gif:
            <Location *.rss>
            #<Location *>
                #ForceType application/rss+xml
            </Location>
            #AddType application/rss+xml .rss
            #AddType application/rss+xml .xml
            #AddType application/rss+xml .html

            JkMount /* rss;use_server_errors=402
            # JkMount /* rss

            RewriteEngine on
            JkMount /news.rss rss
            JkMount /documenten-en-publicaties.rss rss

            RewriteEngine on
            RewriteRule ^/news.rss$ - [NC,T=application/rss+xml,G,L]
            RewriteRule ^/documenten-en-publicaties.rss$ - [NC,G,L]

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            ErrorLog "|/usr/bin/logger -s -p local3.err -t 'Apache'"
            CustomLog "|/usr/bin/logger -s -p local2.info -t 'Apache'" combined

            ServerSignature Off
        </VirtualHost>

    The desired end result: on /news.rss and /documenten-en-publicaties.rss, a 410 response whose error page has content and is served with the content type 'application/rss+xml'.
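
    One hedged angle on the content-type half of this: the Content-Type of an ErrorDocument response follows the error page file itself, so typing the error file should type the 410 response. A sketch scoped to the file named in the config above:

        <Files "error-410.html">
            ForceType application/rss+xml
        </Files>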

    Read the article

  • SMTP Verb Error on MSExchange Server 2003

    - by Jason Adams
    Hi, every morning for the last two weeks or more I've had to reboot our Exchange server, and often I have to reboot it again during the day. We use a smarthost for sending our mail out, and if I view the queues in Exchange System Manager, the Small Business SMTP Connector is in a retry state with "The connection was dropped due to an SMTP protocol event sink". I turned logging up to maximum on ExchangeTransport, and the only non-information event in Event Viewer is:

        Message delivery to the host '62.13.128.187' failed while delivering to the remote domain 'mail.authsmtp.com' for the following reason: The connection was dropped due to an SMTP protocol event sink. The SMTP verb which caused the error is 'x-exps'. The response from the remote server is ''.

    I stopped using the smarthost during the error condition, and all I got was lots of Small Business connector connections with the same error. I can telnet into mail.authsmtp.com and send a mail during the error state. Any pointers would be gratefully received.

    Read the article

  • varnish daemon error: libvarnish.so.1 not found

    - by Max
    In order to try out Varnish for an upcoming project, I installed it on an Ubuntu server using this tutorial: http://varnish-cache.org/wiki/InstallationOnUbuntuDapper. The build process worked without any errors, but I can't start the varnish daemon. I always get the error message:

        varnishd: error while loading shared libraries: libvarnish.so.1: cannot open shared object file: No such file or directory

    But /usr/local/lib/libvarnish.so.1 clearly exists. How can I tell varnish to look in that directory and load the library?
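
    The standard fix when a freshly built library in /usr/local/lib isn't found (ordinary dynamic-linker behavior, not varnish-specific): add the directory to the linker's search path and rebuild its cache:

        echo "/usr/local/lib" >> /etc/ld.so.conf
        ldconfig

    or, as a one-off test, point LD_LIBRARY_PATH at the directory when starting the daemon:

        LD_LIBRARY_PATH=/usr/local/lib varnishd -f /usr/local/etc/varnish/default.vcl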

    Read the article

  • Is bonding mode=5 a solution against MAC flapping?

    - by Yuri
    There are two interconnected Cisco WS-2950T switches. One NIC of a bonded interface is connected to a GBIC port on the first switch, and the second NIC of the bond is connected to a GBIC port on the second switch. Of course, each switch sees the bond's MAC address on only one interface (say, the GBIC on the first switch), and all incoming traffic for the bonded interface passes through that GBIC. But in mode=5, all outgoing traffic is distributed between all the interfaces that make up the bond. In this case, will the packets sent via the second switch be dropped, and will everything end up going through the first switch anyway? Or will the load distribution work?
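
    For reference, a minimal sketch of enabling mode 5 (balance-tlb) on a Linux bond via module options (the interface name and miimon interval are common defaults, not taken from the question). The bonding driver documentation describes tlb as transmitting on non-active slaves with the slave's own MAC address, which is precisely what avoids presenting one MAC on two switch ports at once:

        # /etc/modprobe.d/bonding.conf
        alias bond0 bonding
        options bonding mode=5 miimon=100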

    Read the article

  • WebDAV, SVN, Apache2 on OS X 10.6.2

    - by David
    Hi there, I'm trying to configure an SVN server on OS X 10.6.2 via Apache2. Whenever I enable the module in Web > Settings > Services and then try accessing the site, it fails. So I attempted an apachectl configtest and I'm getting a crazy number of errors:

        httpd: Syntax error on line 132 of /private/etc/apache2/httpd.conf:
        Cannot load /usr/libexec/apache2/mod_dav_fs.so into server:
        dlopen(/usr/libexec/apache2/mod_dav_fs.so, 10): Symbol not found: _dav_add_response
          Referenced from: /usr/libexec/apache2/mod_dav_fs.so
          Expected in: flat namespace
          in /usr/libexec/apache2/mod_dav_fs.so

    When I attempt disabling the related services and enabling them one by one (dav_fs_module, dav_module, dav_svn_module and authz_svn_module), each seems to fail. What the heck. :-( Thanks in advance, Dave
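
    The undefined symbol _dav_add_response is exported by mod_dav itself, so this looks like mod_dav_fs being loaded before (or without) mod_dav; that ordering diagnosis is an inference, not something confirmed in the question. A sketch of the expected LoadModule ordering in httpd.conf (paths as shipped on OS X):

        # mod_dav must be loaded before the modules that depend on it
        LoadModule dav_module        libexec/apache2/mod_dav.so
        LoadModule dav_fs_module     libexec/apache2/mod_dav_fs.so
        LoadModule dav_svn_module    libexec/apache2/mod_dav_svn.so
        LoadModule authz_svn_module  libexec/apache2/mod_authz_svn.so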

    Read the article
