Search Results

  • Apache2 - mod_rewrite : RequestHeader and environment variables

    - by Guillaume
    I am trying to read the value of the request parameter "authorization" and store it in the "Authorization" header of the request. The first rewrite rule works fine, but in the second rewrite rule the value of $2 does not seem to be stored in the environment variable, so the "Authorization" request header ends up empty. Any idea? Thanks.

        <VirtualHost *:8010>
            RewriteLog "/var/apache2/logs/rewrite.log"
            RewriteLogLevel 9
            RewriteEngine On
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) http://<ip>:<port>/$1&authorization=@$2@$3 [L,P]
            RewriteRule ^/(.*)&authorization=@(.*)@(.*) - [E=AUTHORIZATION:$2,NE]
            RequestHeader add "Authorization" "%{AUTHORIZATION}e"
        </VirtualHost>

    I need to handle several cases, because sometimes the parameters are in the path and sometimes they are in the query string, depending on the user. This last case fails; the header value for AUTHORIZATION looks empty.

        # if the query string includes the authorization parameter
        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        # keep the value of the parameter in the AUTHORIZATION variable and redirect
        RewriteRule ^/(.*) http://<ip>:<port>/ [E=AUTHORIZATION:%2,NE,L,P]
        # add the value of AUTHORIZATION in the header
        RequestHeader add "Authorization" "%{AUTHORIZATION}e"
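
    Two mod_rewrite behaviors are worth checking here (a sketch, not a confirmed fix): the [L,P] flag on the first rule stops processing, so a later rule that sets AUTHORIZATION never runs for a proxied request; and environment variables set by mod_rewrite get renamed with a REDIRECT_ prefix after an internal redirect. Setting the variable in the same rule that proxies, and reading both variable names, might look like:

        RewriteCond %{QUERY_STRING} ^(.*)authorization=@(.*)@(.*)$
        RewriteRule ^/(.*) http://<ip>:<port>/$1 [E=AUTHORIZATION:%2,NE,L,P]
        # add the header from whichever variable actually survived
        RequestHeader add "Authorization" "%{AUTHORIZATION}e" env=AUTHORIZATION
        RequestHeader add "Authorization" "%{REDIRECT_AUTHORIZATION}e" env=REDIRECT_AUTHORIZATION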

    Read the article

  • mercurial hgwebdir error with basicauth in apache2

    - by Dio
    Hello, I'm having kind of a strange error that I'm trying to track down. I was trying to set up Mercurial on my home server this weekend, and I seem to have it running up to the point of getting repositories published correctly. I'm running Ubuntu 10.04 LTS and Mercurial Distributed SCM (version 1.4.3). I followed the hgwebdir guide (http://mercurial.selenic.com/wiki/HgWebDirStepByStep) and everything seems to work great: I can pull and push my local repositories. Then I tried to add basic auth, changing

        ScriptAliasMatch ^/hg(.*) /var/hg/hgwebdir.cgi$1
        <Directory "/var/hg">
            Options ExecCGI FollowSymLinks
            AllowOverride None
        </Directory>

    to

        ScriptAliasMatch ^/hg(.*) /var/hg/hgwebdir.cgi$1
        <Directory "/var/hg">
            Options ExecCGI FollowSymLinks
            AllowOverride None
            AuthType Basic
            AuthName hgwebdir
            AuthUserFile /usr/local/etc/httpd/users
            Require valid-user
        </Directory>

    This works exactly as I'd expect when I navigate to the directory in my web browser, but when I hg push I get a long repeating section of

        File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
          result = func(*args)
        File "/usr/lib/python2.6/urllib2.py", line 855, in http_error_401
          url, req, headers)
        File "/usr/lib/python2.6/urllib2.py", line 833, in http_error_auth_reqed
          return self.retry_http_basic_auth(host, req, realm)
        File "/usr/lib/python2.6/urllib2.py", line 843, in retry_http_basic_auth
          return self.parent.open(req, timeout=req.timeout)

    followed by

        File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 249, in do_open
          self._start_transaction(h, req)
        File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 419, in _start_transaction
          return keepalive.HTTPHandler._start_transaction(self, h, req)
        File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 342, in _start_transaction
          h.endheaders()
        File "/usr/lib/python2.6/httplib.py", line 904, in endheaders
          self._send_output()
        File "/usr/lib/python2.6/httplib.py", line 776, in _send_output
          self.send(msg)
        File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 247, in _sendfile
          connection.send(self, data)
        File "/usr/lib/pymodules/python2.6/mercurial/keepalive.py", line 519, in safesend
          self.connect()
        File "/usr/lib/pymodules/python2.6/mercurial/url.py", line 273, in connect
          keepalive.HTTPConnection.connect(self)
        RuntimeError: maximum recursion depth exceeded while calling a Python object

    I'm a bit at a loss on this one. I'm really not sure why adding the authorization works fine via my web browser but throws these errors from hg. Any help would be greatly appreciated.
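
    A guess based on the trace: urllib2 keeps retrying the 401 until it blows the recursion limit, which usually means hg never sends credentials the server accepts. Supplying them explicitly in ~/.hgrc sidesteps the retry loop entirely; a sketch (the prefix, username and password here are made up):

        [auth]
        home.prefix = http://myserver/hg
        home.username = dio
        home.password = secret

    With an [auth] entry whose prefix matches the push URL, Mercurial sends basic auth up front instead of bouncing through the 401 handler.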

    Read the article

  • SharePoint 2010 User Profile Synchronization

    - by manemawanna
    Hello, I'm completely new to working with SharePoint and Windows Server, but last week I was given a small brief to play with SharePoint 2010 to see how I got along with it. Anyway, I've set up a SharePoint server and had a mess around getting some new sites and pages created, and I'm now looking to try importing some AD groups. As part of this I've looked at these tutorials, here and here. So far I've got through the process of starting the User Profile Service, which works fine, but when I start the User Profile Synchronization service it sits on "Starting", and when I refresh the page or go to the monitoring section it shows as aborted. I'm new to administering servers, like I say. When I start the User Profile Synchronization service it tries to run as NT AUTHORITY\NETWORK SERVICE and asks for a password, so I've been providing the admin password; I'm not sure if this is part of the issue. I've checked the log files and they say it doesn't have permissions, which is fair enough, but I can't see how to change the account even if I wanted to. If anyone could help it would be appreciated; if you need any further information to help with an answer, just let me know.
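
    For what it's worth, the synchronization service is known to require the farm account rather than NETWORK SERVICE, and the password prompt expects the farm account's password. A hedged PowerShell sketch for locating and starting the service instance (run in the SharePoint 2010 Management Shell; this is a sketch, not a full provisioning recipe):

        # find the User Profile Synchronization service instance
        $sync = Get-SPServiceInstance | Where-Object {
            $_.TypeName -eq "User Profile Synchronization Service"
        }
        # start it; the account/password pairing is configured in Central Admin
        Start-SPServiceInstance -Identity $sync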

    Read the article

  • Postfix mail filters not running behind a controlled environment

    - by Ashish
    Hi, I have deployed a Postfix server for receiving email. On it I have configured a SenderID + SPF milter, following http://www.postfix.org/MILTER_README.html. The command that I used is as follows:

        ./sid-filter -u postfix -p inet:10027@localhost -l

    These are my settings in the main.cf file:

        # Milter support for smtpd mail
        smtpd_milters = inet:localhost:10027, inet:localhost:10028
        # Milters for non-SMTP mail.
        non_smtpd_milters = inet:localhost:10027, inet:localhost:10028
        milter_default_action = reject
        # Postfix >= 2.6
        #milter_protocol = 6
        # 2.3 <= Postfix <= 2.5
        milter_protocol = 2

    Now I have this observation: one Postfix instance, set up on AWS CentOS 5.5, is working fine and receives mail on the defined MX record. A similar Postfix instance (set up as in step 1) behind one of the corporate firewalls is not able to receive any mail, and gives error logs like the following:

        connect from xxxxxx.austin.hp.com[xx.xxx.96.198]
        May 25 13:20:02 g2t0385g postfix/smtpd[11733]: C11F9B0194: client=xxxxxxx.austin.hp.com[15.217.96.198]
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: message-id=
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: milter-reject: END-OF-MESSAGE from xxxxxx.austin.hp.com[xx.xxx.96.198]: 5.7.1 Command rejected; from= to= proto=ESMTP helo=

    Here the sid-filter is giving problems. Any idea what I am doing wrong? Please help. Thanks in advance, Ashish Sharma
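
    Since the milter rejects at END-OF-MESSAGE, one way to see whether the rejection is a legitimate SPF failure (e.g. the corporate firewall NATing clients so SPF no longer passes) rather than a filter malfunction is to evaluate SPF for the logged client outside Postfix. A sketch using the pyspf library; the sender address is a placeholder:

        # pip install pyspf
        import spf

        result, explanation = spf.check2(
            i="15.217.96.198",           # connecting client IP from the log
            s="someone@example.com",     # envelope sender (MAIL FROM) -- placeholder
            h="xxxxxx.austin.hp.com",    # HELO name from the log
        )
        print(result, explanation)       # e.g. ('fail', '...') or ('pass', '')

    If this returns pass while the milter still rejects, the filter itself (or the macros Postfix passes to it) is the problem rather than the SPF data.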

    Read the article

  • SSL on nginx + unicorn got "Error 102 (net::ERR_CONNECTION_REFUSED)"

    - by panggi
    I tried to deploy my app on EC2 (opened ports: 22, 80, 443). App: Rails 3.2.2; server: nginx 1.2.1, unicorn gem (latest), Ubuntu 12.04; deployer: Capistrano. I tried to follow the instructions in Railscasts: http://railscasts.com/episodes/335-deploying-to-a-vps (sorry, it's a Pro episode). Everything is fine over plain HTTP on port 80, but I got Error 102 after trying to use SSL. Here is the nginx.conf content:

        upstream unicorn {
            server unix:/tmp/unicorn.frontend.sock fail_timeout=0;
        }

        server {
            server_name beta.sukeru.com;
            listen 443 default;
            root /home/deployer/apps/appname/current/public;

            ssl on;
            ssl_certificate server.crt;
            ssl_certificate_key server.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_ciphers HIGH:!aNULL:!MD5;
            ssl_prefer_server_ciphers on;

            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_redirect off;
                proxy_pass http://unicorn;
            }

            error_page 500 502 503 504 /500.html;
        }

    In production.rb I set:

        config.force_ssl = true

    Can anyone give a solution for this? :)
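
    ERR_CONNECTION_REFUSED means nothing accepted the TCP connection at all, so a quick way to tell whether the SSL vhost failed to load or the refusal happens upstream (security group, wrong address) is this diagnostic sketch, run first on the instance and then from outside:

        # on the EC2 instance: is anything actually bound to 443?
        sudo netstat -tlnp | grep ':443'
        # did the ssl server block fail to load (bad cert path, etc.)?
        sudo nginx -t
        sudo tail /var/log/nginx/error.log

        # from your workstation: does the TLS handshake complete?
        openssl s_client -connect beta.sukeru.com:443

    A relative ssl_certificate path like server.crt resolves against the nginx prefix directory, so a cert placed elsewhere is a common reason the 443 listener silently never comes up.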

    Read the article

  • MultiPath configuration on RHEL5 and Clariion CX-300

    - by Kamil Z
    I have a problem discovering my FC-connected CX-300 storage. Frankly speaking, I'm a complete novice in Fibre Channel, so a step by step explanation would be appreciated. My configuration consists of two IBM HS20 blades with RHEL 5.4 on board and 2x QLogic ISP2422-based 4Gb Fibre Channel HBAs on each blade. As FC switches there are two Brocades built into the BladeCenter chassis, and finally there is an EMC Clariion CX-300. The CX-300 and the Brocade switches should be configured properly, because they were working fine with the previous configuration, whose main difference was RHEL3 instead of RHEL 5.4. Below is my output from several useful commands:

        # lspci | grep Fibre
        06:01.0 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)
        06:01.1 Fibre Channel: QLogic Corp. ISP2422-based 4Gb Fibre Channel to PCI-X HBA (rev 02)

        # lsmod | grep qla
        qla2xxx 1084741 0
        scsi_transport_fc 37577 1 qla2xxx
        scsi_mod 141717 10 scsi_dh,qla2xxx,sg,scsi_transport_fc,usb_storage,libata,mptspi,mptscsih,scsi_transport_spi,sd_mod

        # cat /proc/scsi/scsi
        Attached Devices:
        Host: scsi0 Channel: 00 Id: 00 Lun: 00
          Vendor: LSILOGIC Model: 1030 IM IM Rev: 1000
          Type: Direct-Access ANSI SCSI revision: 02
        Host: scsi0 Channel: 01 Id: 00 Lun: 00
          Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
          Type: Direct-Access ANSI SCSI revision: 04
        Host: scsi0 Channel: 01 Id: 00 Lun: 00
          Vendor: IBM-ESXS Model: ST936701LC FN Rev: B418
          Type: Direct-Access ANSI SCSI revision: 04

    I followed the instructions from this site (editing /etc/multipath.conf), but I failed: after multipath -ll the output was empty. Do you have any suggestions about discovering FC-connected LUNs in such a configuration?
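
    One observation from the output above: /proc/scsi/scsi shows only the local LSI controller and IBM disks and no "DGC" (Clariion) entries, so the kernel isn't seeing any array LUNs at all. multipath has nothing to aggregate until zoning and storage-group registration present a LUN to the HBAs. Once a DGC device does appear, the usual RHEL 5 Clariion stanza looks roughly like this (a sketch; verify against /usr/share/doc/device-mapper-multipath-*/multipath.conf.defaults):

        devices {
            device {
                vendor                  "DGC"
                product                 ".*"
                path_grouping_policy    group_by_prio
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                hardware_handler        "1 emc"
                path_checker            emc_clariion
                failback                immediate
                no_path_retry           60
            }
        }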

    Read the article

  • mdadm superblock hiding/shadowing partition

    - by Kjell Andreassen
    Short version: is it safe to do mdadm --zero-superblock /dev/sdd on a disk with a partition (/dev/sdd1), filesystem and data? Will the partition be mountable and the data still there?

    Longer version: I used to have a raid6 array but decided to dismantle it. The disks from the array are now used as non-raid disks. The superblocks were cleared:

        sudo mdadm --zero-superblock /dev/sdd

    The disks were repartitioned with fdisk and filesystems created with mkfs.ext4. All disks were mounted and everything worked fine. Today, a couple of weeks later, one of the disks fails to be recognized when trying to mount it, or rather the single partition on it:

        sudo mount /dev/sdd1 /mnt/tmp
        mount: special device /dev/sdd1 does not exist

    fdisk claims there to be a partition on it:

        sudo fdisk -l /dev/sdd
        Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
        255 heads, 63 sectors/track, 243201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb06f6341

        Device Boot  Start     End        Blocks      Id  System
        /dev/sdd1    1         243201     1953512001  83  Linux

    Of course mount is right, the device /dev/sdd1 is not there. I'm guessing udev did not create it because of the mdadm data still on the disk:

        sudo mdadm --examine /dev/sdd
        /dev/sdd:
                 Magic : a92b4efc
               Version : 1.2
           Feature Map : 0x0
            Array UUID : b164e513:c0584be1:3cc53326:48691084
                  Name : pringle:0 (local to host pringle)
         Creation Time : Sat Jun 16 21:37:14 2012
            Raid Level : raid6
          Raid Devices : 6
        Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
            Array Size : 15628107776 (7452.06 GiB 8001.59 GB)
         Used Dev Size : 3907026944 (1863.02 GiB 2000.40 GB)
           Data Offset : 2048 sectors
          Super Offset : 8 sectors
                 State : clean
           Device UUID : 3ccaeb5b:843531e4:87bf1224:382c16e2
           Update Time : Sun Aug 12 22:20:39 2012
              Checksum : 4c329db0 - correct
                Events : 1238535
                Layout : left-symmetric
            Chunk Size : 512K
           Device Role : Active device 3
           Array State : AA.AAA ('A' == active, '.' == missing)

    My mdadm --zero-superblock apparently didn't work. Can I safely try it again without losing data? If not, are there any suggestions on what to do? Not starting mdadm at all on boot might be a (somewhat unsatisfactory) solution.
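
    A hedged reading of the numbers above: the examine output says "Super Offset : 8 sectors", i.e. the v1.2 superblock sits 4 KiB from the start of the whole disk, while fdisk shows /dev/sdd1 starting at cylinder 1 (sector 63, past that point). If that holds, re-zeroing the whole-disk superblock should not touch the partition's data, but verify before trusting it:

        # confirm the superblock location and the partition start first
        sudo mdadm --examine /dev/sdd | grep 'Super Offset'   # 8 sectors = 4 KiB in
        sudo fdisk -lu /dev/sdd                               # sdd1 starts at sector 63

        # superblock lies in the gap before sdd1, so:
        sudo mdadm --zero-superblock /dev/sdd
        sudo partprobe /dev/sdd        # re-read the table so udev creates /dev/sdd1
        ls /dev/sdd1 && sudo mount /dev/sdd1 /mnt/tmp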

    Read the article

  • MySQL remote access not working - port closed?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes then returns:

        # mysql -utest3 -h [server ip] -p
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

        # nmap -sT -O localhost -p 3306
        ...
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        ...

        # netstat -anp | grep mysql
        tcp  0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix 2  [ ACC ] STREAM LISTENING 12286 6349/mysqld /DATA/mysql/mysql.sock

        # netstat -anp | grep 3306
        tcp  0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix 3  [ ] STREAM CONNECTED 3306 1411/audispd

        # lsof -i TCP:3306
        COMMAND PID  USER  FD  TYPE DEVICE SIZE/OFF NODE NAME
        mysqld  6349 mysql 10u IPv4 12285  0t0      TCP  [domain]:mysql (LISTEN)

    I am running CentOS release 5.8 (Final) and MySQL 5.5.28 (Remi). Note: internal connections to MySQL work fine. I have disabled iptables, the box has no other firewall, and it runs Apache on port 80 and ssh with no problem. I had followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html. I have bound the IP address in my.cnf:

        user=mysql
        bind-address = [server ip]
        port=3306

    I even started over by deleting the mysql folder in my datastore and running

        mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html) and created one test user:

        CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks
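
    One note on the evidence: the nmap run targeted localhost (127.0.0.1), but mysqld is bound to [server ip] only, so "closed" there is expected and proves nothing. Error 110 is a timeout, which usually means packets are being dropped somewhere upstream rather than refused. A diagnostic sketch:

        # scan the address mysqld is actually bound to, not 127.0.0.1
        nmap -sT -p 3306 [server ip]

        # watch for inbound SYNs while retrying from the remote PC:
        # SYNs arriving with no reply point at local filtering;
        # no SYNs arriving at all point at an upstream/provider firewall
        tcpdump -n -i eth0 port 3306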

    Read the article

  • OpenAM throwing 302 behind haproxy, nginx

    - by Travis
    I'm having some issues with my deployment and was wondering if you can help. My setup is as follows: two OpenAM servers are set up behind a load balancer (HAProxy). That load balancer sits behind two reverse proxies (nginx), and the two reverse proxies are set up behind another load balancer (HAProxy). So a request goes through HAProxy -> nginx -> HAProxy -> OpenAM. I can access the OpenAM web console through the reverse proxies without a problem; everything works fine at this level. However, when I access OpenAM through the load balancer in front of the reverse proxies, OpenAM throws a 302 error. The funny thing is that I can access host/openam/UI/Login and log in successfully; I even get the cookie and have access to my apps that are set up. However, immediately after the login OpenAM throws a 302 redirect. I'm puzzled and cannot figure out what is going wrong. Does anyone have any idea? My config files are below.

    nginx config:

        server {
            listen 443;
            server_name oamlb1;
            location / {
                proxy_pass http://oamlb1.mydomain.com:8080;
                proxy_set_header X-Real-IP $remote_addr;
            }
            location /openam {
                proxy_pass http://oamlb1.mydomain.com:8080;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header Host oamlb1.mydomain.com:8080;
            }
        }

    haproxy config (this file is for the servers; the file for the reverse proxies is identical except it points to the reverse proxies):

        listen http_proxy :8090
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server webA oamserver1.mydomain.com:18080
            option forwardfor

    Thanks
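
    A hedged hypothesis: OpenAM validates the request's Host against its configured FQDNs and answers with a 302 to the name it knows when they don't match, so hard-setting Host to the internal name while clients arrive via the outer balancer's name is a plausible trigger. A sketch of the nginx side, preserving the externally visible host instead:

        location /openam {
            proxy_pass http://oamlb1.mydomain.com:8080;
            proxy_set_header Host $host;           # keep the front-end host name
            proxy_set_header X-Real-IP $remote_addr;
        }

    The outer balancer's host name would then also need to be known to OpenAM (its FQDN mapping / site configuration under Configuration > Servers and Sites) so the post-login redirect targets a name the client can actually reach.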

    Read the article

  • TFS 2010 : Unable to add Project to a collection

    - by Scott
    This morning I'm trying to set up Team Foundation Server 2010 to demo for my team. As this is just a demo, I thought I would install it on my Windows 7 machine, which also serves as my development machine. My development machine uses Visual Studio 2008 Team Suite. I installed Team Explorer 2008 and then reapplied SP1. Finally I installed and set up TFS 2010. TFS by default gave me administrator privileges. I started up Visual Studio and connected to the collection just fine. However, I'm unable to create a new project and get the following error message: "TF30172: You are trying to create a team project either without required permissions or with an older version of Team Explorer. Contact your project admin..." To check the permissions, I used my home computer, which is running Visual Studio 2010. On that machine I was able to connect to the same TFS instance and create a project, no problem. So it looks as though it is a Team Explorer problem, but everywhere on the web people are saying not only is what I'm trying to do possible, but they have done it themselves. What am I missing to add a project to TFS 2010 under Visual Studio 2008?

    Read the article

  • VMware Fusion configuration files missing

    - by jdmuys
    I need to set up port forwarding to my VM in Fusion 5. Everywhere on the net, the solution is described as editing the file /Library/Application Support/VMware Fusion/vmnet8/nat.conf. However, on my install that file doesn't exist; neither does the vmnet8 directory. Here is the full content of the VMware stuff I have in /Library/Application Support/:

        /Library/Application Support/
          VMware/
            VMware Fusion
              AdminWritable
              Shared
              vmInventory
              usbarb.rules
          VMware Fusion

    That's right: /Library/Application Support/VMware Fusion/ exists but is empty. And there is no VMware folder in the other Library directories on my system. I am running OS X 10.8.2. I just reinstalled Fusion 5.02, no change. Meanwhile, I have 3 VMs that work just fine. So how am I supposed to set up port forwarding with Fusion 5? Thanks, JD

    Edit: on a hunch, I tried ps ax | grep natd, which returned:

        9646 ?? S 0:00.01 /Applications/VMware Fusion.app/Contents/Library/vmnet-natd -s 7
            -m /Library/Preferences/VMware Fusion/vmnet8/nat.mac
            -c /Library/Preferences/VMware Fusion/vmnet8/nat.conf

    So it seems that the configuration files are now in the directory /Library/Preferences/VMware Fusion. I'll work from here and edit this question as I make progress.
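
    For the record, once the right nat.conf is found, the forwarding entry itself goes in its [incomingtcp] section; a sketch (the guest IP and ports here are made up) that forwards host port 8080 to port 80 on the guest:

        # /Library/Preferences/VMware Fusion/vmnet8/nat.conf
        [incomingtcp]
        # <host port> = <guest ip>:<guest port>
        8080 = 192.168.78.128:80

    The VMware networking then needs a restart to pick it up, e.g. via the bundled vmnet-cli (path assumed from the natd output above):

        sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --stop
        sudo "/Applications/VMware Fusion.app/Contents/Library/vmnet-cli" --start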

    Read the article

  • Printer spooler spoolsv.exe crashes

    - by MattiasSN
    We have a problem with a Windows 7 print spooler. There is a Windows 2011 Small Business Server running as print server, and on 2 computers in the network the print spooler keeps crashing at random. The log files say it is ntdll.dll that is at fault:

        Faulting application name: spoolsv.exe, version: 6.1.7601.17514, time stamp: 0x4ce7b4e7
        Faulting module name: ntdll.dll, version: 6.1.7601.17725, time stamp: 0x4ec4aa8e
        Exception code: 0xc0000374
        Fault offset: 0x00000000000c40f2
        Faulting process id: 0x55c
        Faulting application start time: 0x01cd9db324904eb1
        Faulting application path: C:\Windows\System32\spoolsv.exe
        Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
        Report Id: 8789af0b-09a6-11e2-9d78-001c25237c45

    The print spooler on the server keeps running and works fine, and we can also print from other computers. But on these two computers the print spooler crashes. Sometimes it crashes after a user logs in, but it has also happened multiple times after a print job. After each crash we get the same ntdll.dll error. Hopefully someone can help me with this problem. If you need more information, don't hesitate to ask.
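
    Exception code 0xc0000374 is heap corruption, which in spoolsv.exe usually points at a faulty third-party printer driver loaded into the spooler rather than at Windows itself. A hedged first step on an affected client is clearing any stuck spool files and then re-testing while removing drivers one at a time:

        rem run in an elevated command prompt on an affected client
        net stop spooler
        del /q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler
        rem open Print Server Properties at the Drivers tab to remove/re-add drivers:
        printui /s /t2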

    Read the article

  • Cannot delete old NFS directory: Device or resource busy

    - by Jakobud
    On server1, we had an NFS share from server2 mounted like this: /nfs/server2/share. Recently we took down server2 to install a new OS on it, and now we can't get NFS set up the way it was. When I do ls -l /nfs I get:

        drwxr-xr-x 2 root root 0 2010-03-15 09:59 server2

    Notice how the directory size is 0 instead of 4096 like usual? Anyways, I go into /nfs/server2 expecting to see a share directory, but I don't -- it's empty. So therefore I cannot mount my share at /nfs/server2/share. When I try to create the /nfs/server2/share directory, I get:

        mkdir: cannot create directory `share': No such file or directory

    I think this is because it doesn't really think the /nfs/server2 directory exists. Even if I use the -p option with mkdir, it doesn't work. Next I tried to remove /nfs/server2 so I could just recreate it, but rm -r /nfs/server2 gives:

        rm: cannot remove directory `/nfs/server2': Device or resource busy

    So now I'm at a loss. I need to mount this NFS share in the same exact place on server1 (at /nfs/server2/share) because other software on server1 depends on it. But if I can't create that share directory and I can't remove that directory, what do I do? Also, just for testing, I attempted to mount the share at /nfs/testing/share and it mounted just fine. But like I said, I need to mount it back in the same location.
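
    The symptoms (size 0, "Device or resource busy") are consistent with the old NFS mount still being registered in the kernel and now stale since server2 was rebuilt. A sketch of the usual way out:

        # confirm the stale mount is still registered
        grep server2 /proc/mounts

        # force unmount; if the server is gone this can hang, so fall back to lazy
        sudo umount -f /nfs/server2/share
        sudo umount -l /nfs/server2/share   # lazy: detach immediately, clean up later

        # once detached, the directory behaves normally again
        sudo mkdir -p /nfs/server2/share
        sudo mount server2:/share /nfs/server2/share   # export path assumed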

    Read the article

  • Error installing .NET Framework 3.5 SP1 on Windows 2008

    - by Shiraz Bhaiji
    Getting a really weird error. One of the developers tried to install Windows 2008 as a Virtual PC; he has also run Windows Update. When he tries to install .NET Framework 3.5 SP1 he gets the following error:

        [09/25/09,12:48:26] Microsoft .NET Framework 2.0SP1 (CBS): [2] Error: Installation failed for component Microsoft .NET Framework 2.0SP1 (CBS). MSI returned error code 1
        [09/25/09,12:48:34] WapUI: [2] DepCheck indicates Microsoft .NET Framework 2.0SP1 (CBS) is not installed.

    I thought that the .NET Framework was installed automatically with Windows Update on Windows 2008, so how could it be missing? Thanks, Shiraz

    EDIT: We also have the same problem on a VPC that had .NET Framework 3.5 installed and working OK. I have tried removing all versions of the .NET Framework using the following cleanup tool: http://blogs.msdn.com/astebner/pages/8904493.aspx. I then downloaded and tried to install .NET Framework 2.0 SP1 from this location: http://www.microsoft.com/Downloads/details.aspx?familyid=79BC3B77-E02C-4AD3-AACF-A7633F706BA5&displaylang=en. The error I now get is: "This product is not supported on the Vista Operating System".

    EDIT: Thanks for the help, have given an up vote to everyone. In the end our problem was that we had installed Windows Server 2008 from an older ISO image; on that image everything worked fine until we tried to install Framework 3.5 SP1. We reinstalled Windows from a new image, and it worked OK.

    Read the article

  • Unable to connect to APNS with java-apns

    - by Mac
    I've got a Java program running on a firewalled server that is intended to send push notifications to my iPhone app by using java-apns. Problem is, whenever I try to send a notification the library fails to connect to the APNS server. From the stack trace, it seems that when creating the required SSL connection, the connection is being refused at some point (a java.net.ConnectException with a detail message of "connection refused" is being thrown when the library calls SSLSocketFactory's createSocket method). It would not surprise me at all if the firewall is blocking the connection, but unfortunately as I do not manage the server I am unable to verify that that is indeed the case. The fact that the program works fine from my (non-firewalled) desktop seems to support the theory. My question is, does anyone know of any method by which I can find the root cause of the problem, and/or can anyone tell me what I should tell the server admin to change to get things to work (if it is indeed the firewall that's the problem)? For reference, the server is a Linux box and I'm using version 0.1.2 of java-apns.
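
    One way to separate a firewall block from a library problem is to attempt the raw TLS handshake to the APNS gateways from the same server; a diagnostic sketch:

        # from the firewalled server: does the TCP/TLS connection open at all?
        openssl s_client -connect gateway.push.apple.com:2195
        # sandbox gateway, if the app uses development certificates:
        openssl s_client -connect gateway.sandbox.push.apple.com:2195

    "Connection refused" here would confirm the network path is blocked, in which case the admin needs to allow outbound TCP 2195 (and 2196 for the APNS feedback service).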

    Read the article

  • Slow VM on ESXi 4.1

    - by user57432
    We have a 64-bit FreeBSD VM running on ESXi 4.1; the hardware platform is a Dell R710 with 2 x 56xx (Intel 6-core CPUs) and 48 GB RAM. The FreeBSD VM is very slow: when we compile/build something on it, it takes 5 minutes yet reports "build time 18 seconds.". There are no VMware Tools installed on the VM. The same VM is installed on another R710 running ESXi 4.0 for Dell, and there are no problems with that one. Does anyone have any idea what to look for? The VMs on the second server (ESXi 4.1) are clones of the VMs running on the first VM server (ESXi 4.0 Dell edition). It's not possible for me to move the VM back to the first server, since the file containing the VM is too big: we installed the new ESXi with a datastore with 8 MB blocks because 1 MB blocks didn't allow the file size we needed. It looks like the www server on the new ESXi 4.1 works fine, but I haven't really tested it. There are no VMware Tools installed on any of the VMs (FreeBSD). The block size on the second server's (ESXi 4.1) datastore is 8 MB and 1 MB on the first (ESXi 4.0).
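
    The "wall clock says 5 minutes, guest says 18 seconds" gap is the classic signature of CPU scheduling contention: the guest only counts the time it was actually scheduled. A hedged first check on the host:

        # in the ESXi shell (or remotely via resxtop); press 'c' for the CPU view
        esxtop
        # watch the VM's %RDY (ready) and %CSTP (co-stop) columns;
        # sustained values above roughly 10% mean the VM is waiting for
        # physical CPUs, often because the guest has more vCPUs than it needs

    If %RDY/%CSTP are high, reducing the guest's vCPU count is the usual first remedy; installing VMware Tools is worth doing in any case.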

    Read the article

  • Trouble with IIS SMTP relaying to Gmail

    - by saille
    I appreciate that similar questions have been asked about how to set up SMTP relaying with IIS's virtual SMTP server, but I'm still completely stumped on this problem.

    Here's the setup: an IIS 6.0 SMTP server running on a Win2k3 box with a NAT'ed IP. The company uses Gmail for all email services. An app on the box needs to send email, so normally we'd just set the app up to talk to smtp.gmail.com directly, but this app doesn't support TLS. Easy, we just set up a local SMTP relay, right? So I thought.

    What we have done so far: set up the IIS SMTP server to relay to smtp.gmail.com, as per these excellent instructions: http://fmuntean.wordpress.com/2008/10/26/how-to-configure-iis-smtp-server-to-forward-emails-using-a-gmail-account/. The local SMTP relay allows anonymous access, and both the local IP and the loopback IP have been explicitly allowed in the Connection... and Relay... dialogs. We tried sending email from 2 different apps via the local SMTP server, but failed: the emails end up in the Queue folder but never get sent. The IIS logs show the conversation with the local app, but zero conversation happening with smtp.gmail.com. The port used by Gmail is open outbound, and indeed the apps we have that support TLS can send email directly via smtp.gmail.com, so there is no problem with the network.

    At this point I changed the SMTP settings in IIS SMTP server to use a different external SMTP server and hey presto, the local apps can send email via the local IIS SMTP relay. So smtp.gmail.com fails to work with our IIS SMTP relay, but another third-party SMTP service works fine. We need to use smtp.gmail.com, so how to troubleshoot this one?
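
    If it turns out the IIS virtual SMTP server simply can't complete the TLS handshake Gmail requires on its submission ports, one workaround (a sketch, not a tested recipe) is to point IIS at a local stunnel instance that wraps the TLS leg:

        ; stunnel.conf -- IIS relays in cleartext to 127.0.0.1:2525,
        ; stunnel speaks TLS to Gmail's SMTPS port
        [gmail-smtp]
        client = yes
        accept = 127.0.0.1:2525
        connect = smtp.gmail.com:465

    IIS's smart host would then be 127.0.0.1 on port 2525; Gmail still requires authentication on that leg, so the relay's outbound security settings need the Gmail account credentials.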

    Read the article

  • MaxStartups and MaxSessions configuration parameters for ssh connections?

    - by Webby
    I am copying files from machineB and machineC into machineA, where my shell script runs. If a file is not there on machineB then it is on machineC for sure, so I first try to copy it from machineB, and if that fails, from machineC. I am copying the files in parallel using the GNU Parallel library and it is working fine; currently I am copying 10 files in parallel. Below is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278)               # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
            el=$1
            PRIMSEC=$2
            scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. \
                || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait

        echo "All files copied."

    Problem statement: at some point the above script starts failing with this exception:

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp connections starting at the same time, which leads me to believe /etc/ssh/sshd_config's MaxStartups and MaxSessions are set too low. But my question is: on which server are they too low, machineB and machineC or machineA? And on which machines do I need to increase the numbers? On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
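
    These limits live in sshd, the daemon that accepts connections, so they matter on the machines being copied from (machineB and machineC); machineA's sshd_config is irrelevant to outbound scp. With two parallel runs of 10 jobs plus retries, more than 10 unauthenticated connections can easily be pending at once, tripping the default MaxStartups 10. A sketch of the change on machineB and machineC (values illustrative, not tuned):

        # /etc/ssh/sshd_config on machineB and machineC
        MaxStartups 50:30:100   # allow up to 50 pending unauthenticated connections
        MaxSessions 50

        # then reload sshd (RHEL/CentOS-style init shown):
        service sshd reload

    Alternatively, keeping the defaults and throttling the client side works too, e.g. running both parallel invocations sequentially or lowering -j.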

    Read the article

  • Web server with static IP from cable provider

    - by Dmitri
    I have a subscription to 5 static IP addresses, and I want to run a web server from behind a router. My network config is as follows: the server's local address is 10.1.10.2 and it has IIS running on it, ports 80 and 443 (IIS is not my fault, it had to be done). The server's IP address is static, the subnet mask is 255.255.255.0, and the gateway is 10.1.10.1, which is the local address of the cable modem/router/gateway thingy. All looks to be in textbook order as far as the LAN goes: I can get to anything on my LAN from any computer on my LAN, whether they have static IPs or get one through DHCP from the cable modem/router thingy. However, I have no internet access from any of my LAN computers. I called Comcast tech support, and they say they can connect to my modem just fine and can actually use it to ping any computer on the internet or any computer on my LAN from the router/modem (I checked this myself; it is in fact the case). Still, nothing on my LAN has internet connectivity. I tried pinging the DNS servers: nothing. I tried directly typing in websites' IP addresses: nothing, so it doesn't seem to be a DNS issue. Any ideas? What malfunction of a router could be causing such weird behavior? Any ideas or educated guesses are very much appreciated.

    Read the article

  • Throttling Postfix memory

    - by teddybeard
    I have a VPS on 1and1, similar to this configuration (512MB, burst up to 2GB). I run a web service where I crawl the web and notify my users through email and SMS when a certain online data feed changes. When I send the emails out, I just have PHP loop through the recipient list and send the emails using the mail() function. Whenever I try to send a large volume of these messages, my server starts acting funny: I can't even run an ls sometimes, because the shell tells me it 'cannot allocate memory'. The shell is unusable, and yet my website is being served up fine. mail.err contains:

        Nov 14 17:30:09 s15351477 postfix/smtp[26000]: fatal: inet_addr_local[getifaddrs]: getifaddrs: Cannot allocate memory
        Nov 14 17:30:09 s15351477 postfix/sendmail[25999]: fatal: username(1000): unable to execute /usr/sbin/postdrop -r: Success
        Nov 14 18:29:14 s15351477 postfix/smtp[9911]: fatal: inet_addr_local[getifaddrs]: getifaddrs: Cannot allocate memory
        Nov 14 18:29:14 s15351477 postfix/sendmail[9910]: fatal: username(1000): unable to execute /usr/sbin/postdrop -r: Success

    Also, if relevant, my bean counters are:

        Version: 2.5
        uid        resource      held      maxheld   barrier              limit                failcnt
        53907331:  kmemsize      20779422  21041560  31457280             34603008             2989403
                   lockedpages   0         0         512                  512                  0
                   privvmpages   81488     82498     524288               576716               94640
                   shmpages      2831      2831      32768                32768                0
                   dummy         0         0         9223372036854775807  9223372036854775807  0
                   numproc       90        91        128                  128                  6603
                   physpages     32692     33531     2147483647           2147483647           0
                   vmguarpages   0         0         131072               2147483647           0
                   oomguarpages  32942     33781     9223372036854775807  2147483647           0
                   numtcpsock    22        23        720                  720                  0
                   numflock      27        28        376                  413                  0
                   numpty        1         1         32                   32                   0
                   numsiginfo    0         1         512                  512                  0
                   tcpsndbuf     425888    441064    3440640              5406720              0
                   tcprcvbuf     369200    376832    3440640              5406720              0
                   othersockbuf  268000    268464    2252160              4194304              0
                   dgramrcvbuf   0         8472      524288               576716               0
                   numothersock  180       182       720                  720                  0
                   dcachesize    952146    966231    5242880              5767168              0
                   numfile       3609      3683      8192                 8192                 0
                   dummy         0         0         0                    0                    0
                   dummy         0         0         0                    0                    0
                   dummy         0         0         0                    0                    0
                   numiptent     25        25        200                  205                  0

    Is there some way I can throttle Postfix to keep it from swamping the system like this? Also wondering: why does email use so many resources? These emails are just short text.
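
    The failcnt columns for kmemsize, privvmpages and numproc suggest the container's process/memory limits are being exhausted by a burst of concurrent Postfix processes. Postfix can be told to run fewer of them, which caps the spike when PHP dumps a batch of mail into the queue; a sketch for main.cf (values are illustrative, not tuned for this VPS):

        # /etc/postfix/main.cf -- throttle concurrent delivery
        default_process_limit = 10             # default is 100
        smtp_destination_concurrency_limit = 2
        default_destination_rate_delay = 1s    # pause between deliveries per destination

    followed by a postfix reload. Queueing the notifications and letting Postfix drain them slowly is generally kinder to a 512MB container than firing mail() in a tight loop.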

    Read the article

  • Windows 7 migration led to crashdump and hibernate problems

    - by MartyMacGyver
    Note: I'm using a Samsung 830 SSD (OS migrated from a defunct PC), and other than these two (interrelated?) problems it's working fine -- surprisingly well, actually. The motherboard is an ASUS P8Z77-V Deluxe.

    Problem 1: crash dumps are not working. volmgr throws an event 45, "The system could not successfully load the crash dump driver.", whenever you modify crash dump settings or a crash dump occurs. diskpart says "Crashdump disk = no", which is peculiar.

    Problem 2: hibernation isn't working. Again, volmgr throws the same event 45 if you try to hibernate: the screen blanks, then you're at the password prompt. No sleep occurs. (Yes, I know I should avoid hibernation on SSDs, but it's enabled and the hibernation file is definitely there, so I'd like to know why it's failing.) diskpart claims "Hibernation file = no", which is again peculiar... the file is plainly there and getting created by the system.

    The common factor appears to be volmgr and/or the crashdump "service" (if that's what it is). I'd much rather get this working than spend days reinstalling and reconfiguring the entire system, especially when it's working perfectly otherwise. Sleep works as well (as long as it's not hybrid sleep). So, what defines the flags "Crashdump disk" and "Hibernation file disk" in diskpart's output? And what might be going wrong that's breaking crashdumps in particular?
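
    A speculative place to dig, given the migration: the crash dump path is configured from the CrashControl registry key, and filter drivers left over from the old machine's storage utilities are a known way to break both dumps and hibernation. A sketch for inspecting both:

        rem current crash dump configuration
        reg query HKLM\SYSTEM\CurrentControlSet\Control\CrashControl
        rem filter drivers attached to the Volume device class; stale entries
        rem from the old PC's disk/imaging tools are a plausible culprit
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{71A27CDD-812A-11D0-BEC7-08002BE2092F}" /v UpperFilters

    On a healthy system the Volume class UpperFilters value typically contains just volsnap; extra entries referencing software that no longer exists would be worth investigating before touching anything.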

    Read the article

  • HAProxy + NodeJS gets stuck on TCP Retransmission

    - by sled
    I have a HAProxy + NodeJS + Rails setup; I use the NodeJS server for file upload purposes. The problem I'm facing is that if I'm uploading through HAProxy to NodeJS and a "TCP (Fast) Retransmission" occurs because of a lost packet, the TX rate on the client drops to zero for about 5-10 secs and gets flooded with TCP retransmissions. This does not occur if I upload to NodeJS directly (TCP retransmission happens too, but it doesn't get stuck with dozens of retransmission attempts). My test setup is a simple HTML4 FORM (method POST) with a single file input field. The NodeJS server only reads the incoming data and does nothing else. I've tested this on multiple machines, networks and browsers; it's always the same issue. Here's a TCP traffic dump from the client while uploading a file:

        .....
        TCP 1506 [TCP segment of a reassembled PDU]
        >> everything is uploading fine until:
        TCP 1506 [TCP Fast Retransmission] [TCP segment of a reassembled PDU]
        TCP 66 [TCP Dup ACK 7392#1] 63265 > http [ACK] Seq=4844161 Ack=1 Win=524280 Len=0 TSval=657047088 TSecr=79373730
        TCP 1506 [TCP Retransmission] [TCP segment of a reassembled PDU]
        >> the last message is repeated about 50 times for >>5-10 secs<< (TX drops to 0 on client, RX drops to 0 on server)
        TCP 1506 [TCP segment of a reassembled PDU]
        >> upload continues until the next TCP Fast Retransmission and the same thing happens again

    The haproxy.conf (haproxy v1.4.18 stable) is the following:

        global
            log 127.0.0.1 local1 debug
            maxconn 4096   # Total Max Connections. This is dependent on ulimit
            nbproc 2

        defaults
            log global
            mode http
            option httplog
            option tcplog

        frontend http-in
            bind *:80
            timeout client 6000
            acl is_websocket path_beg /node/
            use_backend node_backend if is_websocket
            default_backend app_backend

        # Rails Server (via nginx+passenger)
        backend app_backend
            option httpclose
            option forwardfor
            timeout server 30000
            timeout connect 4000
            server app1 127.0.0.1:3000

        # node.js
        backend node_backend
            reqrep ^([^\ ]*)\ /node/(.*) \1\ /\2
            option httpclose
            option forwardfor
            timeout queue 5000
            timeout server 6000
            timeout connect 5000
            server node1 127.0.0.1:3200 weight 1 maxconn 4096

    Thanks for reading! :)

    Read the article

  • Install MatroskaProp on Windows 7 x64

    - by Neophytos
    To see more information in Windows Explorer property pages and menus for Matroska video (.mkv) files, similar to what one sees when selecting native Windows media files (.avi, .asf, .wmv or even plain old .mpg), matroska.org (http://www.matroska.org/downloads/windows.html) links to a download of the MatroskaProp shell extension (http://www.jory.info/serendipity/archives/14-MatroskaProp-2.8-Released.html). It used to work for me under Windows XP 32-bit. Now I have Windows 7 x64, and I downloaded, installed and ran it. The configuration and settings page is fine, but it does not seem to actually register any shell extension: nothing is added to Explorer windows, menus or property pages when selecting .mkv or .mks files. I tried calling the register hook manually using regsvr32; that again invoked the configuration window, let me set all the options, and even said on confirming that the registration succeeded, but it seems to have had no effect. In the registry I cannot find any traces of the shell extension being installed. Can this extension be made to work under Windows 7 or x64 systems? Are there known problems with installing this or other old shell extensions on x64 or on Windows 7?
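
    One structural limitation worth knowing (hedged, since I can't test MatroskaProp 2.8 itself): 64-bit Explorer can only load 64-bit shell extensions. A 32-bit extension registers under the 32-bit registry view (Wow6432Node) and is simply invisible to the 64-bit shell, which would match the "registration succeeded but nothing shows up" symptom. You can confirm which view it landed in; the DLL name below is a guess:

        rem did the registration land only in the 32-bit view?
        reg query HKCR\Wow6432Node\CLSID /s /f MatroskaProp
        rem explicit 32-bit registration (visible only to 32-bit apps):
        C:\Windows\SysWOW64\regsvr32.exe MatroskaProp.dll

    If the extension only exists as a 32-bit build, no amount of re-registering will surface it in 64-bit Explorer; a 64-bit build or an alternative extension would be needed.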

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not, and different environments all have slightly different sudoers. Most of the time 90% of the users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now we are using Puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
            "staging1.acme.com" {
                # add dev1,dev2,tester1,tester2 to sudoers file
            }
            "testing2.acme.com" {
                # add tester1, tester3, tester4 to sudoers file
            }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose of this is that searching for a username in a single file, and removing it, is quicker than doing it across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chance of an omission. Our sudo version is 1.6.9p8, so we can't use a /sudoers.d folder, only a single sudoers file.
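
    A sketch of the single-template approach (module and file names are illustrative): one file resource renders one ERB template, and the template branches on $domain, so all environments live in one file on the puppetmaster while each client receives only its own rendered sudoers. Running visudo -c against a rendered copy before rollout is prudent.

        # in the manifest: one resource for every node
        file { '/etc/sudoers':
            owner   => 'root',
            group   => 'root',
            mode    => '0440',
            content => template('sudo/sudoers.erb'),
        }

        # templates/sudoers.erb
        User_Alias ADMINS = abe, bob, carol, dave
        ADMINS ALL=(ALL) ALL
        <% if domain == 'staging1.acme.com' -%>
        dev1,dev2,tester1,tester2 ALL=(ALL) ALL
        <% elsif domain == 'testing2.acme.com' -%>
        tester1,tester3,tester4 ALL=(ALL) ALL
        <% end -%>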

    Read the article

  • Remote Scripted Installation of Sun/Oracle JRE

    - by chrisbunney
    I'm attempting to automate the installation of a Debian server (Debian 6.0 squeeze, 64-bit). Part of the installation requires the Sun JRE package, which has a licence agreement that has to be accepted. I have a script which uses the following lines to accept the licence and install the JRE:

        echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
        apt-get install -y sun-java6-jre

    This works fine when executing the script locally. However, I need to execute the script remotely using the ssh command, e.g.:

        ssh -i keyFile root@hostname './myScript'

    This doesn't work. In particular, it fails on apt-get install -y sun-java6-jre. It would seem that in spite of me setting the licence agreement to accepted, when run remotely in this manner it is ignored. Despite setting the value to true, I still get prompted to manually accept the agreement when I run this command:

        ssh -i keyFile root@hostname 'apt-get install -y sun-java6-jre'

    I suspect it is something to do with environment that is taken care of when running a proper terminal session, but I have no idea what to try next to fix it. So, what do I have to do to get this command (and hence my deployment script) to run correctly when executing it remotely? Or is there an alternative way to install the JRE remotely?

    Edit 0: I have compared the output of env when executed remotely via ssh and when executed via a local terminal session. The only difference between the outputs is that the local terminal session has the additional value TERM=xterm.
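
    Two things commonly fix this (a sketch; not tested against squeeze specifically): force debconf into non-interactive mode so the licence question is answered from the database instead of being re-asked, and pre-seed the question for both the -bin and -jre packages:

        ssh -i keyFile root@hostname '
            echo "sun-java6-bin shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
            echo "sun-java6-jre shared/accepted-sun-dlj-v1-1 boolean true" | debconf-set-selections
            DEBIAN_FRONTEND=noninteractive apt-get install -y sun-java6-jre
        '

    Alternatively, ssh -t allocates a pseudo-terminal, which restores the TERM difference noted in Edit 0 and lets the interactive prompt at least display correctly.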

    Read the article
