Search Results

Search found 50475 results on 2019 pages for 'rpc over http'.


  • Hyper-V Virtual Machine Networking issues related to Max Ethernet Frame Size

    - by Goatmale
    I fixed an issue earlier today, but I'm interested in learning WHY the fix worked. We set up a new Hyper-V virtual machine only to discover that HTTP traffic wasn't working. HTTPS, pings, everything else was working fine. After months of prodding around, I took a shot in the dark: on the Hyper-V host server, the physical NIC had an advanced setting, "Max Ethernet Frame Size", set to 1500. After changing this setting to 1514, the issue was fixed. Setting it to 1512 did not solve the issue; 1514 is the magic number. My best guess is that with the setting at 1500, incoming pings were allowed because their data payload is a lot smaller than that of, say, HTTP traffic. As for HTTPS, I read about something called "Path MTU discovery", which I'm going to assume is why HTTPS traffic was getting through fine, albeit more slowly. Looking at this post, people agree that 1518 is the maximum total frame size. Why didn't I need to change this to 1518 instead of 1514 bytes? And why is the default frame size 1500, if that's the maximum size of the Ethernet payload and not of the whole frame?
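
    The arithmetic behind those numbers: a 1500-byte payload plus a 14-byte Ethernet header (6-byte destination MAC + 6-byte source MAC + 2-byte EtherType) is 1514 bytes, and the 4-byte frame check sequence the NIC appends on the wire brings the total to 1518. A quick, hedged way to probe what a path actually passes is ping with the don't-fragment flag; this sketch assumes a Windows host, and the target address is a placeholder:

        rem 1472 bytes of ICMP payload + 8-byte ICMP header + 20-byte IP header = 1500,
        rem exactly one full Ethernet payload; -f forbids fragmentation.
        ping -f -l 1472 192.168.1.1

        rem If this succeeds but -l 1473 fails with "Packet needs to be fragmented
        rem but DF set", the usable path MTU is 1500.
        ping -f -l 1473 192.168.1.1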

    Read the article

  • NetApp NDMP backup with BE 2010 R2 works, restore fails

    - by uuwe
    I'm having some issues with a new Backup Exec 2010 R2 installation. I configured a NetApp FAS2020 as an NDMP device and want to back up files from the NAS to a tape drive connected to my backup server. I set up ndmpd according to this document (http://www.symantec.com/business/support/index?page=content&id=TECH48957) and created a separate backup user (http://filers.blogspot.com/2006/09/setting-veritas-netbackup-with-non.html). Backup works perfectly, but restoring any file gives me an authentication failed error. The NDMP device has a "global" NDMP user configured in the device tab (tried this with the newly created ndmpd backup user and with the NetApp root), and I can also configure separate resource credentials in the BE restore job. I have tried setting the same accounts for the "global" NDMP device and the restore credentials, and have also tried setting different accounts for them. NDMP debug level is at 5, and this is what shows up in /etc/messages; the session is closed immediately after it has been granted:

        16:12:07 PST [Java_Thread:info]: ndmpdserver: ndmpd.access allowed for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000
        16:12:07 PST [Java_Thread:info]: Ndmpd51: ndmpd session closed successfully for version = 4, sessionId = 51, from src ip = 192.168.11.17, dst ip = FAS2020-1/192.168.11.75, src port = 50857, dst port = 10000

    Running Wireshark on the backup server doesn't produce much: it shows a SYN - SYN/ACK - NDMP CONNECT_CLOSE request from the backup server. The resource credentials for the restore job behave very oddly. If I enter NDMP credentials and do "Test All", it fails; if I use my regular domain backup account, it succeeds. There are no failed or successful logons in the NetApp NDMP log, and tracing this check shows that it doesn't even connect to the NAS. This makes me think this is more likely flaky BE behaviour than misconfiguration of the NAS. Here is the options ndmp output:

        FAS2020-1> options ndmp
        ndmpd.access                 all
        ndmpd.authtype               challenge
        ndmpd.connectlog.enabled     on
        ndmpd.enable                 on
        ndmpd.ignore_ctime.enabled   off
        ndmpd.offset_map.enable      on
        ndmpd.password_length        16
        ndmpd.preferred_interface    disable
        ndmpd.tcpnodelay.enable      off
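
    One detail in that output worth chasing: ndmpd.authtype is challenge, and with challenge authentication on a 7-mode filer a non-root NDMP user logs in with an NDMP-specific password generated on the filer, not with its normal account password. A hedged sketch of checking this (Data ONTAP 7-mode command from memory; the username is a placeholder for the backup user created above):

        FAS2020-1> ndmpd password backupuser
        password XXXXXXXXXXXXXXXX

    The 16-character string it prints (note ndmpd.password_length 16 above) would be what Backup Exec should be given as the NDMP credential, for both the device entry and the restore job.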

    Read the article

  • How to detach a sql server 2008 database that is not in database list?

    - by Amir
    I installed SQL Server 2008 on Windows 7, then created a database. After two days I reinstalled Windows and SQL Server. Now I am trying to attach my database file, but I encounter the error below. I think the files are still marked as attached, and that is why I can't attach them. What is the difference between an attached file and a non-attached file? How can I attach this file? Please help me. Error text:

        TITLE: Microsoft SQL Server Management Studio
        Attach database failed for Server 'AMIR-PC'. (Microsoft.SqlServer.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600.1+((KJ_RTM).100402-1540+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Attach+database+Server&LinkId=20476
        ------------------------------
        ADDITIONAL INFORMATION:
        An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
        Unable to open the physical file "F:\Company.mdf". Operating system error 5: "5(Access is denied.)". (Microsoft SQL Server, Error: 5120)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.1600&EvtSrc=MSSQLServer&EvtID=5120&LinkId=20476
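
    Operating system error 5 is a Windows file-permission failure, not SQL Server metadata: after the reinstall, the new SQL Server service account has no rights on the old .mdf. A hedged sketch of one common fix, assuming a default instance (so the per-service SID "NT SERVICE\MSSQLSERVER") and the path from the error message; the log file name is a guess:

        rem grant the service account full control of the data file(s)
        icacls "F:\Company.mdf" /grant "NT SERVICE\MSSQLSERVER":F
        icacls "F:\Company_log.ldf" /grant "NT SERVICE\MSSQLSERVER":F

    If the log file is gone for good, attaching from T-SQL can rebuild it:

        CREATE DATABASE Company
            ON (FILENAME = 'F:\Company.mdf')
            FOR ATTACH_REBUILD_LOG;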

    Read the article

  • Why is .htaccess not allowed in a directory but is allowed in another?

    - by JD Isaacks
    I have Apache2 installed on Ubuntu 10.04. Inside my /var/www/ directory [among others] I have cakephp and dvdcatalog directories, each of which has CakePHP 1.3 installed. I can access them both via localhost/cakephp and localhost/dvdcatalog, but dvdcatalog shows up with no CSS styling. They both have these files:

        /var/www/cakephp/app/webroot/css/cake.generic.css
        /var/www/dvdcatalog/app/webroot/css/cake.generic.css

    When I go to http://localhost/cakephp/css/cake.generic.css it sees the file, but it does not see the file when I go to http://localhost/dvdcatalog/css/cake.generic.css. I think this means the cakephp folder is able to use .htaccess and dvdcatalog is not. I set up the cakephp directory last month when I was following the blog tutorial; I am setting up the dvdcatalog directory now for a different tutorial, so I am not sure if I am missing a step. In my /etc/apache2/apache2.conf file I have this:

        <Directory "/var/www/*">
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    which I thought gave .htaccess to all. Does anyone have any idea what the problem is? A first check is sketched below.
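
    Since the working and broken apps share the same Apache config, the usual culprit is the per-app .htaccess files: CakePHP 1.3 relies on three of them (in the app root, in app/, and in app/webroot/) to rewrite /css/... into the webroot, and they are easy to lose when copying files because they are dotfiles (a plain cp * skips them). A hedged check, assuming a stock Ubuntu layout:

        # mod_rewrite must be on for the .htaccess rules to do anything
        # (a no-op if it is already enabled)
        sudo a2enmod rewrite && sudo /etc/init.d/apache2 restart

        # all three .htaccess files should exist in the broken app
        ls /var/www/dvdcatalog/.htaccess /var/www/dvdcatalog/app/.htaccess /var/www/dvdcatalog/app/webroot/.htaccess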

    Read the article

  • GnuPG Command Line - Verifying KeePass Signature

    - by Stisfa
    I'm trying to verify the PGP signature of the latest version of KeePass 2.14's setup file against its signature, but this is the output I receive:

        C:\Program Files (x86)\GNU\GnuPG>gpg.exe --verify C:\Users\User\Desktop\KeePass-2.14-Setup.exe
        gpg: no valid OpenPGP data found.
        gpg: the signature could not be verified.
        Please remember that the signature file (.sig or .asc) should be the first file given on the command line.

    I found this command here, but it made no mention of ".sig" or ".asc" files, so I figured I did something wrong. After reading http://www.gnupg.org/documentation/manuals/gnupg/gpgv.html#gpgv, I further tried the following:

        C:\Program Files (x86)\GNU\GnuPG>gpg.exe --pgpfile C:\Users\User\Desktop\KeePass-2.14-Setup.exe
        gpg: Invalid option "--pgpfile"

    As you can see, the results are quite confusing. I took a look at this question on SuperUser (http://superuser.com/questions/16160/short-easy-to-understand-explanation-of-gpg-pgp-for-nontechnical-people - I couldn't use "a href" due to the built-in spam filter that discriminates against users with < 10 rep; this is the same reason for the link above this one), but none of the links seemed to really address my question, at least not directly enough for me to get any idea of how to move forward. Can anybody here help me with the esoteric technicality of OpenPGP and the associated use of the GnuPG program? I've felt pretty dumb learning VBS, but this is beyond humiliating: it's absolutely debilitating and maiming whatever confidence I had in my IT skills (then again, I have no justification for boasting either, as I have yet to get my A+ cert, lol).
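
    The first error message actually contains the answer: gpg --verify expects the detached signature file first and the signed file second. A hedged sketch, assuming the .asc signature published next to the KeePass installer has been downloaded into the same folder and the author's public key has been imported (the key file name here is a placeholder, not KeePass's actual file name):

        rem one-time: import the signer's public key
        gpg.exe --import keepass-public-key.asc

        rem verify: signature file first, then the installer
        gpg.exe --verify KeePass-2.14-Setup.exe.asc KeePass-2.14-Setup.exe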

    Read the article

  • Apache directive for authenticated users?

    - by Alex Leach
    Using Apache 2.2, I would like to use mod_rewrite to redirect unauthenticated users to https if they are on http. Is there a directive or condition one can test for whether a user is (not) authenticated? For example, I could have set up the restricted /foo location on my server:

        <Location "/foo/">
            Order deny,allow
            # Deny everyone, until authenticated...
            Deny from all
            # Authentication mechanism
            AuthType Basic
            AuthName "Members only"
            # AuthBasicProvider ...
            # ... Other authentication stuff here.
            # Users must be valid.
            Require valid-user
            # Logged-in users authorised to view child URLs:
            Satisfy any
            # If not SSL, respond with HTTP-redirect
            RewriteCond %{HTTPS} off
            RewriteRule /foo/?(.*)$ https://%{SERVER_NAME}/foo/$1 [R=301,L]
            # SSL enforcement.
            SSLOptions FakeBasicAuth StrictRequire
            SSLRequireSSL
            SSLRequire %{SSL_CIPHER_USEKEYSIZE} >= 128
        </Location>

    The problem here is that every file, in every subfolder, will be encrypted. This is quite unnecessary, but I see no reason to disallow it. What I would like is for the RewriteRule to be triggered only during authentication; if a user is already authorised to view a folder, I don't want the RewriteRule to fire. Is this possible? EDIT: I am not using any front-end HTML here. This is only Apache's built-in directory browsing interface and its built-in authentication mechanisms. My <Directory> config is:

        <Directory ~ "/foo/">
            Order allow,deny
            Allow from all
            AllowOverride None
            Options +Indexes +FollowSymLinks +Includes +MultiViews
            IndexOptions +FancyIndexing
            IndexOptions +XHTML
            IndexOptions NameWidth=*
            IndexOptions +TrackModified
            IndexOptions +SuppressHTMLPreamble
            IndexOptions +FoldersFirst
            IndexOptions +IgnoreCase
            IndexOptions Type=text/html
        </Directory>
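
    A note on why the "only during authentication" idea is hard: HTTP Basic authentication is stateless, so the browser re-sends credentials and Apache re-checks them on every single request; there is no server-side "already logged in" state a RewriteCond could test. A common pattern instead, sketched here under the assumption that the site has separate :80 and :443 vhosts, is to bounce the whole protected path to HTTPS before authentication ever happens, keeping the auth directives only on the SSL side:

        <VirtualHost *:80>
            # No auth on plain HTTP; anything under /foo/ is redirected first.
            RewriteEngine On
            RewriteRule ^/foo/?(.*)$ https://%{SERVER_NAME}/foo/$1 [R=301,L]
        </VirtualHost>

    Credentials then only ever travel over TLS, which is usually the real goal of redirecting unauthenticated users.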

    Read the article

  • Apache times out after 2 minutes, when Apache TimeOut is 120

    - by Robert Gowland
    We have a setup where the browser makes an HTTP request to Box A, which in turn makes an HTTP request to Box B. What we're encountering is that Box A waits for Box B to respond for two minutes, then the user sees:

        The page cannot be displayed
        Explanation: There is a problem with the page you are trying to reach and it cannot be displayed.
        Try the following:
        * Refresh page: Search for the page again by clicking the Refresh button. The timeout may have occurred due to Internet congestion.
        * Check spelling: Check that you typed the Web page address correctly. The address may have been mistyped.
        * Access from a link: If there is a link to the page you are looking for, try accessing the page from that link.
        Technical Information (for support personnel)
        * Error Code: 404 Not Found. The requested item could not be located. (12028)

    Watching the logs on Box B, we see that it takes 5 minutes to do the work requested. The problem is that the Apache TimeOuts on both boxes are set to 1200 (20m), not 120 (2m). Any ideas where to look?
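
    If Box A forwards the request with mod_proxy, the wait on the backend is governed by ProxyTimeout (or a per-worker timeout= parameter on ProxyPass), not only by the global Timeout, and anything in between (a hardware load balancer, a firewall with idle-connection limits) can cut the connection sooner than either Apache. A hedged sketch of what the Box A config would need, with a placeholder backend URL:

        Timeout 1200
        ProxyTimeout 1200
        ProxyPass /app http://boxb.example.com/app timeout=1200

    If both Apache configs really do say 1200 everywhere, the two-minute cutoff is a strong hint that some intermediate device owns that 120-second limit.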

    Read the article

  • Nginx proxy domain to another domain with no change URL

    - by Evgeniy
    My question is in the subject. I have one domain; here is its nginx config:

        server {
            listen 80;
            server_name connect3.domain.ru www.connect3.domain.ru;
            access_log /var/log/nginx/connect3.domain.ru.access.log;
            error_log /var/log/nginx/connect3.domain.ru.error.log;
            root /home/httpd/vhosts/html;
            index index.html index.htm index.php;

            location ~* \.(avi|bin|bmp|css|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|js|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|pps|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
                root /home/httpd/vhosts/html;
                access_log off;
                expires 1d;
            }

            location ~ /\.(git|ht|svn) {
                deny all;
            }

            location / {
                #rewrite ^ http://connect2.domain.ru/;
                proxy_pass http://127.0.0.1:8080/;
                proxy_redirect off;
                proxy_hide_header "Cache-Control";
                add_header Cache-Control "no-store, no-cache, must-revalidate, post-check=0, pre-check=0";
                proxy_hide_header "Pragma";
                add_header Pragma "no-cache";
                expires -1;
                add_header Last-Modified $sent_http_Expires;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    I need to proxy the connect3.domain.ru host to connect2.domain.ru, but with no URL change in the browser's address bar. My commented-out rewrite line would fetch the other host, but it is just a redirect, so I cannot stay on the same URL. I know this question is easy, but please help. Thank you.
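
    Proxying, unlike a rewrite to an absolute URL, never tells the browser to go elsewhere, so the address bar keeps showing connect3.domain.ru. A hedged sketch of the location / change (it assumes connect2.domain.ru is reachable over plain HTTP from this server and that its vhost selects on the Host header):

        location / {
            proxy_pass http://connect2.domain.ru;
            # send the Host the upstream vhost expects, not $host (connect3)
            proxy_set_header Host connect2.domain.ru;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

    One caveat: this stays transparent only while the upstream emits relative links; absolute links to connect2 in the returned HTML will still carry users away.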

    Read the article

  • Apache httpd LDAP integration

    - by David W.
    I am configuring a CollabNet Subversion integration. I have the following collabnet_subversion.conf file:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=vegibanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
        </Location>

    This works great: any user in our Active Directory can access our Subversion repository. Now I want to limit this to only people in the Active Directory group Development:

        <Location /svn>
            DAV svn
            SVNParentPath /mnt/svn/new_repos
            SVNListParentPath on
            AuthName "VegiBanc Source Repository"
            AuthType basic
            AuthzLDAPAuthoritative off
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://ldap.vegibanc.com/dc=vegibanc,dc=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=SVN-Admin,OU=Service Accounts,OU=VegiBanc Users,OU=VegiBanc,DC=vegibanc,DC=com"
            AuthLDAPBindPassword "swordfish"
            Require ldap-group CN=Development OU=Security Groups OU=VegiBanc, dc=vegibanc, dc=com
        </Location>

    I added Require ldap-group, but now no one can log in. I have LogLevel set to debug, but all I get is this in my error_log (single line broken up for easier reading):

        [Thu Oct 11 13:09:28 2012] [info] [client 10.55.9.45] [6752] vauth_ldap authenticate:
        user dweintraub authentication failed; URI /svn/
        [ldap_search_ext_s() for user failed][Bad search filter]

    And I get this in my access_log:

        10.55.9.45 - - [11/Oct/2012:13:09:27 -0500] "GET /svn/ HTTP/1.1" 401 401
        10.55.9.45 - dweintraub [11/Oct/2012:13:09:28 -0500] "GET /svn/ HTTP/1.1" 500 535

    Yes, I am in that group. (Or, at least, how can I confirm that, just to make sure it's not the issue? I have the Sysinternals ADExplorer; it's where I'm getting all of my info.)
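
    The "[Bad search filter]" in the log points at the Require line itself: the DN's components are separated by spaces rather than commas, so the LDAP provider cannot parse it as a DN. A hedged correction, assuming the group really does sit under OU=Security Groups:

        Require ldap-group CN=Development,OU=Security Groups,OU=VegiBanc,DC=vegibanc,DC=com

    As for confirming membership, ADExplorer can show it: opening the CN=Development group and checking its member attribute (or the user's memberOf attribute) should list the user's full DN.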

    Read the article

  • My email server is being blocked by Yahoo: TS03 Message permanently deferred.

    - by bilygates
    Hello, My mail server has been getting the following error from Yahoo's mail servers since about a month: postfix/smtp[23791]: host g.mx.mail.yahoo.com[98.137.54.238] refused to talk to me: 421 4.7.1 [TS03] All messages from [my ip] will be permanently deferred; Retrying will NOT succeed. See http:// postmaster.yahoo.com/421-ts03.html I have exchanged about 4 emails with Yahoo's support team. The first three seemed like automated messages, and the 4th told me that there is nothing they can do, but if I change my policies I can send them another email in 6 months. They also told me: However, based on the information you have provided us, we cannot systematically deliver your email to the Inbox at this time. We suggest that you ask your users to set up a filter in Yahoo! Mail to ensure that they get your email messages in their Inbox. The problem is that my email doesn't even get to their Spam folder. The server won't allow any connections. I have never sent spam messages, not even newsletters. I only send emails for my new users so they can activate their account. I've also implemented DKIM and told Yahoo about this. I have checked my configuration with http://www.myiptest.com/staticpages/index.php/DomainKeys-DKIM-SPF-Validator-test and it reports that both SPF and DKIM are set up correctly. What should I do? Basically, I'm losing new users every day. Any help will be appreciated. P.S.: I apologize if this particular question has already been asked. I searched for it but didn't find it.
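
    Two things the big receivers weigh heavily are worth double-checking from the outside, since they are independent of DKIM: that the sending IP has matching forward and reverse DNS, and that the SPF record the world sees actually covers that IP. A hedged check with placeholder values (203.0.113.25 standing in for the sending IP, example.com for the domain):

        dig +short -x 203.0.113.25     # PTR: should name a host that resolves back to this IP
        dig +short TXT example.com     # should include v=spf1 ... ip4:203.0.113.25 ...

    If those are clean, the remaining levers are usually volume-related (gradual warm-up of the IP) or a delisting request through Yahoo's bulk-sender channels rather than regular support.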

    Read the article

  • mod_deflate doesn't work [closed]

    - by kikio
    I want to gzip my static files, so I put this in .htaccess:

        <IfModule mod_deflate.c>
            AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </IfModule>

    I looked for mod_deflate in the Loaded Modules section of the phpinfo() output, and found it. But when I track server responses with Firebug, no gzipped file can be found:

        HTTP/1.1 200 OK
        Date: Sat, 08 Sep 2012 21:41:21 GMT
        Last-Modified: Sat, 08 Sep 2012 21:26:04 GMT
        Accept-Ranges: bytes
        Cache-Control: max-age=604800
        Expires: Sat, 15 Sep 2012 21:41:21 GMT
        Vary: Accept-Encoding
        Keep-Alive: timeout=3, max=50
        Connection: Keep-Alive
        Content-Type: text/css
        Content-Length: 18206

    What's the problem? I'm sure I have mod_deflate enabled (according to PHP's apache_get_modules()). UPDATE: the request headers:

        GET /d/jquery-ui.css HTTP/1.1
        Host: 127.0.0.1
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip, deflate
        DNT: 1
        Connection: keep-alive
        Pragma: no-cache
        Cache-Control: no-cache
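
    The response has Vary: Accept-Encoding but no Content-Encoding: gzip, so the filter never ran. Two hedged things to check: directives like AddOutputFilterByType in .htaccess are silently ignored unless the directory's AllowOverride includes FileInfo, and browser caching can mask a fix, so testing from the command line is cleaner (URL taken from the question):

        curl -s -D - -o /dev/null -H "Accept-Encoding: gzip,deflate" http://127.0.0.1/d/jquery-ui.css

    If the fix works, the dumped headers should show Content-Encoding: gzip and a transfer much smaller than 18206 bytes.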

    Read the article

  • How to get my W2003-server (back) into the web (after setting up bridged networking)

    - by MBaas
    I recently set up VirtualBox on a Windows 2003 server (which is also used as a webserver, accessed from the web). My VBox worked nicely, but then I wanted more: I wanted the VM to appear in the intranet like any ordinary PC. I was advised to set up bridged networking as opposed to NAT. I did so, and in the server's network connections I bridged the LAN connection and the "VirtualBox Host-Only Network" (yes, it says "host-only network", but I assure you that VBox networking is configured to use a network bridge). So now my VM is visible in the intranet and has web access, and the server can also access the web. The only problem is that the server is no longer accessible from the web. I've traced an HTTP request and it says "Can't connect to *:80 (connect: No route to host)". So maybe something in the router's config needs to be adjusted (yeah, well, the server's IP address changed from 192.168.1.199 to ...198). I went into the router config, reviewed port forwarding for port 80 and adjusted the IP there, but it still didn't work. Unsure whether it was a router problem or something in the server's config, I set up a "demilitarized zone" in the router and put the server into it. (My understanding is that this would put the server straight onto the web...) But the result of the HTTP requests is still the same :(

    Read the article

  • IIS 7.5 error 500 in fastcgi module after upgrading wordpress to 3.0.2

    - by Maniac13
    I am running multiple WordPress blogs on the following setup: Server 2008 R2; IIS 7.5; PHP 5.3.3; MySQL 5.0.7. I upgraded my WordPress install from 2.9.2 to 3.0.2 (on 2 different sites) today and the upgrade went fine. I can serve .php pages without errors, log into the admin system, etc. I can browse my blog by going directly to mywebsite.com/index.php, but when I try to go to mywebsite.com (without the index.php) I get the 500 error below. I reset IIS and removed and re-added the default document, but I am running out of ideas. If anyone has a solution for this, that would be great. This is the 500 error I am getting:

        Error Summary
        HTTP Error 500.0 - Internal Server Error
        The page cannot be displayed because an internal server error has occurred.

        Detailed Error Information
        Module          FastCgiModule
        Notification    ExecuteRequestHandler
        Handler         PHP FastCGI
        Error Code      0x00000000
        Requested URL   http://mywebsite.com:80/index.php
        Physical Path   D:\mywebsite.com\index.php
        Logon Method    Anonymous
        Logon User      Anonymous
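
    Since /index.php renders but / does not, the default-document path through IIS is the part to poke at. A hedged way to re-create the mapping from the command line rather than the GUI (the site name is a placeholder for whatever the site is called in IIS Manager):

        %windir%\system32\inetsrv\appcmd set config "mywebsite.com" /section:defaultDocument /+"files.[value='index.php']"

    If appcmd reports the entry already exists, that is itself informative: it would mean the 500 comes from how FastCGI handles the internally re-mapped request rather than from a missing default document.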

    Read the article

  • Can't Install msysgit/tortoisegit

    - by Jay
    I ran msysGit-netinstall-1.7.0.2-preview20100407-2.exe (http://code.google.com/p/msysgit/downloads/list). Then I ran TortoiseGit-1.4.4.0-64bit.msi (http://code.google.com/p/tortoisegit/downloads/list). msysGit was installed in C:\; TortoiseGit appears to have been installed in C:\Program Files\TortoiseGit. I have "Git Clone...", "Git Create repository here" and "TortoiseGit" in the Explorer context menu. When I try to clone, I get "git have not installed" [sic]. I have tried setting the MSysGit path in the TortoiseGit settings to everything imaginable; nothing works. Neither C:\Program Files nor C:\Program Files (x86) has a Git folder. The git command gives "command not found" from both cmd.exe and the bash that msysGit installed. I don't see msysGit in Control Panel - Programs - Program Features, but I do see TortoiseGit there. I would like a procedure for verifying that msysGit is properly installed (a procedure for uninstalling msysGit would be an added bonus), and a procedure for getting TortoiseGit to work. I am running Windows 7 on a MacBook Pro.

    Read the article

  • Using secure proxies with Google Chrome

    - by cYrus
    Whenever I use a secure proxy with Google Chrome I get ERR_PROXY_CERTIFICATE_INVALID. I have tried a lot of different scenarios and versions. The certificate: I'm using a self-signed certificate:

        openssl genrsa -out key.pem 1024
        openssl req -new -key key.pem -out request.pem
        openssl x509 -req -days 30 -in request.pem -signkey key.pem -out certificate.pem

    Note: this certificate works (with a warning, since it's self-signed) when I use it for a simple HTTPS server. The proxy: I then start a secure proxy on localhost:8080. There are several ways to accomplish this; I tried: a custom Node.js script; stunnel; node-spdyproxy (OK, this involves SPDY too, but later... the problem is the same); [...]. The browser: I then run Google Chrome with:

        google-chrome --proxy-server=https://localhost:8080 http://superuser.com

    to load, say, http://superuser.com. The issue: all I get is:

        Error 136 (net::ERR_PROXY_CERTIFICATE_INVALID): Unknown error.

    in the window, and something like:

        [13633:13639:1017/182333:ERROR:cert_verify_proc_nss.cc(790)] CERT_PKIXVerifyCert for localhost failed err=-8179

    in the console. Note: this is not the big red warning that complains about insecure certificates. Now, I have to admit I'm quite a n00b where certificates are concerned; if I'm missing some fundamental points, please let me know.
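
    For an HTTPS proxy, Chrome validates the proxy's certificate silently against the NSS trust store and offers no click-through warning, which is why this fails harder than a self-signed web server does (err=-8179 is NSS's unknown-issuer error). A hedged fix on Linux, assuming certutil from libnss3-tools and that the certificate's CN is "localhost" (it must match the name given to --proxy-server):

        # trust the self-signed cert for TLS server auth in Chrome's per-user store
        certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "local-proxy" -i certificate.pem

    After restarting Chrome, the proxy certificate should validate and ERR_PROXY_CERTIFICATE_INVALID should disappear.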

    Read the article

  • OpenSSL: how to setup an OCSP server for checking third-party certificates?

    - by StackedCrooked
    I am testing the certificate revocation functionality of a CMTS device. This requires me to set up an OCSP responder, and since it will only be used for testing, I assume the minimal implementation provided by OpenSSL should suffice. I extracted a certificate from a cable modem, copied it to my PC and converted it to the PEM format. Now I want to register it in the OpenSSL OCSP database and start a server. I have completed all these steps, but when I do a client request, my server invariably responds with "unknown"; it seems to be completely unaware of my certificate's existence. I would greatly appreciate it if anyone would be willing to have a look at my code. For your convenience, I have created a single script consisting of a sequential list of all the commands used, from setting up the CA to starting the server: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/AllCommands.sh. You can also find the custom config file and the certificate that I am testing with: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/. Any help would be greatly appreciated.
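
    Relevant background for the "unknown" response: OpenSSL's minimal responder answers purely from the CA's issuance database (index.txt), so a certificate that was never recorded there, such as one extracted from a third-party device, always comes back "unknown" rather than "good" or "revoked". A hedged sketch of the pattern, using the openssl demo-CA default paths:

        # responder, backed by the CA database that must contain the cert's serial
        openssl ocsp -port 8888 -index demoCA/index.txt -CA demoCA/cacert.pem \
            -rsigner demoCA/cacert.pem -rkey demoCA/private/cakey.pem -text

        # client query against it
        openssl ocsp -issuer demoCA/cacert.pem -cert modem.pem \
            -url http://127.0.0.1:8888 -resp_text

    For the modem's certificate to get a definitive answer, it would have to be issued (or at least registered) by the test CA, so that its serial number appears as a V (valid) or R (revoked) row in index.txt.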

    Read the article

  • Flushing iptables broke my pipe, how can I save my instance?

    - by Niels
    I was setting up my iptables rules when I ran iptables -F and my SSH pipe broke. This is the last output of my session:

        root@alfapaints:~# iptables -L
        Chain INPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere      state NEW,ESTABLISHED tcp dpt:2222
        ACCEPT     tcp  --  li465-68.members.linode.com  anywhere      state NEW,ESTABLISHED tcp dpt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere      tcp dpt:9200 state NEW,ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere      tcp dpt:http state NEW,ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere      udp spt:domain

        Chain FORWARD (policy DROP)
        target     prot opt source                       destination

        Chain OUTPUT (policy DROP)
        target     prot opt source                       destination
        ACCEPT     all  --  anywhere                     anywhere
        ACCEPT     tcp  --  anywhere                     anywhere      state ESTABLISHED tcp spt:2222
        ACCEPT     tcp  --  anywhere                     anywhere      state ESTABLISHED tcp spt:nrpe
        ACCEPT     tcp  --  anywhere                     anywhere      tcp spt:9200 state ESTABLISHED
        ACCEPT     tcp  --  anywhere                     anywhere      tcp spt:http state ESTABLISHED
        ACCEPT     udp  --  anywhere                     anywhere      udp dpt:domain

        root@alfapaints:~# iptables -F
        Write failed: Broken pipe

    I tested my connection just before, and I was able to connect with SSH. Now I did an nmap scan and not a single port is open anymore. I know my VPS is running on VMware ESXi: could a reboot help? If not, could I attach and mount the disk on another VM to save the data? Does anybody have some advice? And maybe an explanation of what happened, or of what could have caused my pipe to break? PS: I didn't save my rules in iptables' config directories, but used a file stored in ~/rules.config to apply my rules, like this: iptables-restore < rules.config. So probably a reboot would help? Thanks a lot in advance.
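
    What happened is visible in the listing: all three chains have policy DROP, and iptables -F deletes the ACCEPT rules but leaves the policies in place, so the instant the flush ran, every packet, including those of the live SSH session, was dropped. Because the rules were only ever loaded by hand from ~/rules.config and nothing restores them at boot, a reboot (from the ESXi console) should bring the VPS back with empty chains and the kernel's default ACCEPT policies, so no disk rescue via another VM should be needed. A hedged sketch of the safe order for next time:

        # make the policies permissive BEFORE flushing, so the SSH session survives
        iptables -P INPUT ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -F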

    Read the article

  • Windows XP laptop doesn't appear in WSUS All computers list

    - by George
    I have one laptop that doesn't appear in the WSUS "All computers" list. We have about 23-25 PCs/laptops/servers in the network, and all but this one are listed in WSUS. This is what I have done so far:

    1) Changed update settings on the local PC: go to the Windows XP client and start a new Microsoft Management Console (at Start, Run, type MMC). Use Ctrl+M to add a new snap-in. Click Add, and then add the Group Policy Object Editor for the Local Computer. Click Close, and then click OK. Expand the Local Computer Policy. Under Computer Configuration, go to Administrative Templates, Windows Components, Windows Update. In the right-hand pane, double-click "Specify intranet Microsoft update service location". Configure the settings to reflect my WSUS server. Click OK and then close the MMC without saving it. Executed wuauclt.exe /detectnow.

    2) Edited the registry key pushed to the PCs through GPO:

        [HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
        "WUServer"=http://wsusserver
        "TargetGroupEnabled"=dword:00000001
        "TargetGroup"="WINXP"
        "WUStatusServer"=http://wsuswerver

    3) Executed wuauclt /resetauthorization /detectnow.

    4) Synchronised and refreshed the group.

    I am running out of ideas here. The client is running Windows XP Pro; the WSUS version is 3.0, running on Windows 2008 R2 64-bit. Please help! Thanks! EDIT 13.IX.2012 @ 15.40: I should also have mentioned that we have a Windows Update GPO for the workstations group, and that laptop is a part of that group.
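
    Two hedged leads. First, in the pasted key, WUStatusServer points at "http://wsuswerver" while WUServer points at "http://wsusserver"; if that spelling difference exists in the real GPO and not just in this post, the client can detect updates but never report status, which keeps it out of the console. Second, the classic cause when exactly one machine never registers is a cloned or imaged Windows install sharing a SusClientId with another PC, so WSUS folds the two machines into one entry. The well-known reset sequence for that, run on the missing laptop (registry path is the standard one; values that don't exist are harmlessly skipped):

        net stop wuauserv
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v SusClientId /f
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v PingID /f
        reg delete "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate" /v AccountDomainSid /f
        net start wuauserv
        wuauclt /resetauthorization /detectnow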

    Read the article

  • litespeed issue with content-type

    - by sandeep.s85
    I am running Magento with LiteSpeed. The problem I am facing is that an AJAX call is made whose response header should be set to x-json, but LiteSpeed is setting another header, with content type text/html. I've checked that page with Apache and everything works fine. I compared the response headers under Apache and LiteSpeed; here they are. With Apache:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 05:58:47 GMT
        Server: Apache
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 06:58:48 GMT; path=/; domain=mydomain.com; httponly
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: application/x-json

    With LiteSpeed:

        HTTP/1.1 200 OK
        Date: Fri, 07 Sep 2012 06:10:55 GMT
        Server: LiteSpeed
        Connection: close
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: frontend=164b21c64808a05e806027bdbd4d745d; expires=Fri, 07-Sep-2012 07:10:55 GMT; path=/; domain=mydomain.com; httponly
        Content-Type: text/html; charset=UTF-8
        Content-Length: 474
        Vary: User-Agent

    I've also added application/json to LiteSpeed's mime.properties and restarted it, but that did not work. Here is the screenshot.

    Read the article

  • GMail and Yahoo Mail servers not accepting mails from my slicehost slice

    - by Lakshmanan
    Hi, I have a Rails app on one of my slices at Slicehost. I've set up Postfix (sendmail) to send emails from the app. All emails to a Google Apps domain (to the company's Google-hosted paid email addresses) are delivered properly (but to the spam folder), while all emails to [email protected], [email protected], ...@hotmail.com are not delivered at all. This is the line from my /var/log/mail.log for Yahoo:

        Dec 21 17:33:56 staging postfix/smtp[32295]: 5EB4810545B: to=<[email protected]>,
        relay=j.mx.mail.yahoo.com[66.94.237.64]:25, delay=1.6, delays=0.02/0.01/1.5/0, dsn=4.0.0,
        status=deferred (host j.mx.mail.yahoo.com[66.94.237.64] refused to talk to me:
        553 Mail from 173.203.201.186 not allowed - 5.7.1 [BL21] Connections not accepted
        from IP addresses on Spamhaus PBL; see http://postmaster.yahoo.com/errors/550-bl21.html [550])

    and this is what I got for Gmail:

        Dec 21 17:29:17 staging postfix/smtp[32216]: 0FA3310545B: to=<[email protected]>,
        relay=gmail-smtp-in.l.google.com[74.125.65.27]:25, delay=0.59, delays=0.02/0.01/0.09/0.47,
        dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[74.125.65.27] said:
        550-5.7.1 [173.203.201.186] The IP you're using to send mail is not authorized
        550-5.7.1 to send email directly to our servers. Please use the SMTP relay at
        550-5.7.1 your service provider instead. Learn more at
        550 5.7.1 http://mail.google.com/support/bin/answer.py?answer=10336 v49si11176750yhc.16
        (in reply to end of DATA command))

    Please help. I have very little knowledge about setting up DNS, servers and such.
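
    Both rejections say the same thing: the slice's IP sits in the Spamhaus PBL, a list of address ranges whose owners don't expect direct-to-MX mail from them, so message content is beside the point. The two standard ways out are to get the IP removed from the PBL (together with proper reverse DNS) or to relay through a smarthost, as Google's bounce itself suggests. A hedged sketch of the relay route in /etc/postfix/main.cf, with a placeholder relay host and standard SASL settings:

        relayhost = [smtp.example-relay.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = may

    followed by postmap /etc/postfix/sasl_passwd and postfix reload once the credentials file is in place.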

    Read the article

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two reviews:

        http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html
        http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html

    The two points against the Drobo for me are loudness and price. What disadvantages does unRAID have against the Drobo FS? Does it also have that ease of use: swapping drives on the go, extending capacity simply by plugging in new drives, notifying me of drive errors, disk failure protection, dynamic "partition" sizes, better/worse effective capacity, and so on? Which is more secure? Am I able to simply replace a bad drive with a new one on unRAID? What happens if my PC fails, say the CPU overheats? Since I have a complete PC that is going to be replaced, I would only have to pay for the software to use unRAID. I am going to use the NAS for: a music library (how well does it integrate with iTunes?), a picture library, a movie library, and development (I need to be able to use Time Machine). I am going to use this NAS with a MacBook Pro. My current disks: 2x 500 GB, 1x 1.5 TB, 1x 2 TB. On a Drobo FS I would have 2.26 TB of space; what would it be on unRAID? Is FreeNAS also an alternative?

    Read the article

  • apache2: ssl_error_rx_record_too_long when visiting port 80?

    - by John
    Hi, I have an Ubuntu 10 x64 Server Edition machine. I got a second IP and configured /etc/network/interfaces like so (actual IPs and gateways removed):

        auto lo
        iface lo inet loopback

        #iface eth0 inet dhcp
        auto eth0
        auto eth0:0

        iface eth0 inet static
            address [ my first IP ]
            netmask 255.255.255.0
            gateway [ my first gateway ]

        iface eth0:0 inet static
            address [ my second IP ]
            netmask 255.255.255.0
            gateway [ my second gateway ]

    /etc/apache2/ports.conf:

        Listen 80
        NameVirtualHost [ my first IP ]:80
        NameVirtualHost [ my second IP ]:80

        <IfModule mod_ssl.c>
            # If you add NameVirtualHost *:443 here, you will also have to change
            # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
            # to <VirtualHost *:443>
            # Server Name Indication for SSL named virtual hosts is currently not
            # supported by MSIE on Windows XP.
            Listen 443
            NameVirtualHost [ my first IP - some site is running SSL successfully using it ]:443
        </IfModule>

        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    /etc/apache2/sites-enabled/mysite.conf:

        <VirtualHost [ my second IP ]:80>
            ServerName mysite.com
            Include /var/www/mysite.com/djangoproject/apache/django.conf
        </VirtualHost>

    Then, when visiting http[mysite].com:80 or http[mysite].com (:// removed because serverfault doesn't allow me to post hyperlinks), I get:

        An error occurred during a connection to [mysite].com.
        SSL received a record that exceeded the maximum permissible length.
        (Error code: ssl_error_rx_record_too_long)

    My guess is that the configuration file is not being picked up, and Apache is therefore looking for the default-ssl file, which is not in conf-enabled. If I were to configure that file properly, it seems I would successfully connect to whatever default directory is specified in the default-ssl file. But I want to connect to my website. Any ideas? Thanks in advance!

    Read the article

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing:

        wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz
        tar xzvf mod_wsgi-2.5.tar.gz
        cd mod_wsgi-2.5
        ./configure --with-python=/opt/python2.5/bin/python

    After I run the above, I get this error:

        checking for apxs2... no
        checking for apxs... no
        checking Apache version... ./configure: line 1298: apxs: command not found
        ./configure: line 1298: apxs: command not found
        ./configure: line 1299: /: is a directory
        ./configure: line 1461: apxs: command not found
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: error: cannot find input file: Makefile.in

    Through some research I've discovered that I need to modify my command:

        ./configure --with-apxs=/usr/local/apache/bin/apxs \
                    --with-python=/usr/local/bin/python

    But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed? Or, if Apache is located elsewhere on my VPS, where would it be? I'd also like to mention that before this entire exercise I installed Apache with:

        yum install httpd

    so I assumed that was all I needed, but apparently not (I am very new at all this server administration stuff, so please be gentle). EDIT: This is the tutorial I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ (I got stuck at the heading "Installing mod_wsgi"). Thanks for any help!
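
    apxs is the piece that yum install httpd does not provide: on CentOS it ships in the httpd-devel package and lands in /usr/sbin/apxs (there is no /usr/local/apache unless Apache was built from source). A hedged sketch of the sequence, keeping the custom Python path from the question:

        yum install httpd-devel
        which apxs                        # typically /usr/sbin/apxs on CentOS
        ./configure --with-apxs=/usr/sbin/apxs --with-python=/opt/python2.5/bin/python
        make && make install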

    Read the article

  • location of index.html CentOS 6

    - by user2118559
    I set up an Apache-based CentOS server following this tutorial: http://www.servermom.com/how-to-add-new-site-into-your-apache-based-centos-server/454/. I use PuTTY as my editor. With vi /etc/httpd/conf/httpd.conf, at the very bottom I added:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/fikitipis.com/public_html
            ServerName www.fikitipis.com
            ServerAlias fikitipis.com
            ErrorLog /var/www/fikitipis.com/error.log
            CustomLog /var/www/fikitipis.com/requests.log common
        </VirtualHost>

    So I expect the index to be at /var/www/fikitipis.com/public_html. When I type the server's IP address in a browser, I see "Apache 2 Test Page powered by CentOS" and so on, including "You may now add content to the directory /var/www/html/". Then:

        [root@vps ~]# ls /var/www/
        cgi-bin  domain.com  error  fikitipis.com  html  icons

    Checking the contents of the directories, /var/www/domain.com/public_html, /var/www/fikitipis.com/public_html and /var/www/html/ are all empty. Where is index.html? I did touch /var/www/fikitipis.com/public_html/index1.html, then vi /var/www/fikitipis.com/public_html/index1.html, typed "a", wrote some text in the file, then pressed Escape and Shift+ZZ. Then, in the browser, http://111.111.11.111/index1.html shows what I had written. So until now everything seems to work.
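
    There is no index.html to find: the "Apache 2 Test Page" is not served from any DocumentRoot but generated by a stock config that kicks in whenever a request hits a docroot with no index file. A hedged look at the usual CentOS location, plus creating a real index for the vhost:

        # the test page comes from here, served only when no index file exists
        cat /etc/httpd/conf.d/welcome.conf

        # give the vhost an actual index page
        echo '<html><body>It works</body></html>' > /var/www/fikitipis.com/public_html/index.html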

    Read the article
