Search Results

Search found 312 results on 13 pages for 'rfc'.


  • A regex I have working in Ruby doesn't in PHP; what could the cause be?

    - by Alex R
    I do not know Ruby. I am trying to use the following regex, which was generated by Ruby (namely by http://www.a-k-r.org/abnf/ running on the grammar given in RFC 1738), in PHP. It fails to match in PHP, but it matches successfully in Ruby. Does anyone see any differences between PHP's and Ruby's handling of regexes that might explain this discrepancy? http:\/\/(?:(?:(?:(?:[0-9a-z]|[0-9a-z](?:[\x2d0-9a-z]?)*[0-9a-z])\x2e)?)*(?:[a-z]|[a-z](?:[\x2d0-9a-z]?)*[0-9a-z])|\d+\x2e\d+\x2e\d+\x2e\d+)(?::\d+)?(?:\/(?:(?:[!\x24'-\x2e0-9_a-z]|%[0-9a-f][0-9a-f]|[&:;=@])?)*(?:(?:\/(?:(?:[!\x24'-\x2e0-9_a-z]|%[0-9a-f][0-9a-f]|[&:;=@])?)*)?)*(?:\x3f(?:(?:[!\x24'-\x2e0-9_a-z]|%[0-9a-f][0-9a-f]|[&:;=@])?)*)?)?/i Since you all love regexes so much, how about an alternate solution: given the ABNF in an RFC, I want a way (in PHP) to check whether an arbitrary string is in the grammar. APG fails to compile on a 64-bit system, VTC is not Free, and I have not found any other such tools. I would also prefer not to use a regex, but it's the closest I've come to success.
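
    As an aid for experimenting with the port, here is a minimal Python sketch (an illustration only, with deliberately simplified character classes, not the machine-generated grammar above) that checks candidate strings against an RFC 1738-style http URL pattern:

        import re

        # Simplified RFC 1738-style http URL check -- illustrative only.
        HTTP_URL = re.compile(
            r'^http://'
            r'(?:[0-9a-z](?:[-0-9a-z]*[0-9a-z])?\.)*'      # host labels
            r'[a-z](?:[-0-9a-z]*[0-9a-z])?'                # top label
            r'(?::\d+)?'                                   # optional port
            r"(?:/[!$'()*+,.0-9_a-z%&:;=@-]*)*"            # path segments
            r"(?:\?[!$'()*+,.0-9_a-z%&:;=@-]*)?$",         # optional query
            re.IGNORECASE,
        )

        for candidate in ("http://example.com/a/b?c=d", "not a url"):
            print(candidate, "->", bool(HTTP_URL.match(candidate)))

    One difference worth checking on the PHP side is the delimiter: the pattern above ends in /i but has no matching opening delimiter, and preg_match() requires the whole pattern to be wrapped in delimiters (for example ~pattern~i) before any matching happens.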

    Read the article

  • Which parts of the client certificate to use when uniquely identifying users?

    - by miha
    I'm designing a system where users will be able to register and afterward authenticate with client certificates in addition to username/password authentication. The client certificates will have to be valid certificates issued by a configured list of certificate authorities and will be checked (validated) when presented. In the registration phase, I need to store part(s) of the client certificate in a user repository (DB, LDAP, whatever) so that I can map the user who authenticates with client certificate to an internal "user". One fairly obvious choice would be to use certificate fingerprint; But fingerprint itself is not enough, since collisions may occur (even though they're not probable), so we need to store additional information from the certificate. This SO question is also informative in this regard. RFC 2459 defines (4.1.2.2) that certificate serial number must be unique within a given CA. With all of this combined, I'm thinking of storing certificate serial number and certificate issuer for each registered user. Given that client certificates will be verified and valid, this should uniquely identify each client certificate. That way, even when client certificate is renewed, it would still be valid (serial number stays the same, and so does the issuer). Did I miss something?
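
    A minimal sketch of extracting those two values at registration time, using Python's third-party cryptography package (the file name is a placeholder for however the validated certificate reaches you):

        from cryptography import x509

        # Pull the issuer + serial pair discussed above out of the validated
        # client certificate; the serial is unique within the issuing CA.
        with open("client-cert.pem", "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())

        issuer = cert.issuer.rfc4514_string()   # e.g. "CN=Example CA,O=Example,C=US"
        serial = cert.serial_number             # an integer; store it as a string or hex

        print(issuer)
        print(hex(serial))

    One caveat worth verifying with the configured CAs: whether a renewed certificate really keeps its serial number is CA policy rather than a guarantee, so the stored pair may need to be re-mapped on renewal.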

    Read the article

  • Microsoft JScript runtime error Object doesn't support this property or method

    - by Darxval
    So I am trying to call this function in my JavaScript, but it gives me the error "Microsoft JScript runtime error: Object doesn't support this property or method" and I can't figure out why. It occurs when trying to call hmacObj.getHMAC. This is from the jsSHA website (http://jssha.sourceforge.net/), used to compute HMAC-SHA1. Thank you! hmacObj = new jsSHA(signature_base_string,"HEX"); signature = hmacObj.getHMAC("hgkghk","HEX","SHA-1","HEX"); Above this I have copied the code from sha.js. Snippet: function jsSHA(srcString, inputFormat) { /* * Configurable variables. Defaults typically work */ jsSHA.charSize = 8; // Number of Bits Per character (8 for ASCII, 16 for Unicode) jsSHA.b64pad = ""; // base-64 pad character. "=" for strict RFC compliance jsSHA.hexCase = 0; // hex output format. 0 - lowercase; 1 - uppercase var sha1 = null; var sha224 = null; The function it is calling (inside of the jsSHA function) (snippet): this.getHMAC = function (key, inputFormat, variant, outputFormat) { var formatFunc = null; var keyToUse = null; var blockByteSize = null; var blockBitSize = null; var keyWithIPad = []; var keyWithOPad = []; var lastArrayIndex = null; var retVal = null; var keyBinLen = null; var hashBitSize = null; // Validate the output format selection switch (outputFormat) { case "HEX": formatFunc = binb2hex; break; case "B64": formatFunc = binb2b64; break; default: return "FORMAT NOT RECOGNIZED"; }
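
    A reference value can help separate an input problem from the runtime error itself. A short Python sketch (standard library only, with hypothetical stand-in values) that computes HMAC-SHA1 the way the jsSHA call intends:

        import hashlib
        import hmac

        # Both arguments in the getHMAC call above are declared as "HEX", so both
        # strings must contain only hex digits -- note that the key "hgkghk" does
        # not, which is worth checking. The values below are hypothetical.
        message = bytes.fromhex("deadbeef")   # signature_base_string as raw bytes
        key = bytes.fromhex("0a1b2c3d")       # key as raw bytes

        print(hmac.new(key, message, hashlib.sha1).hexdigest())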

    Read the article

  • Completed Event not triggering for web service on some systems

    - by Farukh
    Hi, this is a rather weird issue that I am facing with my WCF/Silverlight application. I am using a WCF service to get data from a database for my Silverlight application, and the completed event is not triggering for a method of the service on some systems. I have checked that the called method executes properly and returns the values, and Fiddler clearly shows that the response has the returned values as well. However, the completed event is not getting triggered. On a few of the systems everything is fine and I am able to process the returned value in the completed method. Any thoughts or suggestions would be greatly appreciated; I have tried searching around the web but without any luck :( Following is the code. Calling the method: void RFCDeploy_Loaded(object sender, RoutedEventArgs e) { btnSelectFile.IsEnabled = true; btnUploadFile.IsEnabled = false; btnSelectFile.Click += new RoutedEventHandler(btnSelectFile_Click); btnUploadFile.Click += new RoutedEventHandler(btnUploadFile_Click); RFCChangeDataGrid.KeyDown += new KeyEventHandler(RFCChangeDataGrid_KeyDown); btnAddRFCManually.Click += new RoutedEventHandler(btnAddRFCManually_Click); ServiceReference1.DataService1Client ws = new BEVDashBoard.ServiceReference1.DataService1Client(); ws.GetRFCChangeCompleted += new EventHandler<BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs>(ws_GetRFCChangeCompleted); ws.GetRFCChangeAsync(); this.BusyIndicator1.IsBusy = true; } Completed event: void ws_GetRFCChangeCompleted(object sender, BEVDashBoard.ServiceReference1.GetRFCChangeCompletedEventArgs e) { PagedCollectionView view = new PagedCollectionView(e.Result); view.GroupDescriptions.Add(new PropertyGroupDescription("RFC")); RFCChangeDataGrid.ItemsSource = view; foreach (CollectionViewGroup group in view.Groups) { RFCChangeDataGrid.CollapseRowGroup(group, true); } this.BusyIndicator1.IsBusy = false; } Please note that this WCF service has lots of other methods as well and all of them are working fine... I have a problem with only this method... Thanks...

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0 , VS 2010} I have a Server written in Qt and a 3rd party client executable. Qt based server uses QTcpServer and QTcpSocket facilities (non-blocking). Going through the articles on TCP I understand the following: the original implementation of TCP mentioned the negotiable window size to be a 16-bit value, thus maximum being 65535 bytes. But implementations often used the RFC window-scale-extension that allows the sliding window size to be scalable by bit-shifting to yield a maximum of 1 gigabyte. This is implementation defined. This could have resulted in majorly different window sizes on receiver and sender end as the server uses Qt facilities without hardcoding any window size limit. Client 1st asks for all information it can based on the previous messages from the server before handling the new (accumulating) incoming messages. So at some point Server receives a lot of messages each asking for data of several MB's. This the server processes and puts it into the sender buffer. Client however is unable to handle the messages at the same pace and it seems that client’s receiver buffer is far smaller (65535 bytes maybe) than sender’s transmit window size. The messages thus get accumulated at sender’s end until the sender’s buffer is full too after which the TCP writes on sender would block. This however does not happen as sender buffer is much larger. Hence this manifests as increase in memory consumption on the sender’s end. To prevent this from happening, I used Qt’s socket’s waitForBytesWritten() with timeout set to -1 for infinite waiting period. This as I see from the behaviour blocks the thread writing TCP data until the data has actually been sensed by the receiver’s window (which will happen when earlier messages have been processed by the client at application level). This has caused memory consumption at Server end to be almost negligible. is there a better alternative to this (in Qt) if i want to restrict the memory consumption at server end to say x MB's? Also please point out if any of my understandings is incorrect.

    Read the article

  • How to enforce that HTTP client uses conditional requests for updates?

    - by Day
    In a (proper RMM level 3) RESTful HTTP API, I want to enforce that clients make conditional requests when updating resources, in order to avoid the lost update problem. What would be an appropriate response to return to clients that incorrectly attempt unconditional PUT requests? I note that the (abandoned?) mod_atom returns a 405 Method Not Allowed with an Allow header set to GET, HEAD (view source) when an unconditional update is attempted. This seems slightly misleading - to me this implies that PUT is never a valid method to attempt on the resource. Perhaps the response just needs to have an entity body explaining that If-Match or If-Unmodified-Since must be used to make the PUT request conditional, in which case it would be allowed? Or perhaps a 400 Bad Request with a suitable explanation in the entity body would be a better solution? But again, this doesn't feel quite right because it's using a 400 response for a violation of application-specific semantics when RFC 2616 says (my emphasis): The request could not be understood by the server due to malformed syntax. But then again, I think that using 400 Bad Request for application-specific semantics is becoming a widely accepted pragmatic solution (citation needed!), and I'm just being overly pedantic.
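
    Whichever status code is finally chosen, the mechanics are small; a minimal Python WSGI sketch (400 is used below purely as a placeholder for the code the API settles on):

        from wsgiref.simple_server import make_server

        # Reject unconditional PUTs with an explanatory entity body.
        def app(environ, start_response):
            if (environ["REQUEST_METHOD"] == "PUT"
                    and "HTTP_IF_MATCH" not in environ
                    and "HTTP_IF_UNMODIFIED_SINCE" not in environ):
                body = (b"This resource requires conditional updates: "
                        b"repeat the PUT with If-Match or If-Unmodified-Since.")
                start_response("400 Bad Request",
                               [("Content-Type", "text/plain"),
                                ("Content-Length", str(len(body)))])
                return [body]
            start_response("204 No Content", [("Content-Length", "0")])
            return [b""]

        if __name__ == "__main__":
            make_server("127.0.0.1", 8080, app).serve_forever()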

    Read the article

  • Mac OS X Server Configure DHCP Options 66 and 67

    - by Paul Adams
    I need to configure Mountain Lion (10.8.2) OS X Server BOOTP to provide DHCP options 66 and 67 to provide PXE booting for PCs on my network. I have tried following the bootpd MAN pages, but they are not specific enough. I have also read conflicting information on the net, but nothing definitive for Mountain Lion DHCP. From bootpd man page: bootpd has a built-in type conversion table for many more options, mostly those specified in RFC 2132, and will try to convert from whatever type the option appears in the property list to the binary, packet format. For example, if bootpd knows that the type of the option is an IP address or list of IP addresses, it converts from the string form of the IP address to the binary, network byte order numeric value. If the type of the option is a numeric value, it converts from string, integer, or boolean, to the proper sized, network byte-order numeric value. Regardless of whether bootpd knows the type of the option or not, you can always specify the DHCP option using the data property list type <key>dhcp_option_128</key> <data> AAqV1Tzo </data> My TFTP server is 172.16.152.20 and the bootfile is pxelinux.0 I have edited /etc/bootpd.plist and added the following to the subnet dict: <key>dhcp_option_66</key> <data> LW4gLWUgrBCYFAo= </data> <key>dhcp_option_67</key> <data> LW4gLWUgcHhlbGludXguMAo= </data> According to the man page, the data elements are supposed to be Base64 encoded, but no matter what I try, I cannot get PXE clients to boot. I have tried encoding 172.16.152.20 using various methods: echo "172.16.152.20" | openssl enc -base64 returns MTcyLjE2LjE1Mi4yMAo= DHCP Option Code Utility (http://mac.softpedia.com/get/Internet-Utilities/DHCP-Option-Code-Utility.shtml) generating a string from 172.16.152.20 yields: LW4gLWUgMTcyLjE2LjE1Mi4yMAo= (used in the above example) DHCP Option Code Utility generating an IP Addresss from 172.16.152.20 yields: LW4gLWUgrBCYFAo= Encoding pxelinux.0 with the above methods likewise yields different encodings. I have tried using all three methods of encoding the data elements, but nothing seems to work i.e. my PXE boot clients do not get directed to my TFTP server. Can anyone help? Regards, Paul Adams.
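
    One thing that stands out in the values above: decoding them shows a literal "-n -e " prefix and a trailing newline, i.e. the echo flags and line ending were piped into the encoder along with the value. A small Python sketch of producing clean encodings (whether bootpd wants option 66 as a plain string or as a packed 4-byte address is left to test):

        import base64
        import socket

        # Option 66 (TFTP server) and 67 (bootfile) as base64 of the raw value,
        # with no shell flags and no trailing newline.
        print(base64.b64encode(b"172.16.152.20").decode())                   # 66 as a string
        print(base64.b64encode(socket.inet_aton("172.16.152.20")).decode())  # 66 as 4 raw bytes
        print(base64.b64encode(b"pxelinux.0").decode())                      # 67 bootfile name

        # Decoding one of the values from the question shows the stray prefix:
        print(base64.b64decode("LW4gLWUgrBCYFAo="))   # b'-n -e \xac\x10\x98\x14\n'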

    Read the article

  • passwd LDAP request to Active Directory fails on half of 2500 users

    - by groovehunter
    We just setup ActiveDirectory in my company and imported all linux users and groups. On the linux client: (configured to ask ldap in nsswitch.conf): If i do a common ldapsearch to the AD ldap server i get the complete number of about 2580 users. But if i do this it only gets a part of all users, 1221 in number: getent passwd | wc -l Running it with strace shows kind of attempt to reconnect My ideas were: Does the linux authentication procedure run ldapsearch with a parameter incompatible to AD ldap ? Or probably it is a encoding issue. The windows user are entered in AD with all kind of characters. Maybe someone could shed light on this and give a hint how to debug that further!? Here's our ldap.conf host audc01.mycompany.de audc03.mycompany.de base ou=location,dc=mycompany,dc=de ldap_version 3 binddn cn=manager,ou=location,dc=mycompany,dc=de bindpw Password timelimit 120 idle_timelimit 3600 nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub nss_base_group ou=location,dc=mycompany,dc=de?sub # RFC 2307 (AD) mappings nss_map_objectclass posixAccount User # nss_map_objectclass shadowAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute cn sAMAccountName # Display Name nss_map_attribute gecos cn ## nss_map_attribute homeDirectory unixHomeDirectory nss_map_attribute loginShell msSFU30LoginShell # PAM attributes pam_login_attribute sAMAccountName # Location based login pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de pam_member_attribute msSFU30PosixMember ## pam_lookup_policy yes pam_filter objectclass=User nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data and here the stacktrace from strace getent passwd poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}]) read(4, "0\204\0\0\0A\2\1", 8) = 8 read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63 stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0 geteuid32() = 12560 getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0 getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0 time(NULL) = 1297684722 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 munmap(0xb7617000, 1721) = 0 close(3) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 write(4, "0\5\2\1\5B\0", 7) = 7 shutdown(4, 2 /* send and receive */) = 0 close(4) = 0 shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor) close(-1) = -1 EBADF (Bad file descriptor) exit_group(0) = ?
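
    One thing worth ruling out is the server-side size limit: Active Directory will not return an unbounded result set in a single search (the default MaxPageSize is 1000 entries), so a client that does not use the paged-results control only ever sees part of the users. A sketch with the third-party Python ldap3 package, reusing the bind settings from the ldap.conf above:

        from ldap3 import Connection, Server, SUBTREE

        # Paged search against AD; with paging the full ~2580 users should come
        # back, while an unpaged search stops at the server-side size limit.
        server = Server("audc01.mycompany.de")
        conn = Connection(server,
                          "cn=manager,ou=location,dc=mycompany,dc=de",
                          "Password", auto_bind=True)

        entries = conn.extend.standard.paged_search(
            search_base="cn=users,cn=import,ou=location,dc=mycompany,dc=de",
            search_filter="(objectClass=User)",
            search_scope=SUBTREE,
            attributes=["sAMAccountName"],
            paged_size=500,
            generator=True,
        )
        print(sum(1 for _ in entries))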

    Read the article

  • Can dnsmasq act as the DHCP server for selected nodes, overriding the existing DHCP server on the same LAN?

    - by user183394
    I am trying to set up a small "lab" at home. Like many modern homes, I have a regular DSL service which comes with a 2Wire 3600HGV router, which also acts as a DHCP server. Since I would like to PXE boot a few computers in my "lab", and the 2Wire is inflexible to the adjustments I want to make, I would like to use dnsmasq (which I have used at work) as the DHCP server for the few nodes in my "lab", if feasible. In the dnsmasq man page, there is the following: [...] -K, --dhcp-authoritative (IPv4 only) Should be set when dnsmasq is definitely the only DHCP server on a network. It changes the behaviour from strict RFC compliance so that DHCP requests on unknown leases from unknown hosts are not ignored. This allows new hosts to get a lease without a tedious timeout under all circumstances. It also allows dnsmasq to rebuild its lease database without each client needing to reacquire a lease, if the database is lost. [...] As far as I know, the ISC DHCP server can use the following to do what I would like to accomplish: authoritative; [...] subnet 192.168.1.0 netmask 255.255.255.0 { host nb0 { # only give DHCP information to this computer: hardware ethernet e8:9a:8f:17:70:42; fixed-address 192.168.1.10; option subnet-mask 255.255.255.0; option routers 192.168.1.254; option domain-name-servers 192.168.1.254; # Non-essential DHCP options filename "/pxelinux.0"; } [...] But I much prefer dnsmasq's "all-in-one-ness". My question: do I have to couple the -K option with something else? As shown in the example above, the ISC DHCP server requires the MAC addresses of managed nodes to be explicitly specified. Does dnsmasq have something similar? FYI, the machine on which I plan to run dnsmasq runs CentOS 6.3 64-bit. It has a statically assigned IP address: 192.168.1.3.
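
    dnsmasq does have a per-host equivalent of the ISC host block: dhcp-host ties a MAC address to a fixed address, and dhcp-boot carries the option 66/67 PXE details. A sketch of a dnsmasq.conf for this setup (a sketch only; the ignore-unknown-hosts idiom in particular is worth checking against the man page of the dnsmasq version that CentOS 6.3 ships):

        # Only ever answer hosts that have an explicit dhcp-host line below,
        # so the 2Wire keeps serving everything else on the LAN.
        dhcp-range=192.168.1.100,192.168.1.120,12h
        dhcp-host=e8:9a:8f:17:70:42,nb0,192.168.1.10
        dhcp-ignore=tag:!known

        # PXE details (options 66/67) plus dnsmasq's built-in TFTP server.
        dhcp-boot=pxelinux.0,,192.168.1.3
        enable-tftp
        tftp-root=/var/lib/tftpboot

        # -K / dhcp-authoritative is about claiming authority for unknown leases;
        # it is not required for the per-host mappings above.
        #dhcp-authoritative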

    Read the article

  • Why is site serving different SSL certs to different browsers?

    - by TRiG
    The SSL certificate on menswearireland.com and on www.menswearireland.com works fine on Safari, Chrome, SeaMonkey, K-Meleon, QtWeb, Firefox, and Opera. However, Internet Explorer claims that there is an error: The security certificate presented by this website was not issued by a trusted certificate authority. The security certificate presented by this website was issued for a different website's address. Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server. Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0) Another site hosted on the same managed server shows no errors: achill-fieldschool.com and www.achill-fieldschool.com work fine on IE, even though as far as I can tell the certificate is set up identically. What am I doing wrong? This is a LAMPP server running Plesk. It looks like the server is showing different certificates to different clients. To some clients it shows a RapidSSL certificate made out to www.menswearireland.com with menswearireland.com as a valid alternative name. To other clients, it shows a Parallels Panel certificate, made out to Parallels Panel. Here are results from a few different online SSL checkers: most say it's fine, while two show errors. Three online checkers say it's valid Comodo SSL Check shows it as valid DigiCert SSL Check shows it as valid SSL Shopper SSL Check shows it as valid Common name: www.menswearireland.com SANs: www.menswearireland.com, menswearireland.com Valid from October 2, 2012 to November 4, 2013 Serial Number: 559425 (0x88941) Signature Algorithm: sha1WithRSAEncryption Issuer: RapidSSL CA Another online checker seems to see a completely different certificate GeoCerts SSL Check shows it as invalid Common name: Parallels Panel Organization: Parallels Valid from August 15, 2012 to August 15, 2013 Issuer: Parallels Panel Another online checker sees more than one certificate Symantic SSL Check shows it as invalid The certificate installation checker connected to the Web server and read its certificates, but could not determine which is the primary certificate for the Web server. Incidentally, on both menswearireland.com and achill-fieldschool.com the homepage will redirect from HTTPS to HTTP. To see SSL details, visit the page /account on both (that page will redirect from HTTP to HTTPS). I’ve found more information in a more detailed online SSL checker. https://www.ssllabs.com/ssltest/analyze.html?d=menswearireland.com This site works only in browsers with SNI support My understanding is that SNI (RFC 6066) is a method for putting many SSL sites on one shared IP address and port. This does not work on Internet Explorer on older versions of Windows (this has to do with the version of Windows, not the version of Internet Explorer). However, all our SSL sites are on a unique IP address, so we shouldn’t need SNI.
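
    The SSL Labs hint about SNI can be confirmed directly: fetch the certificate the server presents with and without the server_name extension and compare. A small Python sketch (standard library only):

        import socket
        import ssl

        # Grab the raw (DER) certificate the server presents, optionally sending SNI.
        def peer_cert(host, server_name):
            ctx = ssl.create_default_context()
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE     # we only want to see what is served
            with socket.create_connection((host, 443), timeout=10) as sock:
                with ctx.wrap_socket(sock, server_hostname=server_name) as tls:
                    return tls.getpeercert(binary_form=True)

        host = "www.menswearireland.com"
        with_sni = peer_cert(host, host)
        without_sni = peer_cert(host, None)
        print("same certificate with and without SNI:", with_sni == without_sni)

    If the two blobs differ, clients that do not send SNI (such as IE on Windows XP) are being handed the Parallels Panel default certificate, which matches what the failing checkers report.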

    Read the article

  • Configuring varnish and django (apache/modwsgi)

    - by Hedde
    I am trying to work out why my application keeps hitting the database while I have setup varnish infront of apache. I think I am missing some vital configuration, any tips are welcome This is my curl result: HTTP/1.1 200 OK Server: Apache/2.2.16 (Debian) Content-Language: en-us Vary: Accept,Accept-Encoding,Accept-Language,Cookie Cache-Control: s-maxage=60, no-transform, max-age=60 Content-Type: application/json; charset=utf-8 Date: Sat, 15 Sep 2012 08:19:17 GMT Connection: keep-alive My varnishlog: 13 BackendClose - apache 13 BackendOpen b apache 127.0.0.1 47665 127.0.0.1 8000 13 TxRequest b GET 13 TxURL b /api/v1/events/?format=json 13 TxProtocol b HTTP/1.1 13 TxHeader b User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3 13 TxHeader b Host: foobar.com 13 TxHeader b Accept: */* 13 TxHeader b X-Forwarded-For: 92.64.200.145 13 TxHeader b X-Varnish: 979305817 13 TxHeader b Accept-Encoding: gzip 13 RxProtocol b HTTP/1.1 13 RxStatus b 200 13 RxResponse b OK 13 RxHeader b Date: Sat, 15 Sep 2012 08:21:28 GMT 13 RxHeader b Server: Apache/2.2.16 (Debian) 13 RxHeader b Content-Language: en-us 13 RxHeader b Content-Encoding: gzip 13 RxHeader b Vary: Accept,Accept-Encoding,Accept-Language,Cookie 13 RxHeader b Cache-Control: s-maxage=60, no-transform, max-age=60 13 RxHeader b Content-Length: 6399 13 RxHeader b Content-Type: application/json; charset=utf-8 13 Fetch_Body b 4(length) cls 0 mklen 1 13 Length b 6399 13 BackendReuse b apache 11 SessionOpen c 92.64.200.145 53236 :80 11 ReqStart c 92.64.200.145 53236 979305817 11 RxRequest c HEAD 11 RxURL c /api/v1/events/?format=json 11 RxProtocol c HTTP/1.1 11 RxHeader c User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3 11 RxHeader c Host: foobar.com 11 RxHeader c Accept: */* 11 VCL_call c recv lookup 11 VCL_call c hash 11 Hash c /api/v1/events/?format=json 11 Hash c foobar.com 11 VCL_return c hash 11 VCL_call c miss fetch 11 Backend c 13 apache apache 11 TTL c 979305817 RFC 60 -1 -1 1347697289 0 1347697288 0 60 11 VCL_call c fetch deliver 11 ObjProtocol c HTTP/1.1 11 ObjResponse c OK 11 ObjHeader c Date: Sat, 15 Sep 2012 08:21:28 GMT 11 ObjHeader c Server: Apache/2.2.16 (Debian) 11 ObjHeader c Content-Language: en-us 11 ObjHeader c Content-Encoding: gzip 11 ObjHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie 11 ObjHeader c Cache-Control: s-maxage=60, no-transform, max-age=60 11 ObjHeader c Content-Type: application/json; charset=utf-8 11 Gzip c u F - 6399 69865 80 80 51128 11 VCL_call c deliver deliver 11 TxProtocol c HTTP/1.1 11 TxStatus c 200 11 TxResponse c OK 11 TxHeader c Server: Apache/2.2.16 (Debian) 11 TxHeader c Content-Language: en-us 11 TxHeader c Vary: Accept,Accept-Encoding,Accept-Language,Cookie 11 TxHeader c Cache-Control: s-maxage=60, no-transform, max-age=60 11 TxHeader c Content-Type: application/json; charset=utf-8 11 TxHeader c Date: Sat, 15 Sep 2012 08:21:29 GMT 11 TxHeader c Connection: keep-alive 11 Length c 0 11 ReqEnd c 979305817 1347697288.292612076 1347697289.456128597 0.000086784 1.163468122 0.000048399

    Read the article

  • FTP could not connect after applying local DNS (private DNS)

    - by Rahul
    I made a software router on CentOS Linux and set up a DNS server on it. I am using CentOS 6.4 for the DNS and applied the following steps: I changed the host name to abc.zoom.com and the domain name to zoom.com, then made changes in the named.rfc1912.zones file, renaming named.localhost to "forward" and named.loopback to "reverse". In the forward lookups I changed: zone "zoom.com" IN { type master; file "forward"; allow-update { none; }; and in the reverse lookups I changed: zone "x.168.192.in-addr.arpa" IN { type master; file "reverse"; allow-update { none; }; and then made changes in the named.conf file: options { listen-on port 53 {192.168.x.x;}; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query {any;}; recursion yes; 192.168.x.x is my local DNS address. I then copied the lookup files into /var/named and edited the file "forward": $TTL 1D @ IN SOA abc.zoom.com. rahul.abc.zoom.com. ( 0 ; serial 1D ; refresh 1H ; retry 1W ; expire 3H ) ; minimum NS abc.zoom.com. abc A 192.168.x.x and the file "reverse": $TTL 1D @ IN SOA abc.zoom.com. rahul.abc.zoom.com.( 0 ; serial 1D ; refresh 1H ; retry 1W ; expire 3H ) ; minimum NS abc.zoom.com. x PTR abc.zoom.com. When I put the public IP details on eth0, they were automatically written into resolv.conf, and when I checked with the dig command, the answer and query counts were all 1. My system is itself a software router; as the gateway on all my local machines I give my system's IP address, so my DNS and gateway IP are the same. Now the problem: I gave static IPs to all my local machines. When I give them the DNS server I made, i.e. 192.168.x.x, FTP will not connect in the FileZilla software. E.g. host: pqr.zoom.com ("zoom.com" is my local domain name), username: pqr, password: pqr gives an error: "Error: Connection timed out. Error: Could not connect to server". But if I give the public DNS address it connects. I want to solve this problem; please give a solution for this.
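
    Note that the forward zone above only defines records for abc, yuccalaptop, ns, gw and www, so a FileZilla host entry of pqr.zoom.com has nothing to resolve to. A quick check against the new server with the third-party Python dnspython package (the nameserver IP below is a placeholder for the 192.168.x.x address):

        import dns.exception
        import dns.resolver

        # Ask the new DNS server directly for the names FileZilla is being given.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["192.168.1.10"]   # placeholder for the local DNS server

        for name in ("abc.zoom.com", "pqr.zoom.com"):
            try:
                answer = resolver.resolve(name, "A")
                print(name, [rr.address for rr in answer])
            except dns.resolver.NXDOMAIN:
                print(name, "is not in the zone -- add an A (or CNAME) record for it")
            except dns.exception.DNSException as exc:
                print(name, "lookup failed:", exc)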

    Read the article

  • Providing reverse records for records that map to ISP IP

    - by thejartender
    I have been instructed to use my ISP IP as a temporary fix for mapping my name server and domain records, since my router dishes out RFC 1918 addresses to the devices on my network (where I am running an Ubuntu server, my router and my development laptop), and so I have changed: $TTL 3H @ IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); thejarbar.org. IN A 10.0.0.42 @ IN NS ns.thejarbar.org. yuccalaptop IN A 10.0.0.19 ns IN A 10.0.0.42 gw IN A 10.0.0.138 www IN CNAME thejarbar.org. to a temporary version of: $TTL 3H @ IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); thejarbar.org. IN A 88.89.190.171 @ IN NS ns.thejarbar.org. yuccalaptop IN A 10.0.0.19 ns IN A 88.89.190.171 gw IN A 10.0.0.138 www IN CNAME thejarbar.org. I am using BIND, and when I run named-checkzone on this file against my zone configuration it reports no errors. I then run dig thejarbar.org @88.89.190.171 and get the expected authoritative reply. My issue is creating my reverse DNS SOA zone, and I would greatly appreciate assistance and guidance. I am stuck on how to represent the reverse records correctly for the addresses that map to my ISP IP. I am trying: $TTL 3H 0.0.10.in-addr.arpa. IN SOA ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 ); 171.190.89.88. IN PTR thejarbar.org. 171.190.89.88. IN NS ns.thejarbar.org. 19 IN PTR yuccalaptop.thejarbar.org. 138 IN PTR gw.thejarbar.org. www IN PTR www.thejarbar.org. But running named-checkzone on this file returns an error saying the zone has no NS records. I would greatly appreciate assistance
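
    One detail that likely explains the named-checkzone complaint: the owner names 171.190.89.88. and www are not inside the 0.0.10.in-addr.arpa zone, so the zone ends up with no NS record at its apex. The PTR for the public address belongs under 171.190.89.88.in-addr.arpa, a zone normally delegated to the ISP. A tiny Python check with the third-party dnspython package shows the expected owner names:

        import dns.reversename

        # The PTR owner name is the reversed address under in-addr.arpa.
        for ip in ("10.0.0.42", "88.89.190.171"):
            print(ip, "->", dns.reversename.from_address(ip))
        # 10.0.0.42     -> 42.0.0.10.in-addr.arpa.
        # 88.89.190.171 -> 171.190.89.88.in-addr.arpa.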

    Read the article

  • Why does writing a file to an NFS share send a COMMIT operation to the NFS server?

    - by Antonis Christofides
    I have a Debian squeeze (2.6.32-5-amd64) which is at the same time a NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab: 192.168.1.75:/mydir /nfs4mounts/mydir nfs4 soft 0 0 I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS-mounted directory goes at more than 100 MB/s.) I examine the server statistics and I see that whenever a file is written, it is "committed" (this also happens with NFSv3): root@debianvboxtest:~# mount -t nfs4 192.168.1.75:/mydir /mnt root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations' Server nfs v4 operations: op0-unused op1-unused op2-future access close commit 0 0% 0 0% 0 0% 10 4% 1 0% 1 0% root@debianvboxtest:~# echo 'hello' >/mnt/test1056 root@debianvboxtest:~# nfsstat|grep -A 2 'nfs v4 operations' Server nfs v4 operations: op0-unused op1-unused op2-future access close commit 0 0% 0 0% 0 0% 11 4% 2 0% 2 0% Now in the RFC, I read this: The COMMIT operation is similar in operation and semantics to the POSIX fsync(2) system call that synchronizes a file's state with the disk (file data and metadata is flushed to disk or stable storage). COMMIT performs the same operation for a client, flushing any unsynchronized data and metadata on the server to the server's disk or stable storage for the specified file. I don't understand why the client commits. I don't think that the "echo" shell built-in command runs fsync; if echo wrote to a local file and then the machine went down, the file might be lost. In contrast, the NFS client appears to be sending a COMMIT upon completion of the echo. Why? I am reluctant to use the async NFS server option, because it would apparently ignore COMMIT. I feel as if I had a local filesystem and I had to choose between syncing every file upon close and ignoring fsync altogether. What have I understood wrong?
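
    The usual explanation is close() rather than fsync: as part of NFS close-to-open cache consistency the client flushes dirty data when a file is closed, and for data that was sent as UNSTABLE writes that flush ends with a COMMIT, so every small file costs a synchronous round trip. A small Python sketch of a hypothetical micro-benchmark for comparing the two paths:

        import os
        import time

        # Write N small files and report files/second; run once against the local
        # path and once against the NFS mount to see the per-close COMMIT cost.
        def small_file_rate(directory, n=500, payload=b"hello\n"):
            start = time.time()
            for i in range(n):
                with open(os.path.join(directory, f"bench{i:05d}"), "wb") as f:
                    f.write(payload)
                # close() here is what makes the NFS client flush (and COMMIT)
            return n / (time.time() - start)

        for d in ("/nfs4exports/mydir", "/nfs4mounts/mydir"):
            print(d, round(small_file_rate(d), 1), "files/s")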

    Read the article

  • How does an NTP host switch among the various modes?

    - by James A. Rosen
    The NTPv3 RFC describes five operating modes: Symmetric Active (1): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of its peer. By operating in this mode the host announces its willingness to synchronize and be synchronized by the peer. Symmetric Passive (2): This type of association is ordinarily created upon arrival of a message from a peer operating in the symmetric active mode and persists only as long as the peer is reachable and operating at a stratum level less than or equal to the host; otherwise, the association is dissolved. However, the association will always persist until at least one message has been sent in reply. By operating in this mode the host announces its willingness to synchronize and be synchronized by the peer. Client (3): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of its peer. By operating in this mode the host, usually a LAN workstation, announces its willingness to be synchronized by, but not to synchronize the peer. Server (4): This type of association is ordinarily created upon arrival of a client request message and exists only in order to reply to that request, after which the association is dissolved. By operating in this mode the host, usually a LAN time server, announces its willingness to synchronize, but not to be synchronized by the peer. Broadcast (5): A host operating in this mode sends periodic messages regardless of the reachability state or stratum of the peers. By operating in this mode the host, usually a LAN time server operating on a high-speed broadcast medium, announces its willingness to synchronize all of the peers, but not to be synchronized by any of them. It seems to me, though, that any host except a leaf node would probably be in several modes. For example, I might have a local area network with three NTP servers, each in Symmetric Active (1) mode with respect to one another. They would also each be clients (3) of one of the many public stratum two time servers. Lastly, they would all server as servers (4) to the many local clients. Is the point that they're only in a given mode for a moment during the synchronization? If so, how does a host know to switch? I'm only looking for enough depth here to discuss the issue in an educated manner, not to write a custom time server.
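
    Following the RFC's wording, the mode describes an association (and is carried in every packet), not the host as a whole, so a single daemon can simultaneously be a client of upstream stratum-2 servers, a symmetric peer of its neighbours, and a server to LAN clients; each association simply keeps its own mode. A short Python sketch that makes the per-packet mode field visible (it queries pool.ntp.org, an assumption, purely as an illustration):

        import socket

        # Minimal SNTP-style exchange: send a mode-3 (client) packet and read the
        # mode bits of the reply, which a server answers with mode 4.
        packet = bytearray(48)
        packet[0] = (0 << 6) | (3 << 3) | 3      # LI=0, VN=3, Mode=3 (client)

        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(bytes(packet), ("pool.ntp.org", 123))
            reply, _ = s.recvfrom(48)

        print("LI =", reply[0] >> 6,
              "VN =", (reply[0] >> 3) & 0x07,
              "Mode =", reply[0] & 0x07)         # expect Mode = 4 (server)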

    Read the article

  • Which HTTP redirect status code is best for this REST API scenario?

    - by Aseem Kishore
    I'm working on a REST API. The key objects ("nouns") are "items", and each item has a unique ID. E.g. to get info on the item with ID foo: GET http://api.example.com/v1/item/foo New items can be created, but the client doesn't get to pick the ID. Instead, the client sends some info that represents that item. So to create a new item: POST http://api.example.com/v1/item/ hello=world&hokey=pokey With that command, the server checks if we already have an item for the info hello=world&hokey=pokey. So there are two cases here. Case 1: the item doesn't exist; it's created. This case is easy. 201 Created Location: http://api.example.com/v1/item/bar Case 2: the item already exists. Here's where I'm struggling... not sure what's the best redirect code to use. 301 Moved Permanently? 302 Found? 303 See Other? 307 Temporary Redirect? Location: http://api.example.com/v1/item/foo I've studied the Wikipedia descriptions and RFC 2616, and none of these seem to be perfect. Here are the specific characteristics I'm looking for in this case: The redirect is permanent, as the ID will never change. So for efficiency, the client can and should make all future requests to the ID endpoint directly. This suggests 301, as the other three are meant to be temporary. The redirect should use GET, even though this request is POST. This suggests 303, as all others are technically supposed to re-use the POST method. In practice, browsers will use GET for 301 and 302, but this is a REST API, not a website meant to be used by regular users in browsers. It should be broadly usable and easy to play with. Specifically, 303 is HTTP/1.1 whereas 301 and 302 are HTTP/1.0. I'm not sure how much of an issue this is. At this point, I'm leaning towards 303 just to be semantically correct (use GET, don't re-POST) and just suck it up on the "temporary" part. But I'm not sure if 302 would be better since in practice it's been the same behavior as 303, but without requiring HTTP/1.1. But if I go down that line, I wonder if 301 is even better for the same reason plus the "permanent" part. Thoughts appreciated!

    Read the article

  • IE and Content-disposition inline vs. extension-token

    - by pinkgothic
    Preamble So IE does Mime-Type sniffing. That part's old news. Suggestions of how to combat it tend to be along the lines of 'supply a content-type IE trusts' (i.e. anything that isn't text/plain or application/octet-stream) or 'add extraneous data at the start of the file that is definitely of the type you're serving'. Now, I'm working on an application that has to allow message attachments (like in e-mails), and we want to close up XSS vectors. IE's mime sniffing is one of those vectors - a text/plain file with html content will trigger as html. Recoding isn't an option at this point, changing the attachments the user has provided can only happen if there is absolutely no doubt about the maliciousness of the file - and someone might want to send HTML as text. Now, Microsoft's MSDN article implies the situation might be easier to fix than advertised: If Internet Explorer knows the Content-Type specified and there is no Content-Disposition data, Internet Explorer performs a "MIME sniff," [...] Great! Except I don't have IE nor current means to reliably install it (I realise this is a fairly sad state for a webdeveloper to be in, I hope to fix this soon) and this is grey theory that I can't quite seem to get confirmed one way or the other. Local sources say that line is hogwash - IE will mime sniff anything that is Content-Disposition: inline / <default> and not specific enough for its tastes in -Type. But what about x-* ('extension-token' in the RFC)? Trying to google for how browsers handle Content-Disposition: <extension-token> hasn't yielded anything (though I may just be doing it wrong, my understanding of Google is seriously slipping lately). I found one question that looked promising, but turned out to be a misunderstanding on side of the thread author, meaning that the train of thought was never actually addressed there. Question(s) Does IE really Mime sniff if you expressly pass Content-Disposition: inline? If so: Does anyone here know how browsers handle Content-Disposition: <extension-token>? If they do this in a way that is for my purposes benign, by presuming it to be synonymous with the default (effectively 'inline', though I hear it's not defined anywhere?), is it specific enough for IE not to Mime sniff? Or am I actually shooting myself in the foot by thinking of pursuing this avenue?

    Read the article

  • How Do I Enable My Ubuntu Server To Host Various SSL-Enabled Websites?

    - by Andy Ibanez
    Actually, I Have looked around for a few hours now, but I can't get this to work. The main problem I'm having is that only one out of two sites works. I have my website which will mostly be used for an app. It's called atajosapp.com . atajosapp.com will have three main sites: www.atajosapp.com <- Homepage for the app. auth.atajosapp.com <- Login endpoint for my API (needs SSL) api.atajosapp.com <- Main endpoint for my API (needs SSL). If you attempt to access api.atajosapp.com it works. It will throw you a 403 error and a JSON output, but that's fully intentional. If you try to access auth.atajosapp.com however, the site simply doesn't load. Chrome complains with: The webpage at https://auth.atajosapp.com/ might be temporarily down or it may have moved permanently to a new web address. Error code: ERR_TUNNEL_CONNECTION_FAILED But the website IS there. If you try to access www.atajosapp.com or any other HTTP site, it connects fine. It just doesn't like dealing with more than one HTTPS websites, it seems. The VirtualHost for api.atajosapp.com looks like this: <VirtualHost *:443> DocumentRoot /var/www/api.atajosapp.com ServerName api.atajosapp.com SSLEngine on SSLCertificateFile /certificates/STAR_atajosapp_com.crt SSLCertificateKeyFile /certificates/star_atajosapp_com.key SSLCertificateChainFile /certificates/PositiveSSLCA2.crt </VirtualHost> auth.atajosapp.com Looks very similar: <VirtualHost *:443> DocumentRoot /var/www/auth.atajosapp.com ServerName auth.atajosapp.com SSLEngine on SSLCertificateFile /certificates/STAR_atajosapp_com.crt SSLCertificateKeyFile /certificates/star_atajosapp_com.key SSLCertificateChainFile /certificates/PositiveSSLCA2.crt </VirtualHost> Now I have found many websites that talk about possible solutions. At first, I was getting a message like this: _default_ VirtualHost overlap on port 443, the first has precedence But after googling for hours, I managed to solve it by editing both apache2.conf and ports.conf. This is the last thing I added to ports.conf: <IfModule mod_ssl.c> NameVirtualHost *:443 # SSL name based virtual hosts are not yet supported, therefore no # NameVirtualHost statement here NameVirtualHost *:443 Listen 443 </IfModule> Still, right now only api.atajosapp.com and www.atajosapp.com are working. I still can't access auth.atajosapp.com. When I check the error log, I see this: Init: Name-based SSL virtual hosts only work for clients with TLS server name indication support (RFC 4366) I don't know what else to do to make both sites work fine on this. I purchased a Wildcard SSL certificate from Comodo that supposedly secures *.atajosapp.com, so after hours trying and googling, I don't know what's wrong anymore. Any help will be really appreciated. EDIT: I just ran the apachectl -t -D DUMP_VHOSTS command and this is the output. 
Can't make much sense of it...: root@atajosapp:/# apachectl -t -D DUMP_VHOSTS apache2: Could not reliably determine the server's fully qualified domain name, using atajosapp.com for ServerName [Thu Nov 07 02:01:24 2013] [warn] NameVirtualHost *:443 has no VirtualHosts VirtualHost configuration: wildcard NameVirtualHosts and _default_ servers: *:443 is a NameVirtualHost default server api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1) port 443 namevhost api.atajosapp.com (/etc/apache2/sites-enabled/api.atajosapp.com:1) port 443 namevhost auth.atajosapp.com (/etc/apache2/sites-enabled/auth.atajosapp.com:1) *:80 is a NameVirtualHost default server atajosapp.com (/etc/apache2/sites-enabled/000-default:1) port 80 namevhost atajosapp.com (/etc/apache2/sites-enabled/000-default:1)

    Read the article

  • What is wrong in my DKIM setup? I'm getting all failures

    - by djechelon
    I own a domain name I have implemented SPF and DKIM to avoid my mails being junked. I have also upgraded to DMARC in monitor mode. Since I received a few failure reports recently I wanted to investigate more. I have only one server sending outbound emails, running postfix + dkimproxy. I trust that dkimproxy has no major software bugs resulting in bad messages. I have tested ReturnPath's automated DKIM test and this is the part related to DKIM/DomainKeys DKIM Results ============ Result = failed: invalid key for signature: Syntax error in tag: \"v Domain = domain.org Selector = sel DNS Record(s) = sel._domainkey.domain.org TXT "v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s" Public Key Length = 4096 DomainKeys Results ================== Domain = domain.org Selector = sel DNS Record(s) = sel._domainkey.domain.org TXT "v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s" The mail displays an anonymised DNS record with genuine public key. It reports an error in tag v. A few hours ago I noticed my v tag was v=DKIM1 instead of v=1 as specified in RFC. I thought it was an error made by me during the initial setup months ago and fixed to v=1, but anyway I received one DMARC success from Google. Let me explain better: I enforced DMARC a couple of days ago. On 4/16 morning I got a mail from Google telling me that DMARC fully passes, then since 4/17 I get all failures. Then I discovered the v=DKIM1 tag and replaced with v=1 without success I have not modified my DNS records before that. So, keeping in topic with the question, why does ReturnPath refuse my DKIM DNS record? Is something wrong in my DKIM implementation at DNS level? [Add] I have just tried port25.com's tester but at least DKIM passes ---------------------------------------------------------- DomainKeys check details: ---------------------------------------------------------- Result: permerror (DK_STAT_BADKEY: Unusable key, public if verifying, private if signing.) ID(s) verified: header.From=########### DNS record(s): sel._domainkey.domain.org. 
1800 IN TXT ""v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s"" ---------------------------------------------------------- DKIM check details: ---------------------------------------------------------- Result: pass (matches From: #########) ID(s) verified: header.d=domain.org Canonicalized Headers: message-id:<[email protected]>'0D''0A' date:Thu,'20'18'20'Apr'20'2013'20'11:40:26'20'+0200'0D''0A' from:#############'0D''0A' mime-version:1.0'0D''0A' to:[email protected]'0D''0A' subject:Test'0D''0A' content-type:text/plain;'20'charset=ISO-8859-15;'20'format=flowed'0D''0A' content-transfer-encoding:7bit'0D''0A' dkim-signature:v=1;'20'a=rsa-sha1;'20'c=relaxed;'20'd=domain.org;'20'h='20'message-id:date:from:mime-version:to:subject:content-type'20':content-transfer-encoding;'20's=dom;'20'bh=uoq1oCgLlTqpdDX/iUbLy7J1Wi'20'c=;'20'b= Canonicalized Body: '0D''0A' DNS record(s): sel._domainkey.domain.org. 1800 IN TXT ""v=1; p=MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsMMLhxzXkU+tagc44oMi7eX2BsFb8BsWeT8MRL+hxi4Lsosx7tuPm90iYgilNteyJoXuSP5SUf8B2tDAifdzYQhfhctr0hX9b6ocBCukGq5p0GHpNsCPWyFvxZsCkGqLRmkfb0c36quEAWBeQLe4Z/BwXBBiW1g96WFNb2/GRI1+9OHhligdfuo4PPuU+xiwX4GB0Ik50cJL4xTdBf7lrFwoGYa03ZkXuzKxeGE4cTk50OeIs6eqrzAfbmej4nCex2qGOUt1TWI7ZvCY7u3Gxj+XKaE7VFrQACZof+NP0k2pXPHg9saGJqZrr2i6+RoxGD0w/ibjAWij9enwqlnv2ORsZfe+FmXNOLJAhlYvhHaruubDpte1c7V3ZKDceM45ZawnVmSdLCfBrMbsqipzy8NXN5MxuANYFBkx5EDT+Ieab+zqcnf08m9bgDc4RXMYppDT1/lUy6On+nyfZEnJWiH3BUtgxS8X0uXciXbsooTmPnpkzzvvKXAE/Tv3XqL90q51geqP0EmaZI6lRTpiqoX7zFGlEBiiF7/u8oheszATks8LsNZ/boTFy0OVldbYNhxlIuRmqeXkqD6+kM5ObKtMEv3AdaeBiZmvyJTP8tCsSmPt+e954RLlz2HaDjjNnZNgsj/39U2RzZsFbVqW6uyQh36/y1X4joOiPf366GkCAwEAAQ==; t=s"" Public key used for verification: sel._domainkey.domain.org (4096 bits)
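
    Two things in the output above are worth a second look: the checkers print the TXT record with doubled quotes (""v=1; ...""), which suggests the zone actually contains literal quote characters (and would explain ReturnPath complaining about a tag named "v), and the v= tag itself -- DKIM key records published in DNS use v=DKIM1 per RFC 6376, while v=1 is the version tag of the DKIM-Signature header. A quick way to see exactly what verifiers fetch, with the third-party Python dnspython package:

        import dns.resolver

        # Fetch the selector TXT record exactly as verifiers see it; repr() makes
        # any stray quote characters visible. Selector/domain anonymised as above.
        name = "sel._domainkey.domain.org"

        for rdata in dns.resolver.resolve(name, "TXT"):
            record = b"".join(rdata.strings).decode()
            print(repr(record))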

    Read the article

  • How to set up stunnel so that Gmail can use my own SMTP server to send messages

    - by igorhvr
    I am trying to setup gmail to send messages using my own smtp server. I am doing this by using stunnel over a non-ssl enabled server. I am able to use my own smtp client with ssl enabled just fine to my server. Unfortunately, however, gmail seems to be unable to connect to my stunnel port. Gmail seems to be simply closing the connection right after it is established - I get a "SSL socket closed on SSL_read" on my server logs. On gmail, I get a "We are having trouble authenticating with your other mail service. Please try changing your SSL settings. If you continue to experience difficulties, please contact your other email provider for further instructions." message. Any help / tips on figuring this out will be appreciated. My certificate is self-signed - could this perhaps be related to the problem I am experiencing? I pasted the entire SSL session (logs from my server) below. 2011.01.02 16:56:20 LOG7[20897:3082491584]: Service ssmtp accepted FD=0 from 209.85.210.171:46858 2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp started 2011.01.02 16:56:20 LOG7[20897:3082267504]: FD=0 in non-blocking mode 2011.01.02 16:56:20 LOG7[20897:3082267504]: Option TCP_NODELAY set on local socket 2011.01.02 16:56:20 LOG7[20897:3082267504]: Waiting for a libwrap process 2011.01.02 16:56:20 LOG7[20897:3082267504]: Acquired libwrap process #0 2011.01.02 16:56:20 LOG7[20897:3082267504]: Releasing libwrap process #0 2011.01.02 16:56:20 LOG7[20897:3082267504]: Released libwrap process #0 2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp permitted by libwrap from 209.85.210.171:46858 2011.01.02 16:56:20 LOG5[20897:3082267504]: Service ssmtp accepted connection from 209.85.210.171:46858 2011.01.02 16:56:20 LOG7[20897:3082267504]: FD=1 in non-blocking mode 2011.01.02 16:56:20 LOG6[20897:3082267504]: connect_blocking: connecting 127.0.0.1:25 2011.01.02 16:56:20 LOG7[20897:3082267504]: connect_blocking: s_poll_wait 127.0.0.1:25: waiting 10 seconds 2011.01.02 16:56:20 LOG5[20897:3082267504]: connect_blocking: connected 127.0.0.1:25 2011.01.02 16:56:20 LOG5[20897:3082267504]: Service ssmtp connected remote server from 127.0.0.1:3701 2011.01.02 16:56:20 LOG7[20897:3082267504]: Remote FD=1 initialized 2011.01.02 16:56:20 LOG7[20897:3082267504]: Option TCP_NODELAY set on remote socket 2011.01.02 16:56:20 LOG5[20897:3082267504]: Negotiations for smtp (server side) started 2011.01.02 16:56:20 LOG7[20897:3082267504]: RFC 2487 not detected 2011.01.02 16:56:20 LOG5[20897:3082267504]: Protocol negotiations succeeded 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): before/accept initialization 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client hello A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write server hello A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write certificate A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write certificate request A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 flush data 2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed 2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=2, /C=US/O=Equifax/OU=Equifax Secure Certificate Authority 2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed 2011.01.02 16:56:20 LOG5[20897:3082267504]: VERIFY OK: depth=1, /C=US/O=Google Inc/CN=Google Internet Authority 2011.01.02 16:56:20 LOG5[20897:3082267504]: CRL: verification passed 2011.01.02 16:56:20 
LOG5[20897:3082267504]: VERIFY OK: depth=0, /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client certificate A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read client key exchange A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read certificate verify A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 read finished A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write change cipher spec A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 write finished A 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL state (accept): SSLv3 flush data 2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 items in the session cache 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client connects (SSL_connect()) 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client connects that finished 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 client renegotiations requested 2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 server connects (SSL_accept()) 2011.01.02 16:56:20 LOG7[20897:3082267504]: 1 server connects that finished 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 server renegotiations requested 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache hits 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 external session cache hits 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache misses 2011.01.02 16:56:20 LOG7[20897:3082267504]: 0 session cache timeouts 2011.01.02 16:56:20 LOG6[20897:3082267504]: SSL accepted: new session negotiated 2011.01.02 16:56:20 LOG6[20897:3082267504]: Negotiated ciphers: RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5 2011.01.02 16:56:20 LOG7[20897:3082267504]: SSL socket closed on SSL_read 2011.01.02 16:56:20 LOG7[20897:3082267504]: Socket write shutdown 2011.01.02 16:56:20 LOG5[20897:3082267504]: Connection closed: 167 bytes sent to SSL, 37 bytes sent to socket 2011.01.02 16:56:20 LOG7[20897:3082267504]: Service ssmtp finished (0 left)
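
    Since Gmail's "send mail as" check behaves like a strict client, it helps to reproduce one locally with certificate verification turned on; if the handshake below fails to verify, the self-signed certificate is a plausible reason Gmail closes the connection right after the handshake. A Python sketch (the hostname is a placeholder for the stunnel endpoint):

        import smtplib
        import ssl

        # Connect to the stunnel port the way a verifying client would.
        ctx = ssl.create_default_context()

        try:
            with smtplib.SMTP_SSL("mail.example.org", 465, context=ctx, timeout=15) as smtp:
                print(smtp.ehlo())
        except ssl.SSLCertVerificationError as exc:
            print("certificate verification failed:", exc)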

    Read the article

  • Bind9 on Debian not responding

    - by Marc
    I'm trying to set up a web server with Bind9 and Apache2 on Debian 6. I am trying to learn to do it manually, so I do not have any control panels or anything, just the command line. I have a domain name, let's call it www.example.com, and I want a virtual host setup so that I can have multiple websites with different names on my server. I have ns1.example.com and ns2.example.com registered at my server's IP (123.456.789.12). Below is my Bind9 named.conf.options: options { directory "/var/cache/bind"; // If there is a firewall between you and nameservers you want // to talk to, you may need to fix the firewall to allow multiple // ports to talk. See http://www.kb.cert.org/vuls/id/800113 // If your ISP provided one or more IP addresses for stable // nameservers, you probably want to use them as forwarders. // Uncomment the following block, and insert the addresses replacing // the all-0's placeholder. // forwarders { // 0.0.0.0; // }; auth-nxdomain no; # conform to RFC1035 listen-on-v6 { any; }; }; This is the default; I'm not sure if I was supposed to edit it. I didn't. Here is my named.conf.default-zones: // prime the server with knowledge of the root servers zone "." { type hint; file "/etc/bind/db.root"; }; // be authoritative for the localhost forward and reverse zones, and for // broadcast zones as per RFC 1912 zone "localhost" { type master; file "/etc/bind/db.local"; }; zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; }; zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; }; zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; }; zone "example.com.com" { type master; file "etc/bind/example.com.db"; }; named.conf.local is an empty file with a comment saying to do local configuration there. example.com.db looks like this: ; BIND data file for mywebsite.com ; $ORIGIN example.com. $TTL 604800 @ IN SOA ns1.example.com. [email protected]. ( 2009120101 ; Serial 604800 ; Refresh 86400 ; Retry 2419200 ; Expire 604800 ) ; Negative Cache TTL ; IN NS ns1.example.com. IN NS ns2.example.com. IN MX 10 mail.example.com. localhost IN A 127.0.0.1 example.com. IN A 123.456.789.12 ns1 IN A 123.456.789.12 ns2 IN A 123.456.789.12 www IN A 123.456.789.12 ftp IN A 123.456.789.12 mail IN A 123.456.789.12 boards IN CNAME www These are all settings I've found in various tutorials. Now when I go to intodns I get: "You should already know that your NS records at your nameservers are missing, so here it is again: ns1.example.com ns2.example.com" Can someone help me? I'm not sure what I'm doing wrong.
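
    Two details in the configuration above are worth double-checking before anything else: the zone is declared as "example.com.com" (doubled .com) and its file path "etc/bind/example.com.db" is missing the leading slash, so the zone the nameservers are supposed to answer for may never be loaded. Once that is corrected, the server can be queried directly (bypassing the registrar) with the third-party Python dnspython package; 203.0.113.12 below stands in for the server's real address:

        import dns.exception
        import dns.resolver

        # Ask the server itself for the records intodns reports as missing.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = ["203.0.113.12"]   # placeholder for the server's IP

        for rrtype in ("SOA", "NS", "A"):
            try:
                answer = resolver.resolve("example.com", rrtype)
                print(rrtype, [str(rr) for rr in answer])
            except dns.exception.DNSException as exc:
                print(rrtype, "query failed:", exc)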

    Read the article

  • Understanding 400 Bad Request Exception

    - by imran_ku07
        Introduction:          Why I am getting this exception? What is the cause of this error. Developers are always curious to know the root cause of an exception, even though they found the solution from elsewhere. So what is the reason of this exception (400 Bad Request).The answer is security. Security is an important feature for any application. ASP.NET try to his best to give you more secure application environment as possible. One important security feature is related to URLs. Because there are various ways a hacker can try to access server resource. Therefore it is important to make your application as secure as possible. Fortunately, ASP.NET provides this security by throwing an exception of Bad Request whenever he feels. In this Article I am try to present when ASP.NET feels to throw this exception. You will also see some new ASP.NET 4 features which gives developers some control on this situation.   Description:   http.sys Restrictions:           It is interesting to note that after deploying your application on windows server that runs IIS 6 or higher, the first receptionist of HTTP request is the kernel mode HTTP driver: http.sys. Therefore for completing your request successfully you need to present your validity to http.sys and must pass the http.sys restriction.           Every http request URL must not contain any character from ASCII range of 0x00 to 0x1F, because they are not printable. These characters are invalid because these are invalid URL characters as defined in RFC 2396 of the IETF. But a question may arise that how it is possible to send unprintable character. The answer is that when you send your request from your application in binary format.           Another restriction is on the size of the request. A request containg protocal, server name, headers, query string information and individual headers sent along with the request must not exceed 16KB. Also individual header should not exceed 16KB.           Any individual path segment (the portion of the URL that does not include protocol, server name, and query string, for example, http://a/b/c?d=e,  here the b and c are individual path) must not contain more than 260 characters. Also http.sys disallows URLs that have more than 255 path segments.           If any of the above rules are not follow then you will get 400 Bad Request Exception. The reason for this restriction is due to hack attacks against web servers involve encoding the URL with different character representations.           You can change the default behavior enforced by http.sys using some Registry switches present at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters    ASP.NET Restrictions:           After passing the restrictions enforced by the kernel mode http.sys then the request is handed off to IIS and then to ASP.NET engine and then again request has to pass some restriction from ASP.NET in order to complete it successfully.           ASP.NET only allows URL path lengths to 260 characters(only paths, for example http://a/b/c/d, here path is from a to d). This means that if you have long paths containing 261 characters then you will get the Bad Request exception. This is due to NTFS file-path limit.           Another restriction is that which characters can be used in URL path portion.You can use any characters except some characters because they are called invalid characters in path. Here are some of these invalid character in the path portion of a URL, <,>,*,%,&,:,\,?. 
To confirm this, right-click in Solution Explorer, add a new folder, and try to rename it to any of the characters above; you will get the message "Files or folders cannot be empty strings nor they contain only '.' or have any of the following characters...".

To check the URL behaviour, I created a web application and put Default.aspx inside an A%A folder (created from Windows Explorer), then navigated to http://localhost:1234/A%25A/Default.aspx. The response from the server was the Bad Request exception. The reason is that %25 decodes to the % character, which is an invalid URL path character in ASP.NET. You can, however, use these characters in the query string.

The reason for these restrictions is again security; for example, % can be used to double-encode the URL path portion, and : is used to address a specific resource on the server.

New ASP.NET 4 Features:
It is worth discussing the new ASP.NET 4 features that put some control in the hands of the developer. Previously we were limited to a 260-character path length and could not use certain characters in a URL path segment.

You can configure maxRequestPathLength and maxQueryStringLength to allow longer or shorter paths and query strings, and you can customize the set of invalid characters using requestPathInvalidChars, all under the httpRuntime element (see the configuration sketch at the end of this post). This is good news for anyone who needs to use characters in their application that were invalid in previous versions. You can find further detail about the new ASP.NET URL-related features here.

Note that these new ASP.NET settings do not affect http.sys: a request still has to pass the http.sys restrictions before ASP.NET ever comes into action. Note also that the http.sys limit described earlier applies to each individual path segment, while maxRequestPathLength applies to the complete path (the portion of the URL that does not include the protocol, server name, and query string). For example, if the URL is http://a/b/c/d?e=f, then maxRequestPathLength takes a/b/c/d into account, while http.sys checks a, b and c individually.

Summary:
Hopefully this helps you understand how some of these initial security checks come into play. I also recommend reading (at least the first chapter, Initial Phases of a Web Request, of) Professional ASP.NET 2.0 Security, Membership, and Role Management by Stefan Schackow. It is a really nice book.
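As referenced above, here is a minimal web.config sketch of these settings. This is an illustration only: the article uses the attribute names maxRequestPathLength and requestPathInvalidChars, while in the final release of ASP.NET 4 the corresponding httpRuntime attributes are named maxUrlLength, maxQueryStringLength and requestPathInvalidCharacters, so verify the exact names against the framework version you target. The values shown are arbitrary examples, not recommendations.

<configuration>
  <system.web>
    <!-- Raise the URL path limit above the 260-character default and the query string
         limit above the 2048-character default; the character list mirrors the default
         set of invalid path characters discussed in the article. -->
    <httpRuntime maxUrlLength="512"
                 maxQueryStringLength="4096"
                 requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,:,\,?" />
  </system.web>
</configuration>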

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
We are in the process of upgrading our BizTalk environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all-new and more powerful WCF-SAP adaptor. When my colleagues tested the new adaptor, they discovered that the format of the data extracted from SAP was not identical to that of the old adaptor. This is not a big deal if the structure of the messages from SAP is simple; in our case, however, we were receiving the delivery and invoice iDocs, and both structures are complex, especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from the original mapping. The idea of redoing these maps did not appeal and, given the current workload, was not even an option. I opted for a rather crude alternative: pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.

Note the WCF-SAP data formats (set in the 'ReceiveIdocFormat' field on the Binding tab of the configuration dialog box):

Typed: Returns an XML document with the hierarchy represented in XML and every field represented by its own XML tag.
RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat-file format.
String: Returns the iDoc in a format closest to the original flat file, but still wrapped in some top-level XML tags. These files also contained some strange characters at the end of each line.

I started with the invoice document, and it was quite straightforward to add the mapping, but this is where my problems started. The orchestrations for these documents are dynamic and so need the identity of the partner to configure the orchestration correctly. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of that segment was promoted, and the code that set a variable to the partner ID was now failing. After a lot of head scratching I discovered the problem was the addition of namespaces to the fields in the EDI_DC40 segment. To overcome this I needed an XPath query with a namespace manager, which had to be done in custom code (a sketch of this kind of query appears after the pipeline component code below).

I then tried to repeat the process with the delivery document. Unfortunately, when we tried to get sample typed data from SAP an exception was thrown: The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.

Our guess is that when the WCF-SAP adaptor tries to download the data it retrieves a data schema from SAP, and for some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way, resolving this problem did not look easy. While researching it I found an article showing how to get the data from SAP using the WCF-SAP adaptor without any XML tags: http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx

Reproduction of Mustansir's blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place.
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like:

<ReceiveIdoc><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc>

(I have trimmed part of the control record so that it fits cleanly here on one line.)

Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration, to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Message Body", select the "Path" radio button, and:

(a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData']
(b) Choose "String" for the Node Encoding.

What we've done is use an XPath to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. At this point you can, for example, use the Flat File pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml-formatted message in your orchestration.

This was potentially a much easier solution than adding the static maps to the orchestrations, and it overcame the issue with 'Typed' delivery documents. Not quite so fast... Note: when I followed Mustansir's blog, the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines, I received exceptions:

There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.".

Although the new flat file looked the same as the old one, there was a difference: in the original file every line was exactly 1064 characters long, while in the new file every line was truncated at the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode stage of the custom delivery and invoice flat file disassembler pipelines.
Execute method of the custom pipeline component (it relies on System.IO, System.Text, Microsoft.BizTalk.Message.Interop and Microsoft.BizTalk.Component.Interop):

public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
{
    // Convert the message stream to a string.
    Stream s = null;
    IBaseMessagePart bodyPart = inmsg.BodyPart;

    // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the HTTP adapter API and as a
    // getter and setter for the file adapter. Use GetOriginalDataStream to get the data instead.
    if (bodyPart != null)
        s = bodyPart.GetOriginalDataStream();

    string newMsg = string.Empty;
    string strLine;
    try
    {
        StreamReader sr = new StreamReader(s);
        strLine = sr.ReadLine();
        while (strLine != null)
        {
            // Pad every line to the fixed iDoc record length of 1064 characters.
            strLine = strLine.PadRight(1064, ' ') + "\r\n";
            newMsg += strLine;
            strLine = sr.ReadLine();
        }
        sr.Close();
    }
    catch (IOException)
    {
        throw new Exception("Error occurred trying to pad the message to 1064 characters");
    }

    // Convert back to a stream and assign it to the Data property.
    inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
    // Reset the position of the stream to zero.
    inmsg.BodyPart.Data.Position = 0;
    return inmsg;
}
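For completeness, here is a minimal sketch of the kind of XPath-with-namespace-manager query mentioned earlier for reading RECPRN from the typed iDoc's EDI_DC40 segment. It is an illustration only, not the actual code from this project: the class, method and namespace URI below are placeholders, and the real URI is the one generated for your typed iDoc schema.

using System.Xml;

public static class IdocControlRecord
{
    // Hypothetical helper: extracts the receiving partner (RECPRN) from a typed iDoc document.
    // "idoc" is assumed to already contain the message body as an XmlDocument.
    public static string GetReceivingPartner(XmlDocument idoc)
    {
        // The prefix is arbitrary; the URI must match the namespace generated for your iDoc schema.
        XmlNamespaceManager ns = new XmlNamespaceManager(idoc.NameTable);
        ns.AddNamespace("ns0", "http://example.com/your-typed-idoc-namespace"); // placeholder URI

        // Find RECPRN inside the EDI_DC40 control segment, wherever it sits in the hierarchy.
        XmlNode recprn = idoc.SelectSingleNode("//ns0:EDI_DC40/ns0:RECPRN", ns);
        return recprn == null ? null : recprn.InnerText;
    }
}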

    Read the article

  • WebSocket Applications using Java: JSR 356 Early Draft Now Available (TOTD #183)

    - by arungupta
WebSocket provides a full-duplex, bi-directional communication protocol over a single TCP connection. JSR 356 is defining a standard API for creating WebSocket applications in the Java EE 7 platform. This Tip Of The Day (TOTD) provides an introduction to WebSocket and shows how the JSR is evolving to support the programming model.

First, a little primer on WebSocket! WebSocket is a combination of the IETF RFC 6455 protocol and the W3C JavaScript API (still a Candidate Recommendation). The protocol defines an opening handshake and data transfer. The API enables Web pages to use the WebSocket protocol for two-way communication with the remote host. Unlike HTTP, there is no need to create a new TCP connection and send a full set of headers for every message exchanged between client and server. The WebSocket protocol defines basic message framing, layered over TCP. Once the initial handshake happens using HTTP Upgrade, the client and server can send messages to each other independently. There are no pre-defined message exchange patterns such as request/response or one-way between client and server; these need to be defined explicitly over the basic protocol.

The communication between client and server is largely symmetric, but there are two differences:

A client initiates a connection to a server that is listening for a WebSocket request.
A client connects to one server using a URI, while a server may listen to requests from multiple clients on the same URI.

Other than these two differences, the client and server behave symmetrically after the opening handshake; in that sense, they are considered "peers". After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to as "messages". On the wire, a message is composed of one or more frames. Application frames carry payload intended for the application and can be text or binary data. Control frames carry data intended for protocol-level signaling.

Now let's talk about the JSR! The Java API for WebSocket is being worked on as JSR 356 in the Java Community Process. It will define a standard API for building WebSocket applications and will provide support for:

Creating WebSocket Java components to handle bi-directional WebSocket conversations
Initiating and intercepting WebSocket events
Creation and consumption of WebSocket text and binary messages
The ability to define WebSocket protocols and content models for an application
Configuration and management of WebSocket sessions, like timeouts, retries, cookies, and connection pooling
Specification of how a WebSocket application will work within the Java EE security model

Tyrus is the Reference Implementation for JSR 356 and is already integrated in the GlassFish 4.0 promoted builds.

And finally some code! The API allows WebSocket endpoints to be created using annotations or an interface. This TOTD shows a simple sample using annotations; a subsequent blog will show more advanced samples. A POJO can be converted to a WebSocket endpoint by specifying @WebSocketEndpoint and @WebSocketMessage:

@WebSocketEndpoint(path="/hello")
public class HelloBean {

    @WebSocketMessage
    public String sayHello(String name) {
        return "Hello " + name + "!";
    }
}

@WebSocketEndpoint marks this class as a WebSocket endpoint listening at the URI defined by the path attribute. @WebSocketMessage identifies the method that will receive the incoming WebSocket message. The first method parameter is injected with the payload of the incoming message.
In this case it is assumed that the payload is text-based; it can also be of type byte[] if the payload is binary. A custom object may be used if the decoders attribute is specified on @WebSocketEndpoint; this attribute provides a list of classes that define how a custom object is decoded. The method can also take an optional Session parameter, which is injected by the runtime and captures the conversation between the two endpoints. The return type of the method can be String, byte[] or a custom object; the encoders attribute on @WebSocketEndpoint then needs to define how the custom object is encoded.

The client side is an index.jsp with embedded JavaScript. The JSP body looks like:

<div style="text-align: center;">
  <form action="">
    <input onclick="say_hello()" value="Say Hello" type="button">
    <input id="nameField" name="name" value="WebSocket" type="text"><br>
  </form>
</div>
<div id="output"></div>

The code is relatively straightforward. It has an HTML form with a button that invokes the say_hello() method and a text field named nameField. A div placeholder is available for displaying the output. Now, let's take a look at some JavaScript code:

<script language="javascript" type="text/javascript">
    var wsUri = "ws://localhost:8080/HelloWebSocket/hello";
    var websocket = new WebSocket(wsUri);
    websocket.onopen = function(evt) { onOpen(evt) };
    websocket.onmessage = function(evt) { onMessage(evt) };
    websocket.onerror = function(evt) { onError(evt) };

    function init() {
        output = document.getElementById("output");
    }

    function say_hello() {
        websocket.send(nameField.value);
        writeToScreen("SENT: " + nameField.value);
    }

This application is deployed as "HelloWebSocket.war" (download here) on GlassFish 4.0 promoted build 57, so the WebSocket endpoint is listening at "ws://localhost:8080/HelloWebSocket/hello". A new WebSocket connection is initiated by specifying the URI to connect to. The JavaScript API defines callback methods that are invoked when the connection is opened (onOpen), closed (onClose), an error is received (onError), or a message from the endpoint is received (onMessage). The callback and helper functions referenced in this excerpt are not shown here; a sketch of them appears at the end of this post. The client API has several send methods that transmit data over the connection. This particular script sends text data in the say_hello method using nameField's value from the HTML shown earlier. Each click on the button sends the textbox content to the endpoint over the WebSocket connection and receives a response based on the sayHello method shown above.

How to test this out?

Download the entire source project here or just the WAR file.
Download GlassFish 4.0 build 57 or later and unzip it.
Start GlassFish with "asadmin start-domain".
Deploy the WAR file with "asadmin deploy HelloWebSocket.war".
Access the application at http://localhost:8080/HelloWebSocket/index.jsp.

After clicking the "Say Hello" button, the response appears in the output div.

Here are some references for you:

WebSocket - Protocol and JavaScript API
JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds)

Subsequent blogs will discuss the following topics (not necessarily in that order):

Binary data as payload
Custom payloads using encoder/decoder
Error handling
Interface-driven WebSocket endpoint
Java client API
Client and Server configuration
Security
Subprotocols
Extensions
Other topics from the API
Capturing WebSocket on-the-wire messages
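For reference, here is a minimal sketch of the onOpen, onMessage and onError callbacks and the writeToScreen helper wired up in the JavaScript snippet above, which are not included in the excerpt. It simply appends each event to the output div; it is an illustration only, not the code shipped in HelloWebSocket.war.

<script language="javascript" type="text/javascript">
    // Hypothetical completion of the handlers referenced in the snippet above.
    function onOpen(evt) {
        writeToScreen("CONNECTED");
    }

    function onMessage(evt) {
        // evt.data carries the text payload returned by the sayHello endpoint.
        writeToScreen("RECEIVED: " + evt.data);
    }

    function onError(evt) {
        writeToScreen("ERROR: " + evt.data);
    }

    function writeToScreen(message) {
        // Append each message as a new paragraph inside the "output" div.
        var output = document.getElementById("output");
        var p = document.createElement("p");
        p.innerHTML = message;
        output.appendChild(p);
    }
</script>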

    Read the article

  • Can't get the L2TP IPSEC up and running

    - by Maciej Swic
I have an Ubuntu 11.10 (oneiric) server running on a ReadyNAS. I'm planning to use it to accept IPsec+L2TP connections through a router. However, the connection fails somewhere halfway through. I am using Openswan IPsec U2.6.28/K3.0.0-12-generic and trying to connect with an iOS 5 iPhone 4S. This is how far I can get:

auth.log:

Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "PSK"
Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "L2TP-PSK-NAT"
Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "L2TP-PSK-noNAT"
Jan 19 13:54:11 ubuntu pluto[1990]: added connection description "passthrough-for-non-l2tp"
Jan 19 13:54:11 ubuntu pluto[1990]: listening for IKE messages
Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: Trying new style NAT-T
Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: ESPINUDP(1) setup failed for new style NAT-T family IPv4 (errno=19)
Jan 19 13:54:11 ubuntu pluto[1990]: NAT-Traversal: Trying old style NAT-T
Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 192.168.19.99:500
Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 192.168.19.99:4500
Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo 127.0.0.1:500
Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo 127.0.0.1:4500
Jan 19 13:54:11 ubuntu pluto[1990]: adding interface lo/lo ::1:500
Jan 19 13:54:11 ubuntu pluto[1990]: adding interface eth0/eth0 2001:470:28:81:a00:27ff:*
Jan 19 13:54:11 ubuntu pluto[1990]: loading secrets from "/etc/ipsec.secrets"
Jan 19 13:54:11 ubuntu pluto[1990]: loading secrets from "/var/lib/openswan/ipsec.secrets.inc"
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [RFC 3947] method set to=109
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] method set to=110
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [8f8d83826d246b6fc7a8a6a428c11de8]
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [439b59f8ba676c4c7737ae22eab8f582]
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [4d1e0e136deafa34c4f3ea9f02ec7285]
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [80d0bb3def54565ee84645d4c85ce3ee]
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: ignoring unknown Vendor ID payload [9909b64eed937c6573de52ace952fa6b]
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 110
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 110
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 110
Jan 19 14:04:31 ubuntu pluto[1990]: packet from 95.*.*.233:500: received Vendor ID payload [Dead Peer Detection]
Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: responding to Main Mode from unknown peer 95.*.*.233
Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1
Jan 19 14:04:31 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: STATE_MAIN_R1: sent MR1, expecting MI2
Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed
Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2
Jan 19 14:04:33 ubuntu pluto[1990]: "PSK"[1] 95.*.*.233 #1: STATE_MAIN_R2: sent MR2, expecting MI3
Jan 19 14:05:03 ubuntu pluto[1990]: ERROR: asynchronous network error report on eth0 (sport=500) for message to 95.*.*.233 port 500, complainant 95.*.*.233: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)]

Router config: UDP 500, 1701 and 4500 are forwarded to 192.168.19.99 (the Ubuntu server running IPsec), and IPsec passthrough is enabled.

/etc/ipsec.conf

# /etc/ipsec.conf - Openswan IPsec configuration file
# This file: /usr/share/doc/openswan/ipsec.conf-sample
#
# Manual: ipsec.conf.5

version 2.0 # conforms to second version of ipsec.conf specification

config setup
    nat_traversal=yes
    #charonstart=yes
    #plutostart=yes
    protostack=netkey

conn PSK
    authby=secret
    forceencaps=yes
    pfs=no
    auto=add
    keyingtries=3
    dpdtimeout=60
    dpdaction=clear
    rekey=no
    left=192.168.19.99
    leftnexthop=192.168.19.1
    leftprotoport=17/1701
    right=%any
    rightprotoport=17/%any
    rightsubnet=vhost:%priv,%no
    dpddelay=10
    #dpdtimeout=10
    #dpdaction=clear

include /etc/ipsec.d/l2tp-psk.conf

/etc/ipsec.d/l2tp-psk.conf

conn L2TP-PSK-NAT
    rightsubnet=vhost:%priv
    also=L2TP-PSK-noNAT

conn L2TP-PSK-noNAT
    #
    # PreSharedSecret needs to be specified in /etc/ipsec.secrets as
    # YourIPAddress %any: "sharedsecret"
    authby=secret
    pfs=no
    auto=add
    keyingtries=3
    # we cannot rekey for %any, let client rekey
    rekey=no
    # Set ikelifetime and keylife to same defaults windows has
    ikelifetime=8h
    keylife=1h
    # l2tp-over-ipsec is transport mode
    type=transport
    #
    left=192.168.19.99
    #
    # For updated Windows 2000/XP clients,
    # to support old clients as well, use leftprotoport=17/%any
    leftprotoport=17/1701
    #
    # The remote user.
    #
    right=%any
    # Using the magic port of "0" means "any one single port". This is
    # a work around required for Apple OSX clients that use a randomly
    # high port, but propose "0" instead of their port.
    rightprotoport=17/%any
    dpddelay=10
    dpdtimeout=10
    dpdaction=clear

conn passthrough-for-non-l2tp
    type=passthrough
    left=192.168.19.99
    leftnexthop=192.168.19.1
    right=0.0.0.0
    rightsubnet=0.0.0.0/0
    auto=route

/etc/ipsec.secrets

include /var/lib/openswan/ipsec.secrets.inc
%any %any: PSK "my-key"
192.168.19.99 %any: PSK "my-key"

/etc/xl2tpd/xl2tpd.conf

[global]
debug network = yes
debug tunnel = yes
ipsec saref = no
listen-addr = 192.168.19.99

[lns default]
ip range = 192.168.19.201-192.168.19.220
local ip = 192.168.19.99
require chap = yes
refuse chap = no
refuse pap = no
require authentication = no
ppp debug = yes
pppoptfile = /etc/ppp/options.xl2tpd
length bit = yes

/etc/ppp/options.xl2tpd

pcp-accept-local
ipcp-accept-remote
noccp
auth
crtscts
idle 1800
mtu 1410
mru 1410
defaultroute
debug
lock
proxyarp
connect-delay 5000
ipcp-accept-local

/etc/ppp/chap-secrets

# Secrets for authentication using CHAP
# client  server  secret  IP addresses
maciekish  *  my-secret  *
*  maciekish  my-secret  *

I can't seem to find the problem. Other IPsec connections to other hosts work from the network I'm currently at.

    Read the article

< Previous Page | 7 8 9 10 11 12 13  | Next Page >