Search Results

Search found 10675 results on 427 pages for 'dynamic proxy'.

  • Multiple requests to server question

    - by embedded
    I have a DB with user account information. I've scheduled a CRON job which updates the DB with any new user data it fetches from their accounts. I was thinking that this may cause a problem, since all requests come from the same IP address and the server may block requests from that IP. Is this the case? If so, how do I avoid being banned? Should I be using a proxy? Thanks
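    A minimal mitigation sketch, assuming the remote service tolerates slow polling; the endpoint URL, ID file, delay range and proxy address below are all hypothetical:

        #!/bin/bash
        # Fetch each account with a randomized delay so requests don't arrive in a burst.
        PROXY="http://proxy.example.com:3128"   # hypothetical proxy, optional
        while read -r id; do
            curl --proxy "$PROXY" -s "https://example.com/api/accounts/$id" -o "data/$id.json"
            sleep $(( (RANDOM % 20) + 10 ))     # wait 10-30 seconds between requests
        done < account_ids.txt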

    Read the article

  • When should I use OperationContextScope inside of a WCF service?

    - by blinton
    I'm currently working on a WCF service that reaches out to another service to submit information in a few of its operations. The proxy for the second service is generated through the strongly typed ProxyFactory<T> class. I haven't experienced any issues but have heard I should do something like the following when making the call: using (new OperationContextScope((IContextChannel)_service)) _service.Send(message); So my question is: when is creating this new OperationContextScope appropriate, and why? Thanks!
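    For reference, a minimal sketch of when the scope earns its keep: it is typically only needed when you must touch OperationContext for that specific call, e.g. to attach an outgoing header; the header name and namespace here are hypothetical, not from the question:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        // Inside the scope, OperationContext.Current refers to _service's channel;
        // the previous context is restored when the block exits.
        using (new OperationContextScope((IContextChannel)_service))
        {
            var header = MessageHeader.CreateHeader(
                "CorrelationId", "urn:example", Guid.NewGuid().ToString());
            OperationContext.Current.OutgoingMessageHeaders.Add(header);
            _service.Send(message);
        }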

    Read the article

  • Will AJAX cause my site to have a high bounce % and hurt my search rankings?

    - by Cryo
    I'm building an art gallery website that updates its images via AJAX, for those who have javascript enabled, rather than requesting multiple page loads. I assume this will make it appear as though my site has a high bounce percentage. I understand that search engines will not be able to index the dynamic content, but will such a misinterpreted bounce rate hurt my search engine rankings, even if I have many return visitors?
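    One common mitigation, sketched under the assumption that an analytics snippet is already on the page; the URL scheme, loader function and ga() call are illustrative rather than taken from the question:

        // After swapping an image in via AJAX, give the view a real URL and
        // record a virtual pageview so analytics counts more than one hit per visit.
        function showImage(id) {
            loadImageViaAjax(id);                          // hypothetical existing loader
            history.pushState({image: id}, '', '/gallery/' + id);
            if (window.ga) {
                ga('send', 'pageview', '/gallery/' + id);  // assumes Universal Analytics
            }
        }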

    Read the article

  • GeoIP and Nginx

    - by JavierMartinez
    I have nginx with GeoIP, but it is not working correctly. The issue is this: nginx is getting geodata from $_SERVER['REMOTE_ADDR'] instead of $_SERVER['HTTP_X_HAPROXY_IP'], which holds the real client IP. So the reported geodata belongs to my server IP instead of the client IP. Does anybody know where the error could be and how to fix it?
    Nginx version and compiled modules:
    nginx -V nginx version: nginx/1.2.3 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-file-aio --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-auth-pam --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-echo --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-upstream-fair --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-dav-ext-module --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-syslog --add-module=/usr/src/nginx/source/nginx-1.2.3/debian/modules/nginx-cache-purge
    nginx site conf (frontend machine):
    server { root /var/www/storage; server_name ~^.*(\.)?mydomain.com$; if ($host ~ ^(.*)\.mydomain\.com$) { set $new_host $1.mydomain.com; } if ($host !~ ^(.*)\.mydomain\.com$) { set $new_host www.mydomain.com; } add_header Staging true; real_ip_header X-HAProxy-IP; set_real_ip_from 10.5.0.10/32; location /files { expires 30d; if ($uri !~ ^/files/([a-fA-F0-9]+)_(220|45)\.jpg$) { return 403; } rewrite ^/files/([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9][a-fA-F0-9])([a-fA-F0-9]+)_(220|45)\.jpg$ /files/$1/$2/$3/$4/$1$2$3$4$5_$6.jpg break; try_files $uri @to_backend; } location /assets { if ($uri ~ ^/assets/r([a-zA-Z0-9]+[^/])(/(css|js|fonts)/.*)) { rewrite ^/assets/r([a-zA-Z0-9]+[^/])/(css|js|fonts)/(.*)$ /assets/$2/$3 break; } try_files $uri @to_backend; } location / { proxy_set_header Host $new_host; proxy_set_header X-HAProxy-IP $remote_addr; proxy_pass http://10.5.0.10:8080; } location @to_backend { proxy_set_header Host $new_host; proxy_pass http://10.5.0.10:8080; } }
    nginx.conf (backend machine):
    http{ ... ## # GeoIP Config ## geoip_country /etc/nginx/geoip/GeoIP.dat; # the country IP database geoip_city /etc/nginx/geoip/GeoLiteCity.dat; # the city IP database ... }
    fastcgi_params (backend machine):
    ### SET GEOIP Variables ### fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; fastcgi_param GEOIP_COUNTRY_CODE3 $geoip_country_code3; fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; fastcgi_param GEOIP_CITY_COUNTRY_CODE $geoip_city_country_code; fastcgi_param GEOIP_CITY_COUNTRY_CODE3 $geoip_city_country_code3; fastcgi_param GEOIP_CITY_COUNTRY_NAME $geoip_city_country_name; fastcgi_param GEOIP_REGION $geoip_region; fastcgi_param GEOIP_CITY $geoip_city; fastcgi_param GEOIP_POSTAL_CODE $geoip_postal_code; fastcgi_param GEOIP_CITY_CONTINENT_CODE $geoip_city_continent_code; fastcgi_param GEOIP_LATITUDE $geoip_latitude; fastcgi_param GEOIP_LONGITUDE $geoip_longitude;
    haproxy.conf (frontend machine):
    defaults log global option forwardfor option httpclose mode http retries 3 option redispatch maxconn 4096 contimeout 100000 clitimeout 100000 srvtimeout 100000 listen cluster_webs *:8080 mode http option tcpka option httpchk option httpclose option forwardfor balance roundrobin server backend-stage 10.5.0.11:80 weight 1
    $_SERVER dump: http://paste.laravel.com/7dy
    Where 10.5.0.10 is the frontend private IP and 10.5.0.11 the backend private IP.
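    One likely direction, offered as an assumption rather than a verified fix: the realip directives above live on the frontend, but the GeoIP lookup happens on the backend, which only ever sees the frontend's address. Moving realip to the backend (the machine that loads GeoIP.dat) should make $remote_addr, and therefore the geoip_* variables, reflect the client:

        # backend nginx, http block sketch; 10.5.0.10 is the frontend, as above
        set_real_ip_from 10.5.0.10;       # trust only the frontend proxy
        real_ip_header   X-HAProxy-IP;    # take the client address from this header
        geoip_country /etc/nginx/geoip/GeoIP.dat;
        geoip_city    /etc/nginx/geoip/GeoLiteCity.dat;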

    Read the article

  • Can't get network bridging to work

    - by Antonis Christofides
    I'm trying to get network bridging to work on Debian squeeze (I'm experimenting in order to make a QEMU/KVM virtual machine that will be visible to the outside network as if it were a distinct machine). The problem is that when I type brctl addif br0 eth0 then I lose connectivity to the network until I type brctl delif br0 eth0. More specifically, here's what my machine looks like before I do anything (essentially eth0 is listening on 147.102.160.153): root@laura:/home/anthony# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 8c:73:6e:db:1c:1b brd ff:ff:ff:ff:ff:ff inet 147.102.160.153/24 brd 147.102.160.255 scope global eth0 inet6 2001:648:2000:a0:8e73:6eff:fedb:1c1b/64 scope global dynamic valid_lft 2591848sec preferred_lft 604648sec inet6 fe80::8e73:6eff:fedb:1c1b/64 scope link valid_lft forever preferred_lft forever 3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN qlen 1000 link/ether 4c:ed:de:8e:44:d7 brd ff:ff:ff:ff:ff:ff 4: vboxnet0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000 link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff 5: pan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether ee:7c:88:59:d0:e8 brd ff:ff:ff:ff:ff:ff Now let me add the bridge: root@laura:/home/anthony# brctl addbr br0 root@laura:/home/anthony# ip tuntap add dev tap0 mode tap root@laura:/home/anthony# ip link set tap0 up root@laura:/home/anthony# brctl addif br0 tap0 Until here everything continues to work normally. Finally, I try to add eth0 to the bridge: root@laura:/home/anthony# brctl addif br0 eth0 At this point, I no longer have a network connection. If I try to ping something, it reports "Destination Host Unreachable". The output of ip addr show seems normal: root@laura:/home/anthony# ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether 8c:73:6e:db:1c:1b brd ff:ff:ff:ff:ff:ff inet 147.102.160.153/24 brd 147.102.160.255 scope global eth0 inet6 2001:648:2000:a0:8e73:6eff:fedb:1c1b/64 scope global dynamic valid_lft 2591908sec preferred_lft 604708sec inet6 fe80::8e73:6eff:fedb:1c1b/64 scope link valid_lft forever preferred_lft forever [snip wlan0, vboxnet0 and pan0, which are down and irrelevant] 8: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN link/ether 16:30:f2:67:ab:75 brd ff:ff:ff:ff:ff:ff 9: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500 link/ether 16:30:f2:67:ab:75 brd ff:ff:ff:ff:ff:ff inet6 fe80::1430:f2ff:fe67:ab75/64 scope link valid_lft forever preferred_lft forever Also: root@laura:/home/anthony# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 147.102.160.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth0 0.0.0.0 147.102.160.200 0.0.0.0 UG 0 0 0 eth0 I can't understand what I'm doing wrong.
I want the machine to continue to listen on 147.102.160.153 on eth0, and in addition to that I want to have a tap0 interface, bridged to eth0, that will be available to the guest machine so that the latter listens on another ip address (say 147.102.160.205). (If there's another way to achieve what I want, I'm also interested.)
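    A sketch of the standard fix, under the stated goal: once eth0 is enslaved to the bridge it stops answering for its own IP, so the address and the default route have to move to br0 itself (147.102.160.200 is the gateway from the routing table above):

        brctl addif br0 eth0
        ip addr flush dev eth0                     # eth0 becomes a plain bridge port
        ip addr add 147.102.160.153/24 dev br0     # the host's address now lives on br0
        ip link set br0 up
        ip route add default via 147.102.160.200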

    Read the article

  • DHCP and DNS services configuration for VOIP system, windows domain, etc

    - by Stemen
    My company has numerous physical offices (for purposes of this discussion, 15 buildings). Some of them are well-connected to our primary data center via fiber. Others will be connected to the data center by P2P T1. We are in the beginning stages of implementing an Avaya VOIP telephone system, and we will be replacing a significant portion of our network infrastructure in the process. In tandem with the phone system implementation, we are going to be re-addressing some of our networks, and consolidating most of our Windows domains into one (not all domains, just most). We currently have quite a few Windows domains, and they of course each have their own DNS zones. A few of those networks currently use DHCP, but the majority use static IP assignments for every device. I'm tired of managing static assignments -- I want to use DHCP configuration on everything except servers. Printers and etc will have DHCP reservations. The new IP phones will need to get IP addresses from DHCP, though they need to be in a separate VLAN from the computers/printers/etc. The computers and printers need to be registered in DNS. That's currently handled by the Windows DHCP servers on each of the respective domains. We need to place a priority on DHCP and DNS being available on a per-site basis (in case something were to interrupt the WAN connection) for computers and (primarily) phones. Smaller locations (which will have IP phones but not be a member of any Windows domain) will not have any Windows DNS/DHCP server(s) available. We also are looking for the easiest way to replace a part if it were to fail. That is to say, if a server/appliance/router hosting DHCP were to crash hard, and we couldn't extremely quickly recover the DHCP reservations and leases (and subsequently restore them onto a cold spare), we anticipate that bad things could happen. What is the best idea for how to re-implement DNS and DHCP keeping all of the above in mind? Some thoughts that have been raised (by myself or my coworkers): Use Windows DNS and DHCP servers, where they exist, and use IP helpers to route DHCP requests to some other Windows server if necessary. May not be acceptable if the WAN goes down and clients don't get a DHCP response. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by Cisco routers. Every site would be covered for DHCP, but from what I've read, Cisco routers can't handle dynamic registration of DHCP clients to Windows DNS servers, which might create a problem where Cisco routers are used for DHCP. Use Windows DNS (everywhere, over WAN in some cases) and a mix of Windows DHCP and DHCP provided by some service running on an extremely low-price linux server. Is there any such software that would allow DHCP leases granted by these linux boxes to be dynamically registered on the Windows DNS servers? Come up with a Linux solution for both DNS and DHCP, and deploy low-price linux servers to every site. Requirements would be that the DNS zone be multi-master (like Windows DNS integrated with Active Directory), that DHCP be able to make dynamic DNS registrations in that zone, for every lease (where a hostname is provided and is thus possible), and that multiple servers be either authoritative for the same DHCP scope or at least receiving a real-time copy / replication / sync of the leases table so that if one server dies, we still know which MAC has what address. Purchase dedicated DNS/DHCP appliances, deploying to all sites. 
    From what I read/see, this solves all of our technical problems. Then come the financial problems... I don't have a ton of money to spend on this. Or, some other solution that we've thus far overlooked and will consider upon recommendation. Can Cisco routers or Windows servers sync DHCP lease tables so that multiple servers can be authoritative (or active/passive for all I care) for the same scope, in case one of the partners were to fail? I've read online (repeatedly) that ISC's DHCP is able to maintain the same lease table across multiple servers, in order to solve this problem. Does anyone have any experience or advice regarding that?
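    For the ISC option just mentioned, a minimal sketch of what its failover pairing looks like; the peer addresses and scope are placeholders, and the dynamic-DNS line assumes the Windows zones accept the updates (zones requiring secure updates need more work than this):

        # /etc/dhcp/dhcpd.conf on the primary -- ISC dhcpd failover sketch
        failover peer "dhcp-ha" {
            primary;
            address 10.1.1.10;          # this server (placeholder)
            peer address 10.1.1.11;     # the secondary (placeholder)
            max-response-delay 60;
            max-unacked-updates 10;
            mclt 3600;
            split 128;                  # share the scope 50/50
        }
        ddns-update-style interim;      # register leases in DNS
        subnet 10.1.1.0 netmask 255.255.255.0 {
            pool {
                failover peer "dhcp-ha";
                range 10.1.1.100 10.1.1.200;
            }
        }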

    Read the article

  • How to configure DNS Server on Fedora

    - by user863873
    I want to learn how to configure my home PC into a web server with its own domain and hosting. My IP is 109.99.141.133 and it currently points to a phpinfo page hosted on my home server. My registered domain is: anunta-anunturi.ro I searched for a tutorial and I've read that I have to configure /etc/named.conf and the file sources for the new zone that I create. So, from the tutorials, my /etc/named.conf looks like this: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { listen-on port 53 { 127.0.0.1; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { localhost; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "anunta-anunturi.ro" IN { type master; file "/etc/anunta-anunturi.db"; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; My /etc/anunta-anunturi.db file looks like this — I'm not sure if this is okay, or if it's the easy one. $TTL 86400 anunta-anunturi.ro. IN SOA serveur.anunta-anunturi.ro. root.serveur.anunta-anunturi.ro. ( 1997022700 ; Serial 28800 ; Refresh 14400 ; Retry 3600000 ; Expire 86400 ) ; Minumun IN NS serveur.anunta-anunturi.ro. IN MX 10 mail.anunta-anunturi.ro. serveur.anunta-anunturi.ro. IN A 192.168.1.37 www.anunta-anunturi.ro. IN A 192.168.1.37 mail.anunta-anunturi.ro. IN A 192.168.1.37 Extra info: At home I receive internet from my ISP through a router. My home PC and server receive their IPs automatically from the router when I start/restart. In my local home network, my server receives the IP 192.168.1.37 from the router. When I enter 109.99.141.133 in my browser, it points to the router, which forwards port 80 to local IP 192.168.1.37 (my home server). Questions: Are my two files good? What/where is my nameserver that I need to copy/paste to my top level domain (where I registered my domain: rotld.ro)?
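    To the first question, two things in the files above stand out; this is a hedged sketch, not a verified config: allow-query { localhost; } will refuse every query from outside, and public records that point at 192.168.1.37 hand visitors an address only your LAN can reach.

        // named.conf options sketch: answer authoritative queries from anywhere
        options {
            listen-on port 53 { any; };
            allow-query { any; };
            recursion no;    // authoritative-only
        };

        ; anunta-anunturi.db sketch: publish the public IP, not the LAN one
        serveur IN A 109.99.141.133
        www     IN A 109.99.141.133
        mail    IN A 109.99.141.133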

    Read the article

  • Initial Cisco ASA 5510 Config

    - by Brendan ODonnell
    Fair warning, I'm a bit of a noob so please bear with me. I'm trying to set up a new ASA 5510. I have a pretty simple set up with one /24 on the inside NATed to a DHCP address on the outside. Everything on the inside works and I can ping the outside interface from external devices. No matter what I do I can't get anything internal to route across the border to the outside and back. To try and eliminate ACL issues as a possibility I added permit any any rules to the incoming access lists on the inside and outside interfaces. I'd appreciate any help I can get. Here's the sh run. : Saved : ASA Version 8.4(3) ! hostname gateway domain-name xxx.local enable password xxx encrypted passwd xxx encrypted names ! interface Ethernet0/0 nameif outside security-level 0 ip address dhcp setroute ! interface Ethernet0/1 nameif inside security-level 100 ip address 10.x.x.x 255.255.255.0 ! interface Ethernet0/2 shutdown no nameif no security-level no ip address ! interface Ethernet0/3 shutdown no nameif no security-level no ip address ! interface Management0/0 nameif management security-level 100 ip address 192.168.1.1 255.255.255.0 management-only ! ftp mode passive dns domain-lookup inside dns server-group DefaultDNS name-server 10.x.x.x domain-name xxx.local same-security-traffic permit inter-interface same-security-traffic permit intra-interface object network inside-network subnet 10.x.x.x 255.255.255.0 object-group protocol TCPUDP protocol-object udp protocol-object tcp access-list outside_access_in extended permit ip any any access-list inside_access_in extended permit ip any any pager lines 24 logging enable logging buffered informational logging asdm informational mtu management 1500 mtu inside 1500 mtu outside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 icmp permit any inside icmp permit any outside no asdm history enable arp timeout 14400 ! object network inside-network nat (any,outside) dynamic interface access-group inside_access_in in interface inside access-group outside_access_in in interface outside timeout xlate 3:00:00 timeout pat-xlate 0:00:30 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy user-identity default-domain LOCAL aaa authentication ssh console LOCAL aaa authentication http console LOCAL http server enable http 192.168.1.0 255.255.255.0 management http 10.x.x.x 255.255.255.0 inside http authentication-certificate management http authentication-certificate inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart warmstart telnet timeout 5 ssh 192.168.1.0 255.255.255.0 management ssh 10.x.x.x 255.255.255.0 inside ssh timeout 5 ssh version 2 console timeout 0 dhcp-client client-id interface outside dhcpd address 192.168.1.2-192.168.1.254 management dhcpd enable management ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn username xxx password xxx encrypted ! class-map inspection_default match default-inspection-traffic ! !
policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options inspect icmp ! service-policy global_policy global prompt hostname context no call-home reporting anonymous Cryptochecksum:fe19874e18fe7107948eb0ada6240bc2 : end no asdm history enable
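    One hedged thing to try, not a verified fix: the dynamic PAT above is written as nat (any,outside), and scoping it to the inside interface is the canonical form for this design on 8.4, which also makes the translation direction unambiguous:

        ! sketch: re-scope the dynamic PAT to the inside interface
        object network inside-network
         subnet 10.x.x.x 255.255.255.0
         nat (inside,outside) dynamic interface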

    Read the article

  • Apache cyclic redirection problem

    - by slicedlime
    I have an extremely weird problem with one of my sites. I run a number of blogs off a single apache2 server with a shared wordpress install. Each site has a www.domain.com main domain, but a ServerAlias of domain.com. This works fine for all the blogs except one, which instead of redirecting to www.domain.com redirects to domain.com, causing a cyclic redirection. The configuration for each host looks like this: <VirtualHost *:80> ServerName www.domain.com ServerAlias domain.com DocumentRoot "/home/www/www.domain.com" <Directory "/home/www/www.domain.com"> Options MultiViews Indexes Includes FollowSymLinks ExecCGI AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> As this didn't work, I tried a mod_rewrite rule for it, which still didn't redirect correctly. The weird thing here is that if i rewrite it to redirect to any other domain it will redirect correctly, even to another subdomain. So a rewrite rule for domain.com that redirects to foo.domain.com works, but not to www.domain.com. In the same way, trying to redirect to www.domain.com/foo/ ends me up with a redirection to domain.com/foo/. Even weirder, I tried setting up domain.com as a completely separate virtual host, and ran this php test script as index.php on it: <?php header('Location: http://www.domain.com/' . $_SERVER["request_uri"]); ?> Hitting domain.com still redirects to domain.com! Checking out the headers sent to the server verifies that I get exactly the redirect URL I wanted, except with the "www." stripped. The same script works like a charm if I replace www. with foo or redirect to any other domain. This has now foiled me for a long time. I've diffed the vhosts configs for a working domain and the faulty one, and the only difference is the domain name itself. I've diffed the .htaccess files for both sites, and the only difference is a path related to the sharing of wordpress installation for the blogs: php_value include_path ".:/home/www/www.domain.com/local/:/home/www/www.domain.com/" I searched through everything in /etc (including apache conf) for the domain name of the faulty host and found nothing weird, searched through everything in /etc for one of the working ones to make sure it didn't differ, I even went so far as to check on the DNS setup of the two domains to make sure there wasn't anything strange going on. Here's the response for the faulty one: user@localhost dir $ wget -S domain.com --2010-03-20 21:47:24-- http://domain.com/ Resolving domain.com... x.x.x.x Connecting to domain.com|x.x.x.x|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 301 Moved Permanently Via: 1.1 ISA Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 0 Date: Sat, 20 Mar 2010 20:47:24 GMT Location: http://domain.com/ Content-Type: text/html; charset=UTF-8 Server: Apache X-Powered-By: PHP/5.2.10-pl0-gentoo X-Pingback: http://domain.com/xmlrpc.php Keep-Alive: timeout=15, max=100 Location: http://domain.com/ [following] And a working one: user@localhost dir $ wget -S domain.com --2010-03-20 21:51:33-- http://domain.com/ Resolving domain.com... x.x.x.x Connecting to domain.com|x.x.x.x|:80... connected. HTTP request sent, awaiting response...
HTTP/1.1 301 Moved Permanently Via: 1.1 ISA Connection: Keep-Alive Proxy-Connection: Keep-Alive Content-Length: 0 Date: Sat, 20 Mar 2010 20:51:33 GMT Location: http://www.domain.com/ Content-Type: text/html; charset=UTF-8 Server: Apache X-Powered-By: PHP/5.2.10-pl0-gentoo X-Pingback: http://www.domain.com/xmlrpc.php Keep-Alive: timeout=15, max=100 Location: http://www.domain.com/ [following] I'm stumped. I've had this problem for a long time, and I feel like I've tried everything. I can't see why the different domains would act differently under the same installation with the same settings. Help :(
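    Two speculative leads rather than a diagnosis: both responses above carry "Via: 1.1 ISA", so a caching ISA proxy between the tester and the server is worth ruling out; and since the faulty blog shares the WordPress install (note the X-Pingback header), WordPress's own canonical redirect would produce exactly this behavior if that one blog's home/siteurl option is stored without the www. As a blunt Apache-level workaround, in case the earlier rewrite attempt differed, the canonical form of the rule is:

        # in the faulty vhost: force the canonical www host before anything else
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]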

    Read the article

  • Is Apache 2.2.22 able to sustain 1,000 simultaneously connected clients?

    - by Fnux
    For an article in a newspaper, I'm benchmarking 5 different web servers (Apache2, Cherokee, Lighttpd, Monkey and Nginx). The tests consist of measuring the execution times as well as different parameters such as the number of requests served per second and the amount of RAM and CPU used, during a growing load of simultaneous clients (from 1 to 1,000 in steps of 10), each client sending 1,000,000 requests for a small fixed file, then for a medium fixed file, then for a small piece of dynamic content (hello.php) and finally for a complex piece of dynamic content (the computation of the reimbursement of a loan). All the web servers are able to sustain such a load (up to 1,000 clients) except Apache2, which always stops responding when the test reaches 450 to 500 simultaneous clients. My configuration is: CPU: AMD FX 8150 8 cores @ 4.2 GHz RAM: 32 Gb. SSD: 2 x Crucial 240 Gb SATA6 OS: Ubuntu 12.04.3 64 bit WS: Apache 2.2.22 My Apache2 configuration is as follows: /etc/apache2/apache2.conf LockFile ${APACHE_LOCK_DIR}/accept.lock PidFile ${APACHE_PID_FILE} Timeout 30 KeepAlive On MaxKeepAliveRequests 1000000 KeepAliveTimeout 2 ServerName "fnux.net" <IfModule mpm_prefork_module> StartServers 16 MinSpareServers 16 MaxSpareServers 16 ServerLimit 2048 MaxClients 1024 MaxRequestsPerChild 0 </IfModule> User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} AccessFileName .htaccess <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> DefaultType None HostnameLookups Off ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel emerg Include mods-enabled/*.load Include mods-enabled/*.conf Include httpd.conf Include ports.conf LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent Include conf.d/ Include sites-enabled/ /etc/apache2/ports.conf NameVirtualHost *:8180 Listen 8180 <IfModule mod_ssl.c> Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> /etc/apache2/mods-available <IfModule mod_fastcgi.c> AddHandler php5-fcgi .php Action php5-fcgi /cgi-bin/php5.external <Location "/cgi-bin/php5.external"> Order Deny,Allow Deny from All Allow from env=REDIRECT_STATUS </Location> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:8180> ServerAdmin webmaster@localhost DocumentRoot /var/www/apache2 <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel emerg ##### CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> <IfModule mod_fastcgi.c> AddHandler php5-fcgi .php Action php5-fcgi /php5-fcgi Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization </IfModule> </VirtualHost> /etc/security/limits.conf * soft nofile 1000000 * hard nofile 1000000 So, I would truly appreciate your advice on setting up Apache2 to make it able to sustain 1,000 simultaneous clients, if this is even possible. TIA for your help. Cheers.
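    A hedged starting point rather than a definitive answer: under prefork, 1,000 keep-alive clients pin 1,000 resident processes, and the pool above starts at 16 servers with MaxClients 1024, so the ramp past ~500 may simply outrun process creation. A threaded MPM covers the same concurrency with far fewer processes; the numbers below are illustrative only:

        # worker MPM sketch for Apache 2.2 (requires the worker MPM build)
        <IfModule mpm_worker_module>
            ServerLimit           64
            StartServers           8
            ThreadsPerChild       64
            MaxClients          4096    # 64 processes x 64 threads
            MinSpareThreads      256
            MaxSpareThreads     1024
            MaxRequestsPerChild    0
        </IfModule>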

    Read the article

  • Mac OS X 10.8 VPN Server: Bypass VPN for LAN traffic (routing LAN traffic to secondary connection)

    - by Dan Robson
    I have somewhat of an odd setup for a VPN server with OS X Mountain Lion. It's essentially being used as a bridge to bypass my company's firewall to our extranet connection - certain things our team needs to do require unfettered access to the outside, and changing IT policies to allow traffic through the main firewall is just not an option. The extranet connection is provided through a Wireless-N router (let's call it Wi-Fi X). My Mac Mini server is configured with the connection to this router as the primary connection, thus unfettered access to the internet via the router. Connections to this device on the immediate subnet are possible through the LAN port, but outside the subnet things are less reliable. I was able to configure the VPN server to provide IP addresses to clients in the 192.168.11.150-192.168.11.200 range using both PPTP and L2TP, and I'm able to connect to the extranet through the VPN using the standard Mac OS X VPN client in System Preferences, however unsurprisingly, a local address (let's call it internal.company.com) returns nothing. I tried to bypass the limitation of the VPN Server by setting up Routes in the VPN settings. Our company uses 13.x.x.x for all internal traffic, instead of 10.x.x.x, so the routing table looked something like this: IP Address ---------- Subnet Mask ---------- Configuration 0.0.0.0 248.0.0.0 Private 8.0.0.0 252.0.0.0 Private 12.0.0.0 255.0.0.0 Private 13.0.0.0 255.0.0.0 Public 14.0.0.0 254.0.0.0 Private 16.0.0.0 240.0.0.0 Private 32.0.0.0 224.0.0.0 Private 64.0.0.0 192.0.0.0 Private 128.0.0.0 128.0.0.0 Private I was under the impression that if nothing was entered here, all traffic was routed through the VPN. With something entered, only traffic specifically marked to go through the VPN would go through the VPN, and all other traffic would be up to the client to access using its own default connection. This is why I had to specifically mark every subnet except 13.x.x.x as Private. My suspicion is that since I can't reach the VPN server from outside the local subnet, it's not making a connection to the main DNS server and thus can't be reached on the larger network. I'm thinking that entering hostnames like internal.company.com aren't kicked back to the client to resolve, because the server has no idea that the IP address falls in the public range, since I suspect (probably should ping test it but don't have access to it right now) that it can't reach the DNS server to find out anything about that hostname. It seems to me that all my options for resolving this all boil down to the same type of solution: Figure out how to reach the DNS with the secondary connection on the server. I'm thinking that if I'm able to do [something] to get my server to recognize that it should also check my local gateway (let's say Server IP == 13.100.100.50 and Gateway IP == 13.100.100.1). From there Gateway IP can tell me to go find DNS Server at 13.1.1.1 and give me information about my internal network. I'm very confused about this path -- really not sure if I'm even making sense. I thought about trying to do this client side, but that doesn't make sense either, since that would add time to each and every client side setup. Plus, it just seems more logical to solve it on the server - I could either get rid of my routing table altogether or keep it - I think the only difference would be that internal traffic would also go through the server - probably an unnecessary burden on it. Any help out there? Or am I in over my head? 
Forward proxy or transparent proxy is also an option for me, although I have no idea how to set either of those up. (I know, Google is my friend.)
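    A sketch of the "also check the local gateway" idea using the example addresses above (gateway 13.100.100.1, internal DNS 13.1.1.1); it only helps if that gateway really routes to the internal DNS, which is an assumption:

        # on the Mac mini: send all 13.x.x.x traffic via the internal gateway
        sudo route -n add -net 13.0.0.0 -netmask 255.0.0.0 13.100.100.1
        # resolve only *.company.com names against the internal DNS (macOS resolver(5))
        sudo mkdir -p /etc/resolver
        echo "nameserver 13.1.1.1" | sudo tee /etc/resolver/company.com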

    Read the article

  • Strange enduser experience with Liferay, Glassfish and Apache on RedHat

    - by Pete Helgren
    Tried multiple forums to get to the bottom of this. I hope I can get some direction here: Here is the stack I am working with: Red Hat Enterprise Linux Server release 5.6 (Tikanga) Liferay 6.0.6 on Glassfish 3.0.1 MySQL 5.0.77 Apache 2.2.3 The Liferay portal provides a variety of portlets to end users. Static content (web pages), static resources (primarily pdf and mp3 files 1mb - 80mb in size), File upload and download capabilities (primarily 40-60mb mp3 files) and online streaming of those MP3 files. Here are the strange end user experiences: Under normal load: (20-30) users uploading, downloading or streaming files and 20-30 accessing static content (some of it downloads), we see the following: 1) Clicking a link triggers the download of a portion of an MP3 (the portion is a few seconds long). 2) Clicking on a link triggers the download of the page content rather than rendering. 3) Clicking a link causes the page to dump binary data to the end user rather than the expected content. 4) Clicking a link returns the text of a javascript file rather than rendering the page. Each occurrence is totally random (or appears so). Sometimes it works, sometimes it doesn't. It seems to have no relation to browser or client OS. The strange events seem to occur much more frequently when using an SSL connection rather than regular http. Apache serves as a proxy server only (reverse). It basically passes all the requests through to Glassfish. There isn't any static content served by the Apache proxy. We rebuilt the entire stack from scratch and redeployed the portlet wars and still have the same issues. Liferay is running as a single server (not clustered). We disabled mod_cache in Apache. The problems are more frequent as the server load grows. This morning the load is pretty light and we are seeing few problems but the use of the site will grow, particularly tonight around 9pm CST through Wednesday morning. You could try the site (http://preview.bsfinternational.org) during those times and I would expect that you might experience one of the weirdnesses as you randomly click links on the site (https is invoked only when signed in). Again, https seems to exacerbate the issue. This seems very much like a caching issue but I don't know where in the stack to start peeling the onion. Apache? Liferay? Glassfish? MySQL? Maybe even Redhat? We are stumped and most forums we have posted to (Liferay and Glassfish) have returned very few suggestions. I just need an idea of where to start looking. I understand that we could have a portlet problem. EDIT: Opening in a hex editor the files that appear to be pages that download rather than render, we see that the first 4000 characters are "junk" and only then the normal "HTTP/1.1 ..." header. So something is dumping a jumble of characters up to offset 4000 (when viewing it in a hex editor). Perhaps a clue? Ideas?
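    Given the 4000 bytes of junk described in the edit, a hedged way to start peeling the onion is to capture raw bytes both straight from Glassfish and through Apache and see which hop introduces them; the port and path here are assumptions:

        # straight to Glassfish (assumed listener on 8080), with no decoding
        curl -s --raw -D - http://localhost:8080/somepage | head -c 4096 | xxd | head
        # the same page through the Apache reverse proxy
        curl -s --raw -D - http://preview.bsfinternational.org/somepage | head -c 4096 | xxd | head
        # if only the proxied copy is garbled, suspect Apache's proxying;
        # if both are, look at Glassfish/Liferay response handling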

    Read the article

  • Samba with remote LDAP authentication doesn't see users properly

    - by LucasBr
    I'm trying to set up a samba server authenticated by a remote LDAP server, and I'm having some problems that I can't figure out how to solve. I was able to run getent passwd on the samba server and could see all the users from the LDAP server, but when I tried to access \\SAMBASERVER from my windows box I got this in /var/log/samba/log.mywindowsbox: <...snip...> [2012/10/19 13:05:22.449684, 2] smbd/sesssetup.c:1413(setup_new_vc_session) setup_new_vc_session: New VC == 0, if NT4.x compatible we would close all old resources. [2012/10/19 13:05:22.449692, 3] smbd/sesssetup.c:1212(reply_sesssetup_and_X_spnego) Doing spnego session setup [2012/10/19 13:05:22.449701, 3] smbd/sesssetup.c:1254(reply_sesssetup_and_X_spnego) NativeOS=[] NativeLanMan=[] PrimaryDomain=[] [2012/10/19 13:05:22.449717, 3] libsmb/ntlmssp.c:747(ntlmssp_server_auth) Got user=[lucas] domain=[BUSINESS] workstation=[MYWINDOWSBOX] len1=24 len2=24 [2012/10/19 13:05:22.449747, 3] auth/auth.c:216(check_ntlm_password) check_ntlm_password: Checking password for unmapped user [BUSINESS]\[lucas]@[MYWINDOWSBOX] with the new password interface [2012/10/19 13:05:22.449759, 3] auth/auth.c:219(check_ntlm_password) check_ntlm_password: mapped user is: [SAMBASERVER]\[lucas]@[MYWINDOWSBOX] [2012/10/19 13:05:22.449773, 3] smbd/sec_ctx.c:210(push_sec_ctx) push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1 [2012/10/19 13:05:22.449783, 3] smbd/uid.c:429(push_conn_ctx) push_conn_ctx(0) : conn_ctx_stack_ndx = 0 [2012/10/19 13:05:22.449791, 3] smbd/sec_ctx.c:310(set_sec_ctx) setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1 [2012/10/19 13:05:22.449922, 2] lib/smbldap.c:950(smbldap_open_connection) smbldap_open_connection: connection opened [2012/10/19 13:05:23.001517, 3] lib/smbldap.c:1166(smbldap_connect_system) ldap_connect_system: successful connection to the LDAP server [2012/10/19 13:05:23.007713, 3] smbd/sec_ctx.c:418(pop_sec_ctx) pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0 [2012/10/19 13:05:23.007733, 3] auth/auth_sam.c:399(check_sam_security) check_sam_security: Couldn't find user 'lucas' in passdb. [2012/10/19 13:05:23.007743, 2] auth/auth.c:314(check_ntlm_password) check_ntlm_password: Authentication for user [lucas] -> [lucas] FAILED with error NT_STATUS_NO_SUCH_USER [2012/10/19 13:05:23.007760, 3] smbd/error.c:80(error_packet_set) error packet at smbd/sesssetup.c(111) cmd=115 (SMBsesssetupX) NT_STATUS_LOGON_FAILURE [2012/10/19 13:05:23.010469, 3] smbd/process.c:1489(process_smb) Transaction 3 of length 142 (0 toread) <...snip...> /etc/samba/smb.conf file follows: [global] dos charset = 850 unix charset = LOCALE workgroup = BUSINESS netbios name = SAMBASERVER bind interfaces only = true interfaces = lo eth0 eth1 smb ports = 139 hosts deny = All hosts allow = 192.168.78. 192.168.255. 127.0.0.1 10.149.122. 192.168.0. name resolve order = wins bcast hosts log level = 3 syslog = 0 log file = /var/log/samba/log.%m max log size = 100000 domain logons = No wins support = Yes wins proxy = No client ntlmv2 auth = Yes lanman auth = Yes ntlm auth = Yes dns proxy = Yes time server = Yes security = user encrypt passwords = Yes obey pam restrictions = Yes ldap password sync = Yes unix password sync = Yes passdb backend = ldapsam:"ldap://192.168.78.206" ldap ssl = off ldap admin dn = uid=root,ou=Users,dc=business,dc=intranet ldap suffix = ldap group suffix = ou=Groups ldap user suffix = ou=Users ldap machine suffix = ou=Computers ldap idmap suffix = ou=Idmap ldap delete dn = Yes add user script = /usr/sbin/smbldap-useradd -m "%u" delete user script = /usr/sbin/smbldap-userdel "%u" add group script = /usr/sbin/smbldap-groupadd -p "%g" delete group script = /usr/sbin/smbldap-groupdel "%g" add user to group script = /usr/sbin/smbldap-groupmod -m "%u" "%g" delete user from group script = /usr/sbin/smbldap-groupmod -x "%u" "%g" set primary group script = /usr/sbin/smbldap-usermod -g "%g" "%u" add machine script = /usr/sbin/smbldap-useradd -W -t5 "%u" idmap backend = ldap:"ldap://192.168.78.206" idmap uid = 16777216-33554431 idmap gid = 16777216-33554431 load printers = No printcap name = /dev/null map acl inherit = Yes map untrusted to domain = Yes enable privileges = Yes veto files = /lost+found/ /publicftp/ So \\SAMBASERVER says it couldn't find my user, but I can see the user with getent passwd. What can I do so that SAMBASERVER sees and authenticates my user? Thanks in advance!
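    A hedged first check, prompted by the "Couldn't find user 'lucas' in passdb" line above: getent passwd going through nsswitch only proves the POSIX side; the ldapsam lookup additionally needs sambaSamAccount attributes on the user's LDAP entry and the LDAP admin password stored in secrets.tdb. Both are quick to verify:

        # does samba's own passdb (ldapsam) see the user, with samba attributes?
        pdbedit -Lv lucas
        # (re)store the password for the ldap admin dn that samba binds with
        smbpasswd -w 'the-ldap-admin-password'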

    Read the article

  • dns server bind is not working

    - by milad
    I just installed bind on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { listen-on port 53 { any; }; listen-on-v6 port 53 { ::1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; zone "mydomain.com"{ type master; file "/var/named/data/named.mydomain.com"; allow-update { none; }; }; And the content of "/var/named/data/named.mydomain.com": $TTL 38400 mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. ( 2012101201 ; serial number YYMMDDNN 28800 ; Refresh 7200 ; Retry 864000 ; Expire 38400 ; Min TTL ) mydomain.com. IN A 1.2.3.4 www IN A 1.2.3.4 ns1.mydomain.com. IN A 1.2.3.4 ns2.mydomain.com. IN A 1.2.3.4 mydomain.com. IN NS ns1.mydomain.com. mydomain.com. IN NS ns2.mydomain.com. And I'm sure the named service is running: [root@server ~]# service named status version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 CPUs found: 8 worker threads: 8 number of zones: 20 debug level: 0 xfers running: 0 xfers deferred: 0 soa queries in progress: 0 query logging is OFF recursive clients: 0/0/1000 tcp clients: 0/100 server is up and running named (pid 26299) is running... Thanks for your answers. I know that ping is not the job of bind; I use it just to check whether the domain is pointed at the host or not (ping is open on my server, as I get a reply when pinging the IP). I use network-tools.com to ping the domain. Here is the output of the dig utility: dig mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3 <<>> mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 6806 ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;mydomain.com. IN A ;; Query time: 321 msec ;; SERVER: 5.6.7.8#53(5.6.7.8)##note that 5.6.7.8 is my idc dns ip ;; WHEN: Sun Oct 14 23:53:47 2012
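    The dig output above is a SERVFAIL from the IDC resolver, not from the new server, so a hedged next step is to split the two failure modes: is named answering at all, and does the registry delegation point at it (1.2.3.4 standing in for the real address, as above)?

        # ask the new BIND server directly, bypassing the IDC resolver
        dig @1.2.3.4 mydomain.com A +norecurse
        # walk the delegation from the root to see which NS records the registry publishes
        dig mydomain.com NS +trace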

    Read the article

  • php-fpm + persistent sockets = 502 bad gateway

    - by leeoniya
    Put on your reading glasses - this will be a long-ish one. First, what I'm doing. I'm building a web-app interface for some particularly slow tcp devices. Opening a socket to them takes 200ms and an fwrite/fread cycle takes another 300ms. To reduce the need for both of these actions on each request, I'm opening a persistent tcp socket which reduces the response time by the aforementioned 200ms. I was hoping PHP-FPM would share the persistent connections between requests from different clients (and indeed it does!), but there are some issues which I haven't been able to resolve after 2 days of interneting, reading logs and modifying settings. I have somewhat narrowed it down though. Setup: Ubuntu 13.04 x64 Server (fully updated) on Linode PHP 5.5.0-6~raring+1 (fpm-fcgi) nginx/1.5.2 Relevant config: nginx worker_processes 4; php-fpm/pool.d pm = dynamic pm.max_children = 2 pm.start_servers = 2 pm.min_spare_servers = 2 Let's go from coarse to fine detail of what happens. After a fresh start I have 4x nginx processes and 2x php5-fpm processes waiting to handle requests. Then I send requests every couple seconds to the script. The first takes a while to open the socket connection and returns with the data in about 500ms, the second returns data in 300ms (yay it's re-using the socket), the third also succeeds in about 300ms, the fourth request = 502 Bad Gateway, same with the 5th. Sixth request once again returns data, except now it took 500ms again. The process repeats for several cycles after which every 4 requests result in 2x 502 Bad Gateways and 2x 500ms Data responses. If I double all the fpm pool values and have 4x php-fpm processes running, the cycle settles in with 4x successful 500ms responses followed by 4x Bad Gateway errors. If I don't use persistent sockets, this issue goes away but then every request is 500ms. What I suspect is happening is the persistent socket keeps each php-fpm process from idling and ties it up, so the next one gets chosen until none are left and as they error out, maybe they are restarted and become available on the next round-robin loop, but the socket dies with the process. I haven't yet checked the 'slowlog', but the nginx error log shows lots of this: *188 recv() failed (104: Connection reset by peer) while reading response header from upstream, client:... All the suggestions on the internet regarding fixing nginx/php-fpm/502 bad gateway relate to high load or fcgi_pass misconfiguration. This is not the case here. Increasing buffers/sizes, changing timeouts, switching from unix socket to tcp socket for fcgi_pass, upping connection limits on the system....none of this stuff applies here. I've had some other success with setting pm = ondemand rather than dynamic, but as soon as the initial fpm-process gets killed off after idling, the persistent socket is gone for all subsequent php-fpm spawns. For the php script, I'm using stream_socket_client() with a STREAM_CLIENT_PERSISTENT flag. A while/stream_select() loop to detect socket data and fread($sock, 4096) to grab the data. I don't call fclose() obviously. If anyone has some additional questions or advice on how to get a persistent socket without tying up the php-fpm processes beyond the request completion, or maybe some other things to try, I'd appreciate it.
some useful links: Nginx + php-fpm - recv() error Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server) Nginx + PHP-FPM "error 104 Connection reset by peer" causes occasional duplicate posts http://www.linuxquestions.org/questions/programming-9/php-pfsockopen-552084/ http://stackoverflow.com/questions/14268018/concurrent-use-of-a-persistent-php-socket http://devzone.zend.com/303/extension-writing-part-i-introduction-to-php-and-zend/#Heading3 http://stackoverflow.com/questions/242316/how-to-keep-a-php-stream-socket-alive http://php.net/manual/en/install.fpm.configuration.php https://www.google.com/search?q=recv%28%29+failed+%28104:+Connection+reset+by+peer%29+while+reading+response+header+from+upstream+%22502%22&ei=mC1XUrm7F4WQyAHbv4H4AQ&start=10&sa=N&biw=1920&bih=953&dpr=1
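    For concreteness, a minimal sketch of the persistent-socket pattern described above, with host, port and framing as placeholders; the handle survives inside the php-fpm worker that opened it, which is exactly why each worker ends up owning one device connection:

        <?php
        // one persistent TCP connection per php-fpm worker (placeholders throughout)
        $sock = stream_socket_client(
            'tcp://192.0.2.10:4000',    // hypothetical device address
            $errno, $errstr, 2,
            STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT
        );
        if ($sock === false) {
            http_response_code(502);
            exit("connect failed: $errstr ($errno)");
        }
        fwrite($sock, "STATUS\r\n");    // hypothetical device command
        $read = [$sock]; $write = null; $except = null;
        if (stream_select($read, $write, $except, 1)) {
            echo fread($sock, 4096);
        }
        // no fclose(): the handle stays cached for this worker's next request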

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set up a samba server that seems to work; however, all shares are seemingly exported read-only. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectively. I noted that domain accounts did NOT include a domain and they are in German (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the linux box into the AD domain. After (manually) creating a home directory on the linux box I can log in using the Administrator user with the password taken from AD.
    Now I run samba with the following setup: [global] netbios name = LX realm = CUSTOMER.LOCAL workgroup = CUSTOMER security = ADS encrypt passwords = yes password server = 192.168.20.244 #IP des Domain Controllers os level = 0 socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384 idmap uid = 10000-20000 idmap gid = 10000-20000 winbind enum users = Yes winbind enum groups = Yes preferred master = no winbind separator = + dns proxy = no wins proxy = no # client NTLMv2 auth = Yes log level = 2 logfile = /var/log/samba/log.smbd.%U template homedir = /home/%U template shell = /bin/bash [export] path = /mnt/sdc1/export read only = No public = Yes Currently I don't care whether export is exported to everyone or just one user; I want to see somebody WRITING to that directory before I start fiddling with the authentication settings (who may access it). As mentioned, accessing the share from smbclient results in the NT_STATUS_MEDIA_WRITE_PROTECTED error. Accessing it from windows shows ACLs that look correct (the user may write) - but it does not work: I can only read files, not write them. The directory to be exported looks like this: root@lx:/etc/pam.d# ls -ld /mnt/ drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/ drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/ root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/ drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/ root@lx:/etc/pam.d# getfacl /mnt/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/ # owner: root # group: root user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/ # owner: froh # group: froh user::rwx group::r-x other::r-x root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/ getfacl: Entferne führende '/' von absoluten Pfadnamen # file: mnt/sdc1/export/ # owner: administrator # group: domänen-admins user::rwx group::rwx group:domänen-admins:rwx mask::rwx other::rwx default:user::rwx default:group::rwx default:group:domänen-admins:rwx default:mask::rwx default:other::rwx My, oh my, what am I overlooking? What am I too blind to see?
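    A hedged check before fiddling with the auth settings: confirm which unix identity the SMB session actually maps to and whether that identity can write at the filesystem level; if the winbind-mapped account differs from the administrator owner above, the generous ACL never comes into play:

        # who is connected, and as which unix user and group?
        smbstatus
        # can that identity create a file where samba points?
        sudo -u administrator touch /mnt/sdc1/export/probe && ls -l /mnt/sdc1/export/probe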

    Read the article

  • Samba doesn't require a password on xbmc but does on ubuntu

    - by Chris
    I have samba set up on a fedora 13 machine, and I use it to share with my xbmc client in the family room. When I set this up, no password or anything was required; I merely entered paths such as smb://<host>/<share> and all worked. Now, on my ubuntu 10.04 machine, when I try to access the same shares, for example through smbmount, I receive an error. smbmount //media/Music ~/Music/ # media is in my /etc/hosts and resolves to # correct IP address for the machine I receive "operation not permitted" after pressing enter when it prompts for a password. Here is my entry from /etc/samba/smb.conf: [global] workgroup = WORKGROUP server string = Samba Server Version %v # log files split per-machine: log file = /var/log/samba/log.%m # maximum size of 50KB per log file, then rotate: max log size = 50 security = user passdb backend = tdbsam ; security = domain ; passdb backend = tdbsam ; realm = MY_REALM ; password server = <NT-Server-Name> ; security = user ; passdb backend = tdbsam ; domain master = yes ; domain logons = yes ; logon script = %m.bat ; logon script = %u.bat ; logon path = \\%L\Profiles\%u ; logon path = ; add user script = /usr/sbin/useradd "%u" -n -g users ; add group script = /usr/sbin/groupadd "%g" ; add machine script = /usr/sbin/useradd -n -c "Workstation (%u)" -M -d /nohome -s /bin/false "%u" ; delete user script = /usr/sbin/userdel "%u" ; delete user from group script = /usr/sbin/userdel "%u" "%g" ; delete group script = /usr/sbin/groupdel "%g" ; local master = no ; os level = 33 ; preferred master = yes ; wins support = yes ; wins server = w.x.y.z ; wins proxy = yes ; dns proxy = yes load printers = yes cups options = raw ; printcap name = /etc/printcap # obtain a list of printers automatically on UNIX System V systems: ; printcap name = lpstat ; printing = cups ; map archive = no ; map hidden = no ; map read only = no ; map system = no ; store dos attributes = yes #============================ Share Definitions ============================== [homes] comment = Home Directories browseable = no writable = yes ; valid users = %S ; valid users = MYDOMAIN\%S # Un-comment the following and create the netlogon directory for Domain Logons: ; [netlogon] ; comment = Network Logon Service ; path = /var/lib/samba/netlogon ; guest ok = yes ; writable = no ; share modes = no # Un-comment the following to provide a specific roving profile share. # The default is to use the user's home directory: ; [Profiles] ; path = /var/lib/samba/profiles ; browseable = no ; guest ok = yes # A publicly accessible directory that is read only, except for users in the # "staff" group (which have write permissions): ; [public] ; comment = Public Stuff ; path = /home/samba ; public = yes ; writable = yes ; printable = no ; write list = +staff [tv] comment = TV path = /media/Isos/tv public = yes writable = yes printable = no write list = +media [music] comment = Music path = /media/Storage/music/ public = yes writable = yes printable = no write list = +media [pictures] comment = Pictures path = /media/Storage/pictures public = yes writable = yes printable = no write list = +media
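    Two hedged differential tests, with the mount point assumed: mount.cifs generally needs root, which on its own produces "operation not permitted", and the shares above allow guests, so try both explicitly:

        # mounting a cifs share normally requires root; try the public share as guest
        sudo mount -t cifs //media/music /home/chris/Music -o guest
        # or test access without mounting at all (anonymous, no password)
        smbclient -N -L //media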

    Read the article

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set a samba server that seems to work, all shares are seemingly exported as readonly, however. The machine is called "lx". When I'm on lx I can run the following command: froh@lx:~$ smbclient //lx/export -UAdministrator Enter Administrator's password: Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4] smb: \> mkdir wrzlbrmpf NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf smb: \> ls . D 0 Fri Dec 3 19:04:20 2010 .. D 0 Sun Nov 28 01:32:37 2010 zork D 0 Fri Dec 3 18:53:33 2010 bar D 0 Sun Nov 28 23:52:43 2010 ork 1 Fri Dec 3 18:53:02 2010 foo 1 Sun Nov 28 23:52:41 2010 gaga D 0 Fri Dec 3 19:04:20 2010 How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got kerberos working with the following krb5.conf file: [libdefaults] ticket_lifetime = 24000 clock_skew = 300 default_realm = CUSTOMER.LOCAL [realms] CUSTOMER.LOCAL = { kdc = SB4.customer.local:88 admin_server = SB4.customer.local:464 default_domain = CUSTOMER.LOCAL } [domain_realm] .customer.local = CUSTOMER.LOCAL customer.local = CUSTOMER.LOCAL #[login] # krb4_convert = true # krb4_get_tickets = false I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works: root@lx:~# net ads testjoin Join is OK root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD' plaintext password authentication succeeded challenge/response password authentication succeeded wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectiveley. I noted that domain accounts did NOT include a domain and they are in german (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have: root@lx:/etc/pam.d# cat samba @include common-auth @include common-account @include common-session-noninteractive root@lx:/etc/pam.d# grep -ve '^#' common-auth auth [success=3 default=ignore] pam_krb5.so minimum_uid=1000 auth [success=2 default=ignore] pam_unix.so nullok_secure try_first_pass auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass auth requisite pam_deny.so auth required pam_permit.so root@lx:/etc/pam.d# grep -ve '^#' common-account account [success=2 new_authtok_reqd=done default=ignore] pam_unix.so account [success=1 new_authtok_reqd=done default=ignore] pam_winbind.so account requisite pam_deny.so account required pam_permit.so account required pam_krb5.so minimum_uid=1000 root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive session [default=1] pam_permit.so session requisite pam_deny.so session required pam_permit.so session optional pam_krb5.so minimum_uid=1000 session required pam_unix.so session optional pam_winbind.so At some point I joined the linux box into the AD domain. After (manually) creating a home directory on the linux box I can log in using the Adminstrator user with the password taken from AD. 
    Now I run Samba with the following setup:

    [global]
    netbios name = LX
    realm = CUSTOMER.LOCAL
    workgroup = CUSTOMER
    security = ADS
    encrypt passwords = yes
    password server = 192.168.20.244   # IP des Domain Controllers
    os level = 0
    socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384
    idmap uid = 10000-20000
    idmap gid = 10000-20000
    winbind enum users = Yes
    winbind enum groups = Yes
    preferred master = no
    winbind separator = +
    dns proxy = no
    wins proxy = no
    # client NTLMv2 auth = Yes
    log level = 2
    logfile = /var/log/samba/log.smbd.%U
    template homedir = /home/%U
    template shell = /bin/bash

    [export]
    path = /mnt/sdc1/export
    read only = No
    public = Yes

    Currently I don't care whether export is exported to everyone or just one user; I want to see somebody WRITING to that directory before I start fiddling with the authentication settings (who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED. Accessing it from Windows shows ACLs that look correct (the user may write), but it does not work: I can only read files, not write. The directory to be exported looks like this:

    root@lx:/etc/pam.d# ls -ld /mnt/
    drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/
    root@lx:/etc/pam.d# ls -ld /mnt/sdc1/
    drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/
    root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/
    drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/
    root@lx:/etc/pam.d# getfacl /mnt/
    getfacl: Entferne führende '/' von absoluten Pfadnamen
    # file: mnt/
    # owner: root
    # group: root
    user::rwx
    group::r-x
    other::r-x
    root@lx:/etc/pam.d# getfacl /mnt/sdc1/
    getfacl: Entferne führende '/' von absoluten Pfadnamen
    # file: mnt/sdc1/
    # owner: froh
    # group: froh
    user::rwx
    group::r-x
    other::r-x
    root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/
    getfacl: Entferne führende '/' von absoluten Pfadnamen
    # file: mnt/sdc1/export/
    # owner: administrator
    # group: domänen-admins
    user::rwx
    group::rwx
    group:domänen-admins:rwx
    mask::rwx
    other::rwx
    default:user::rwx
    default:group::rwx
    default:group:domänen-admins:rwx
    default:mask::rwx
    default:other::rwx

    My, oh my, what am I overlooking? What am I too blind to see?
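
    One hedged diagnostic to try before touching the authentication stack (a sketch; the share and paths are the ones from the question): NT_STATUS_MEDIA_WRITE_PROTECTED usually means Samba considers the share itself read-only, independent of the filesystem ACLs, so it is worth confirming the effective share settings and whether the filesystem is even writable:

    # is the filesystem mounted read-only?
    mount | grep sdc1

    # what does Samba think the effective [export] settings are,
    # after all defaults and includes are applied?
    testparm -s | grep -A 5 '^\[export\]'

    # can the owning user write through the filesystem at all?
    su - administrator -c 'touch /mnt/sdc1/export/writetest'

    If mount reports ro, remounting read-write (mount -o remount,rw /mnt/sdc1) is the real fix; if testparm reports read only = Yes for [export], some other configuration fragment is overriding the share definition.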

    Read the article

  • DNS server BIND is not working [closed]

    - by user1742080
    I just installed BIND on RHEL 6 and pointed a domain to that server, but when I ping the domain it returns error 1214. Here is my named.conf:

    //
    // named.conf
    //
    // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
    // server as a caching only nameserver (as a localhost DNS resolver only).
    //
    // See /usr/share/doc/bind*/sample/ for example named configuration files.
    //

    options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { any; };
        recursion yes;

        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";
    };

    logging {
        channel default_debug {
            file "data/named.run";
            severity dynamic;
        };
    };

    zone "." IN {
        type hint;
        file "named.ca";
    };

    include "/etc/named.rfc1912.zones";
    include "/etc/named.root.key";

    zone "mydomain.com" {
        type master;
        file "/var/named/data/named.mydomain.com";
        allow-update { none; };
    };

    And here is the content of /var/named/data/named.mydomain.com:

    $TTL 38400

    mydomain.com. IN SOA ns1.mydomain.com. milad.yahoo.com. (
        2012101201 ; serial number YYMMDDNN
        28800      ; Refresh
        7200       ; Retry
        864000     ; Expire
        38400      ; Min TTL
    )

    mydomain.com.     IN A  1.2.3.4
    www               IN A  1.2.3.4
    ns1.mydomain.com. IN A  1.2.3.4
    ns2.mydomain.com. IN A  1.2.3.4
    mydomain.com.     IN NS ns1.mydomain.com.
    mydomain.com.     IN NS ns2.mydomain.com.

    And I'm sure the named service is running:

    [root@server ~]# service named status
    version: 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.3
    CPUs found: 8
    worker threads: 8
    number of zones: 20
    debug level: 0
    xfers running: 0
    xfers deferred: 0
    soa queries in progress: 0
    query logging is OFF
    recursive clients: 0/0/1000
    tcp clients: 0/100
    server is up and running
    named (pid 26299) is running...
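
    Two hedged checks worth running first (a sketch; the paths are the ones from the question). BIND ships syntax checkers that catch most config and zone mistakes, and querying the server directly with dig separates "the zone is broken" from "the outside world cannot reach the server":

    # check the main config for syntax errors (prints nothing when OK)
    named-checkconf /etc/named.conf

    # check that the zone loads and its records parse
    named-checkzone mydomain.com /var/named/data/named.mydomain.com

    # reload, then query the server directly, bypassing /etc/resolv.conf
    rndc reload
    dig @127.0.0.1 mydomain.com A +short

    If dig against 127.0.0.1 returns 1.2.3.4 but external resolvers do not, the zone itself is fine and the problem is upstream: port 53 blocked by iptables, or the registrar's NS records not actually delegating the domain to this server.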

    Read the article

  • Custom ASP.NET Routing to an HttpHandler

    - by Rick Strahl
    As of version 4.0, ASP.NET natively supports routing via the now built-in System.Web.Routing namespace. Routing features are automatically integrated into the HttpRuntime via a few custom interfaces.

    New Web Forms Routing Support

    In ASP.NET 4.0 there are a host of improvements, including routing support baked into Web Forms via a RouteData property available on the Page class and a RouteCollection.MapPageRoute() method that makes it easy to route to Web Forms. Mapping ASP.NET page routes is as simple as setting up the routes with MapPageRoute:

    protected void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    void RegisterRoutes(RouteCollection routes)
    {
        routes.MapPageRoute("StockQuote", "StockQuote/{symbol}", "StockQuote.aspx");
        routes.MapPageRoute("StockQuotes", "StockQuotes/{symbolList}", "StockQuotes.aspx");
    }

    In the page you can then use the new Page class RouteData property to retrieve the dynamic route data information:

    public partial class StockQuote1 : System.Web.UI.Page
    {
        protected StockQuote Quote = null;

        protected void Page_Load(object sender, EventArgs e)
        {
            string symbol = RouteData.Values["symbol"] as string;
            StockServer server = new StockServer();
            Quote = server.GetStockQuote(symbol);
            // display stock data in Page View
        }
    }

    Simple, quick, and it doesn't require much explanation. If you're using Web Forms, most of your routing needs should be served just fine by this simple mechanism. Kudos to the ASP.NET team for putting this in the box and making it easy!

    How Routing Works

    Handling routing in ASP.NET involves these steps:

    1. Registering routes
    2. Creating a custom RouteHandler to retrieve an HttpHandler
    3. Attaching RouteData to your HttpHandler
    4. Picking up the route information in your request code

    Registering routes makes ASP.NET aware of the routes you want to handle via the static RouteTable.Routes collection. You basically add routes to this collection to let ASP.NET know which URL patterns it should watch for. You typically hook up routes in a RegisterRoutes method that fires in Application_Start, as I did in the example above, to ensure routes are added only once when the application first starts up.

    When you create a route, you pass in a RouteHandler instance which ASP.NET caches and reuses as routes are matched. Once registered, ASP.NET monitors the routes, and if a match is found just prior to HttpHandler instantiation, ASP.NET uses the RouteHandler registered for the route and calls GetHandler() on it to retrieve an HttpHandler instance. The RouteHandler.GetHandler() method is responsible for creating an instance of an HttpHandler that is to handle the request and, if necessary, for assigning any additional custom data to the handler. At minimum you probably want to pass the RouteData to the handler so the handler can identify the request based on the route data available. To do this you typically add a RouteData property to your handler and then assign the property from the RouteHandler's request context. This is essentially how Page.RouteData comes into being, and this approach should work well for any custom handler implementation that requires RouteData.

    It's a shame that ASP.NET doesn't have a top-level intrinsic object accessible off the HttpContext object to provide route data more generically, but since RouteData is directly tied to HttpHandlers, and not all handlers support it, that might cause some confusion about when it's actually available.
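
    To make that last pattern concrete, here is a minimal sketch of a handler exposing such a RouteData property. The type name and the stock-symbol route are illustrative (borrowed from the MapPageRoute example above), not part of ASP.NET or of the toolkit discussed below:

    using System.Web;
    using System.Web.Routing;

    public class StockQuoteHandler : IHttpHandler
    {
        // Assigned by the custom IRouteHandler when it creates this instance
        public RouteData RouteData { get; set; }

        public bool IsReusable
        {
            get { return false; }
        }

        public void ProcessRequest(HttpContext context)
        {
            // Pull the {symbol} segment out of the matched route, if any
            string symbol = RouteData != null
                                ? RouteData.Values["symbol"] as string
                                : null;

            context.Response.ContentType = "text/plain";
            context.Response.Write("Requested symbol: " + symbol);
        }
    }

    The next sections show the other half of the pattern: the custom RouteHandler that creates the handler instance and assigns this property.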
    Bottom line is that if you want to hold on to RouteData you have to assign it to a custom property of the handler, or else pass it to the handler via the Context.Items[] collection, where it can be retrieved on an as-needed basis.

    It's important to understand that routing is hooked up via RouteHandlers that are responsible for loading HttpHandler instances. RouteHandlers are invoked for every request that matches a route, and through this RouteHandler instance the handler gains access to the current RouteData. Because of this, routing is really tied to HttpHandlers and not available prior to handler instantiation, which is pretty late in the HttpRuntime's request pipeline. In other words, routing works with handlers, but not earlier in the pipeline within modules. Specifically, ASP.NET calls RouteHandler.GetHandler() from the PostResolveRequestCache HttpRuntime pipeline event, which fires just before handler resolution. [Screenshot in the original post: the call stack at the beginning of the GetHandler() call.]

    Non-Page Routing – You need to build custom RouteHandlers

    If you need to route to a custom HttpHandler or other non-Page (and non-MVC) endpoint in the HttpRuntime, there is no generic mapping support available. You need to create a custom RouteHandler that can manage creating an instance of an HttpHandler that is fired in response to a routed request. Depending on what you are doing, this process can be simple or fairly involved, as your code is responsible for deciding, based on the route data provided, which handler to instantiate and, more importantly, how to pass the route data on to the handler. Luckily, creating a RouteHandler is easy by implementing the IRouteHandler interface, which has only a single GetHttpHandler(RequestContext context) method. In this method you can pick up the requestContext.RouteData, instantiate the HttpHandler of choice, and assign the RouteData to it. Then pass back the handler and you're done.

    Here's a simple example of a GetHttpHandler() method that dynamically creates a handler based on a passed-in handler type:

    /// <summary>
    /// Retrieves an Http Handler based on the type specified in the constructor
    /// </summary>
    /// <param name="requestContext"></param>
    /// <returns></returns>
    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

        // If we're dealing with a Callback Handler
        // pass the RouteData for this route to the Handler
        if (handler is CallbackHandler)
            ((CallbackHandler)handler).RouteData = requestContext.RouteData;

        return handler;
    }

    Note that this code checks for a specific type of handler and, if it matches, assigns the RouteData to it. This is optional, but it's quite a common scenario if you want to work with RouteData.

    If the handler you need to instantiate isn't under your control but you still need to pass RouteData to handler code, an alternative is to pass the RouteData via the HttpContext.Items collection:

    IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
    {
        IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;
        requestContext.HttpContext.Items["RouteData"] = requestContext.RouteData;
        return handler;
    }

    The code in the handler implementation can then pick up the RouteData from the context collection as needed:

    RouteData routeData = HttpContext.Current.Items["RouteData"] as RouteData;

    This isn't as clean as having an explicit RouteData property, but it does have the advantage that the route data is visible anywhere in the handler's code chain. It's definitely preferable to create a custom property on your handler, but the Context work-around works in a pinch when you don't own the handler code and have dynamic code executing as part of the handler execution.

    An Example of a Custom RouteHandler: Attribute-Based Route Implementation

    In this post I'm going to discuss a custom routing implementation I built for my CallbackHandler class in the West Wind Web & Ajax Toolkit. CallbackHandler can very easily be used for creating AJAX, REST and POX requests following RPC-style method mapping. You can pass parameters via URL query string, POST data or raw data structures, and you can retrieve results as JSON, XML or raw string/binary data. It's a quick and easy way to build service interfaces with no fuss. As a quick review, here's how CallbackHandler works:

    - You create an Http Handler that derives from CallbackHandler
    - You implement methods that have a [CallbackMethod] attribute

    and that's it. Here's an example of a CallbackHandler implementation in an ashx.cs based handler:

    // RestService.ashx.cs
    public class RestService : CallbackHandler
    {
        [CallbackMethod]
        public StockQuote GetStockQuote(string symbol)
        {
            StockServer server = new StockServer();
            return server.GetStockQuote(symbol);
        }

        [CallbackMethod]
        public StockQuote[] GetStockQuotes(string symbolList)
        {
            StockServer server = new StockServer();
            string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
            return server.GetStockQuotes(symbols);
        }
    }

    CallbackHandler makes it super easy to create a method on the server, pass data to it via POST, query string or raw JSON/XML data, and then retrieve the results easily back in various formats. This works wonderfully and I've used these tools in many projects for myself and with clients. But one thing that has been missing is the ability to create clean URLs. Typical URLs looked like this:

    http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuote&symbol=msft
    http://www.west-wind.com/WestwindWebToolkit/samples/Rest/StockService.ashx?Method=GetStockQuotes&symbolList=msft,intc,gld,slw,mwe&format=xml

    which works and is clear enough, but is also clearly very ugly. It would be much nicer if URLs could look like this:

    http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuote/msft
    http://www.west-wind.com/WestwindWebtoolkit/Samples/StockQuotes/msft,intc,gld,slw?format=xml

    (The virtual root in this sample is WestwindWebToolkit/Samples and StockQuote/{symbol} is the route. If you use Firefox, try the JSONView plug-in to make it easier to view JSON content; for IE a registry hack can display JSON as raw text.)

    So, taking a clue from the WCF REST tools that use RouteUrls, I set out to create a way to specify RouteUrls for each of the endpoints.
    The change made basically allows changing the above to:

    [CallbackMethod(RouteUrl="RestService/StockQuote/{symbol}")]
    public StockQuote GetStockQuote(string symbol)
    {
        StockServer server = new StockServer();
        return server.GetStockQuote(symbol);
    }

    [CallbackMethod(RouteUrl = "RestService/StockQuotes/{symbolList}")]
    public StockQuote[] GetStockQuotes(string symbolList)
    {
        StockServer server = new StockServer();
        string[] symbols = symbolList.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries);
        return server.GetStockQuotes(symbols);
    }

    where a RouteUrl is specified as part of the CallbackMethod attribute. With these changes in place I can now get URLs like the second set shown earlier. So how does that work? Let's find out.

    How to Create Custom Routes

    As mentioned earlier, routing is made up of several steps:

    1. Creating a custom RouteHandler to create HttpHandler instances
    2. Mapping the actual routes to the RouteHandler
    3. Retrieving the RouteData and actually doing something useful with it in the HttpHandler

    In the CallbackHandler routing example above this works out to something like this:

    1. Create a custom RouteHandler that includes a property to track the method to call
    2. Set up the routes using reflection against the class, looking for any RouteUrls in the CallbackMethod attribute
    3. Add a RouteData property to the CallbackHandler so we can access the RouteData in the code of the handler

    Creating a Custom Route Handler

    To make the above work I created a custom RouteHandler class that includes the actual IRouteHandler implementation as well as a generic static method to automatically register all routes marked with the [CallbackMethod(RouteUrl="…")] attribute. Here's the code:

    /// <summary>
    /// Route handler that can create instances of CallbackHandler derived
    /// callback classes. The route handler tracks the method name and
    /// creates an instance of the service in a predictable manner
    /// </summary>
    /// <typeparam name="TCallbackHandler">CallbackHandler type</typeparam>
    public class CallbackHandlerRouteHandler : IRouteHandler
    {
        /// <summary>
        /// Method name that is to be called on this route.
        /// Set by the automatically generated RegisterRoutes
        /// invocation.
        /// </summary>
        public string MethodName { get; set; }

        /// <summary>
        /// The type of the handler we're going to instantiate.
        /// Needed so we can semi-generically instantiate the
        /// handler and call the method on it.
        /// </summary>
        public Type CallbackHandlerType { get; set; }

        /// <summary>
        /// Constructor to pass in the two required components we
        /// need to create an instance of our handler.
        /// </summary>
        /// <param name="methodName"></param>
        /// <param name="callbackHandlerType"></param>
        public CallbackHandlerRouteHandler(string methodName, Type callbackHandlerType)
        {
            MethodName = methodName;
            CallbackHandlerType = callbackHandlerType;
        }

        /// <summary>
        /// Retrieves an Http Handler based on the type specified in the constructor
        /// </summary>
        /// <param name="requestContext"></param>
        /// <returns></returns>
        IHttpHandler IRouteHandler.GetHttpHandler(RequestContext requestContext)
        {
            IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

            // If we're dealing with a Callback Handler
            // pass the RouteData for this route to the Handler
            if (handler is CallbackHandler)
                ((CallbackHandler)handler).RouteData = requestContext.RouteData;

            return handler;
        }

        /// <summary>
        /// Generic method to register all routes from a CallbackHandler
        /// that have RouteUrls defined on the [CallbackMethod] attribute
        /// </summary>
        /// <typeparam name="TCallbackHandler">CallbackHandler Type</typeparam>
        /// <param name="routes"></param>
        public static void RegisterRoutes<TCallbackHandler>(RouteCollection routes)
        {
            // find all methods
            var methods = typeof(TCallbackHandler).GetMethods(BindingFlags.Instance | BindingFlags.Public);
            foreach (var method in methods)
            {
                var attrs = method.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
                if (attrs.Length < 1)
                    continue;

                CallbackMethodAttribute attr = attrs[0] as CallbackMethodAttribute;
                if (string.IsNullOrEmpty(attr.RouteUrl))
                    continue;

                // Add the route
                routes.Add(method.Name,
                           new Route(attr.RouteUrl,
                                     new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));
            }
        }
    }

    The RouteHandler implements IRouteHandler, and its responsibility via the GetHandler method is to create an HttpHandler based on the route data. When ASP.NET calls GetHandler it passes a requestContext parameter which includes a requestContext.RouteData property. This parameter holds the current request's route data as well as an instance of the current RouteHandler. If you look at GetHttpHandler() you can see that the code creates an instance of the handler we are interested in and then sets the RouteData property on the handler. This is how you can pass the current request's RouteData to the handler. The RouteData object also has a RouteData.RouteHandler property that is available to the handler later, which is useful in order to get additional information about the current route. In our case the RouteHandler includes a MethodName property that identifies the method to execute in the handler; since that value no longer comes from the URL, we need to figure out the method name some other way. The method name is mapped explicitly when the RouteHandler is created, and the static method that auto-registers all CallbackMethods with RouteUrls sets the method name when it creates the routes while reflecting over the methods (more on this in a minute).

    The important point here is that you can attach additional properties to the RouteHandler, and you can then access the RouteHandler and its properties later in the handler to pick up these custom values. This is a crucial feature in that the RouteHandler serves in passing additional context to the handler so it knows what actions to perform.

    The automatic route registration is handled by the static RegisterRoutes<TCallbackHandler> method. This method is generic and totally reusable for any CallbackHandler type handler.
    To register a CallbackHandler and any RouteUrls it has defined, you simply use code like this in Application_Start (or other application startup code):

    protected void Application_Start(object sender, EventArgs e)
    {
        // Register Routes for RestService
        CallbackHandlerRouteHandler.RegisterRoutes<RestService>(RouteTable.Routes);
    }

    If you have multiple CallbackHandler style services, you can make one RegisterRoutes call for each of the service types. RegisterRoutes internally uses reflection to run through all the methods of the handler, looking for CallbackMethod attributes and whether a RouteUrl is specified. If one is found, a new instance of a CallbackHandlerRouteHandler is created, and the name of the method and the type are set:

    routes.Add(method.Name,
               new Route(attr.RouteUrl,
                         new CallbackHandlerRouteHandler(method.Name, typeof(TCallbackHandler))));

    While the routing with CallbackHandlerRouteHandler is set up automatically for all methods that use the RouteUrl attribute, you can also use code to hook up those routes manually and skip using the attribute. The code for this is straightforward and just requires that you manually map each individual route to each method you want routed:

    protected void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }

    void RegisterRoutes(RouteCollection routes)
    {
        routes.Add("StockQuote Route",
                   new Route("StockQuote/{symbol}",
                             new CallbackHandlerRouteHandler("GetStockQuote", typeof(RestService))));

        routes.Add("StockQuotes Route",
                   new Route("StockQuotes/{symbolList}",
                             new CallbackHandlerRouteHandler("GetStockQuotes", typeof(RestService))));
    }

    I think it's clearly easier to have CallbackHandlerRouteHandler.RegisterRoutes() do this automatically for you based on RouteUrl attributes, but some people have a real aversion to attaching logic via attributes. Just realize that the option to manually create your routes is available as well.

    Using the RouteData in the Handler

    A RouteHandler's responsibility is to create an HttpHandler and, as mentioned earlier, IHttpHandler natively doesn't have any support for RouteData. In order to utilize RouteData in your handler code you have to pass the RouteData to the handler. In my CallbackHandlerRouteHandler, when it creates the HttpHandler instance, it assigns the custom RouteData property on the handler:

    IHttpHandler handler = Activator.CreateInstance(CallbackHandlerType) as IHttpHandler;

    if (handler is CallbackHandler)
        ((CallbackHandler)handler).RouteData = requestContext.RouteData;

    return handler;

    Again, this only works if you actually add a RouteData property to your handler explicitly, as I did in my CallbackHandler implementation:

    /// <summary>
    /// Optionally store RouteData on this handler
    /// so we can access it internally
    /// </summary>
    public RouteData RouteData { get; set; }

    The RouteHandler then sets this property when it creates the handler instance. Once you have the route data in your handler you can access route keys and values as well as the RouteHandler.
    Since my RouteHandler has a custom MethodName property, I can retrieve the method name from within the handler with code like this (this example is actually not in the handler itself; target is an instance passed to the processor):

    // check for Route Data method name
    if (target is CallbackHandler)
    {
        var routeData = ((CallbackHandler)target).RouteData;
        if (routeData != null)
            methodToCall = ((CallbackHandlerRouteHandler)routeData.RouteHandler).MethodName;
    }

    When I need to access the dynamic values in the route (symbol in StockQuote/{symbol}), I can retrieve them easily with the Values collection (RouteData.Values["symbol"]). In my CallbackHandler processing logic I'm basically looking for parameter names that match route parameters:

    // look for parameters in the route
    if (routeData != null)
    {
        string parmString = routeData.Values[parameter.Name] as string;
        adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
    }

    And with that we've come full circle. We've created a custom RouteHandler that passes the RouteData to the handler it creates, we've registered our routes to use the RouteHandler, and we've utilized the route data in our handler. For completeness' sake, here's the routine that executes a method call based on the parameters passed in; one of its options is to retrieve the inbound parameters off RouteData (as well as from POST data or query string parameters):

    internal object ExecuteMethod(string method, object target, string[] parameters,
                                  CallbackMethodParameterType paramType,
                                  ref CallbackMethodAttribute callbackMethodAttribute)
    {
        HttpRequest Request = HttpContext.Current.Request;
        object Result = null;

        // Stores parsed parameters (from string JSON or QueryString values)
        object[] adjustedParms = null;

        Type PageType = target.GetType();
        MethodInfo MI = PageType.GetMethod(method, BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);
        if (MI == null)
            throw new InvalidOperationException("Invalid Server Method.");

        object[] methods = MI.GetCustomAttributes(typeof(CallbackMethodAttribute), false);
        if (methods.Length < 1)
            throw new InvalidOperationException("Server method is not accessible due to missing CallbackMethod attribute");

        if (callbackMethodAttribute != null)
            callbackMethodAttribute = methods[0] as CallbackMethodAttribute;

        ParameterInfo[] parms = MI.GetParameters();

        JSONSerializer serializer = new JSONSerializer();

        RouteData routeData = null;
        if (target is CallbackHandler)
            routeData = ((CallbackHandler)target).RouteData;

        int parmCounter = 0;
        adjustedParms = new object[parms.Length];
        foreach (ParameterInfo parameter in parms)
        {
            // Retrieve parameters out of QueryString or POST buffer
            if (parameters == null)
            {
                // look for parameters in the route
                if (routeData != null)
                {
                    string parmString = routeData.Values[parameter.Name] as string;
                    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
                }
                // GET parameters are parsed as plain string values - no JSON encoding
                else if (HttpContext.Current.Request.HttpMethod == "GET")
                {
                    // Look up the parameter by name
                    string parmString = Request.QueryString[parameter.Name];
                    adjustedParms[parmCounter] = ReflectionUtils.StringToTypedValue(parmString, parameter.ParameterType);
                }
                // POST parameters are treated as methodParameters that are JSON encoded
                else if (paramType == CallbackMethodParameterType.Json)
                    adjustedParms[parmCounter] =
                        serializer.Deserialize(Request.Params["parm" + (parmCounter + 1).ToString()],
                                               parameter.ParameterType);
                else
                    adjustedParms[parmCounter] =
                        SerializationUtils.DeSerializeObject(Request.Params["parm" + (parmCounter + 1).ToString()],
                                                             parameter.ParameterType);
            }
            else if (paramType == CallbackMethodParameterType.Json)
                adjustedParms[parmCounter] = serializer.Deserialize(parameters[parmCounter], parameter.ParameterType);
            else
                adjustedParms[parmCounter] =
                    SerializationUtils.DeSerializeObject(parameters[parmCounter], parameter.ParameterType);

            parmCounter++;
        }

        Result = MI.Invoke(target, adjustedParms);

        return Result;
    }

    The code basically uses reflection to loop through all the parameters available on the method and tries to assign the parameters from RouteData, query string or POST variables. The parameters are converted into their appropriate types and then used to eventually make a reflection-based method call. What's sweet is that the RouteData retrieval is just another option for dealing with the inbound data in this scenario, and it adds exactly two lines of code plus the code to retrieve the MethodName shown previously: a seriously low-impact addition that adds a lot of extra value to this endpoint callback processing implementation.

    Debugging your Routes

    If you create a lot of routes it's easy to run into route conflicts where multiple routes have the same path and overlap with each other. This can be difficult to debug, especially if you are using automatically generated routes like the ones created by CallbackHandlerRouteHandler.RegisterRoutes. Luckily there's a tool that can help you out with this nicely. Phil Haack created a RouteDebugging tool you can download and add to your project. The easiest way to add it is via NuGet (Add Library Package from your project's References node), which adds a RouteDebug assembly to your project. Once installed you can easily debug your routes with this simple line of code, which needs to run at application startup:

    protected void Application_Start(object sender, EventArgs e)
    {
        CallbackHandlerRouteHandler.RegisterRoutes<StockService>(RouteTable.Routes);

        // Debug your routes
        RouteDebug.RouteDebugger.RewriteRoutesForTesting(RouteTable.Routes);
    }

    Any routed URL then displays a diagnostic page [screenshot in the original post] that shows you your current route data and all the routes that are mapped, along with a flag that displays which route was actually matched. This is useful: if you have any overlap of routes you will be able to see which routes are triggered, and the first one in the sequence wins. This tool has saved my ass on a few occasions, and with NuGet it's easy to add it to your project in a few seconds and then remove it when you're done.

    Routing Around

    Custom routing seems slightly complicated on first blush due to its disconnected components of RouteHandler, route registration and mapping of custom handlers. But once you understand the relationship between a RouteHandler, the RouteData and how to pass it to a handler, utilizing routing becomes a lot easier, as you can easily pass context from the registration to the RouteHandler and through to the HttpHandler. The most important thing to understand when building custom routing solutions is to figure out how to map URLs in such a way that the handler can figure out all the pieces it needs to process the request. This can be done via URL routing parameters and, as I did in my example, by passing additional context information as part of the RouteHandler instance that provides the proper execution context. In my case this 'context' was the method name, but it could be an actual static value like an enum identifying an operation or category in an application. Basically, user-supplied data comes in through the URL, and static application-internal data can be passed via RouteHandler property values. Routing can make your application URLs easier to read by non-techie types regardless of whether you're building service-type or REST applications, or full-on web interfaces. Routing in ASP.NET 4.0 makes it possible to create just about any extensionless URLs you can dream up with custom RouteHandlers.

    References

    - Sample Project: includes the sample CallbackHandler service discussed here along with compiled versions of the Westwind.Web and Westwind.Utilities assemblies (requires .NET 4.0/VS 2010)
    - West Wind Web Toolkit: includes the full implementation of CallbackHandler and the routing handler
    - West Wind Web Toolkit Source Code: contains the full source code to the Westwind.Web and Westwind.Utilities assemblies used in these samples, including the source described in this post (latest build in the Subversion repository)
    - CallbackHandler Source: the code relevant to this article in the Westwind.Web assembly
    - JSONView FireFox Plugin: a simple Firefox plugin to view JSON data natively in Firefox; for IE you can use a registry hack to display JSON as raw text

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in ASP.NET  AJAX  HTTP

    Read the article

  • Embedding ADF UI Components into OAF regions

    - by Juan Camilo Ruiz
    Having finished the two webcasts on ADF integration with Oracle E-Business Suite, Sara Woodhull, Principal Product Manager on the Oracle E-Business Suite Applications Technology team, and I are going to continue adding entries to the series on this topic, trying to cover as many use cases as possible. In this entry, Sara created an overview of how Oracle ADF pages can be embedded into an Oracle Application Framework region. This is a very interesting approach that will enable those of you who are exploring ADF as a technology stack to enhance some of the Oracle E-Business Suite flows and leverage your skills with Oracle Application Framework (OAF). In upcoming entries we will start unveiling the internals needed to achieve session sharing between the regions. Stay tuned for more entries and enjoy this new post.

    Document Scope

    This document only covers information that is specific to embedding an Oracle ADF page in an Oracle Application Framework–based page. It assumes knowledge of Oracle ADF and Oracle Application Framework development. It also assumes knowledge of the material in My Oracle Support Note 974949.1, "Oracle E-Business Suite SDK for Java", and My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications".

    Prerequisite Patch

    Download Patch 12726556:R12.FND.B from My Oracle Support and install it. The implementation described below requires Patch 12726556:R12.FND.B to provide the accessors for the ADF page. This patch is required in addition to the Oracle E-Business Suite SDK for Java patch described in My Oracle Support Note 974949.1.

    Development Environments

    You need two different JDeveloper environments: Oracle ADF and OA Framework.

    Oracle ADF Development Environment

    You build your Oracle ADF page using JDeveloper 11g. You should use JDeveloper 11g R1 (the latest is 11.1.1.6.0) if you need to use other products in the Oracle Fusion Middleware stack, such as Oracle WebCenter, Oracle SOA Suite, or BI. You should use JDeveloper 11g R2 (the latest is 11.1.2.3.0) if you do not need other Oracle Fusion Middleware products. JDeveloper 11g R2 is an Oracle ADF-specific release that supports the latest Java EE standards and has various core improvements.

    Oracle Application Framework Development Environment

    Build your OA Framework page using a development environment corresponding to your Oracle E-Business Suite version. You must use Release 12.1.2 or later because the rich content container was introduced in Release 12.1.2. See "OA Framework - How to find the correct version of JDeveloper to use with eBusiness Suite 11i or Release 12.x" (My Oracle Support Doc ID 416708.1).

    Building your Oracle ADF Page

    Typically you build your ADF page using the session management feature of the Oracle E-Business Suite SDK for Java, as described in My Oracle Support Note 974949.1. Also see My Oracle Support Note 1296491.1, "FAQ for Integration of Oracle E-Business Suite and Oracle Application Development Framework (ADF) Applications".

    Building an ADF Page with the Hierarchy Viewer

    If you are using the ADF hierarchy viewer, you should set up the structure and settings of the ADF page as follows, or the hierarchy viewer may not fill the entire area it is supposed to fill (this is especially a problem in Firefox):

    - Create a stretchable component as the parent component for the hierarchy viewer, such as af:panelStretchLayout (underneath the af:form component in the structure). Use af:panelStretchLayout for Oracle ADF 11.1.1.6 and earlier; for later versions of Oracle ADF, use af:panelGridLayout.
    - Create your hierarchy viewer component inside the stretchable component.

    Create Function in Oracle E-Business Suite Instance

    In your Oracle E-Business Suite instance, create a function for your ADF page with the following parameters. You can use either the Functions window in the System Administrator responsibility or the Functions page in the Functional Administrator responsibility:

    - Function
    - Function Name
    - Type = External ADF Function (ADFX)
    - HTML Call = GWY.jsp?targetPage=faces/<your ADF page>

    You must also add your function to an Oracle E-Business Suite menu or permission set and set up function security or role-based access control (RBAC) so that the user has authorization to access the function. If you do not want the function to appear on the navigation menu, add the function without a menu prompt. See the Oracle E-Business Suite System Administrator's Guide Documentation Set for more information.

    Testing the Function from the Oracle E-Business Suite Home Page

    It's a good idea to test launching your ADF page from the Oracle E-Business Suite Home Page. Add your function to the navigation menu for your responsibility with a prompt and try launching it. If your ADF page expects parameters from the surrounding page, however, those might not be available.

    Setting up the Oracle Application Framework Rich Container

    Once you have built your Oracle ADF 11g page, you need to embed it in your Oracle Application Framework page.

    Create Rich Content Container in your OA Framework JDeveloper environment

    In the OA Extension Structure pane for your OAF page, select the region where you want to add the rich content, and add a richContainer item to the region. Set the following properties on the richContainer item:

    - id
    - Content Type = Others (for Release 12.1.3; this property value may change in a future release)
    - Destination Function = [function code]
    - Width (in pixels or percent, such as 100%)
    - Height (in pixels)
    - Parameters = [any parameters your Oracle ADF page is expecting to receive from the Oracle Application Framework page]

    Parameters

    In the Parameters property, specify parameters that will be passed to the embedded content as a list of comma-separated name-value pairs. Dynamic parameters may be specified as paramName={@viewAttr}.

    Dynamic Rich Content Container Properties

    If you want your rich content container to display a different Oracle ADF page depending on other information, you would set up a different function for each different Oracle ADF page. You would then set the Destination Function and Parameters properties programmatically, instead of setting them in the Property Inspector.
    In the processRequest() method of your Oracle Application Framework page controller, where OAFRichContentPage is the ID of your richContainer item and the parameters are whatever parameters your ADF page expects, your code might look similar to this code fragment:

    OARichContainerBean richBean = (OARichContainerBean) webBean.findChildRecursive("OAFRichContentPage");
    if (richBean != null) {
        if (isFirstCondition) {
            richBean.setFunctionName("ADF_EXAMPLE_EMBEDDED");
            richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                + "&ParamPersonId=" + personId
                + "&ParamUserId=" + userId
                + "&ParamRespId=" + respId
                + "&ParamRespApplId=" + respApplId
                + "&ParamFromOA=Y"
                + "&ParamSecurityGroupId=" + securityGroupId);
        }
        else if (isSecondCondition) {
            richBean.setFunctionName("ADF_EXAMPLE_OTHER_FUNCTION");
            richBean.setParameters("ParamLoginPersonId=" + loginPersonId
                + "&ParamPersonId=" + personId
                + "&ParamUserId=" + userId
                + "&ParamRespId=" + respId
                + "&ParamRespApplId=" + respApplId
                + "&ParamFromOA=Y"
                + "&ParamSecurityGroupId=" + securityGroupId);
        }
    }

    Read the article

< Previous Page | 265 266 267 268 269 270 271 272 273 274 275 276  | Next Page >