Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • nconf nagios config no services defined

    - by user1508056
    I've set up Nagios Core on OS X 10.7 Server via MacPorts without trouble. It seems to load fine, the sample config files all copied over to /opt/local/etc/nagios/objects/, and they are specified correctly in the nagios.cfg file. I then installed nconf manually and got it running without much fight. Then I clicked on "Generate Nagios config" in nconf and get 1 warning and 4 errors. When I expand the error box, here is what I see:

        Nagios Core 3.5.0
        Copyright (c) 2009-2011 Nagios Core Development Team and Community Contributors
        Copyright (c) 1999-2009 Ethan Galstad
        Last Modified: 03-15-2013
        License: GPL
        Website: http://www.nagios.org
        Reading configuration data...
           Read main config file okay...
           Read object config files okay...
        Running pre-flight check on configuration data...
        Checking services...
            Error: There are no services defined!
            Checked 0 services.
        Checking hosts...
            Error: There are no hosts defined!
            Checked 0 hosts.
        Checking host groups...
            Checked 0 host groups.
        Checking service groups...
            Checked 0 service groups.
        Checking contacts...
            Error: There are no contacts defined!
            Checked 0 contacts.
        Checking contact groups...
            Checked 0 contact groups.
        Checking service escalations...
            Checked 0 service escalations.
        Checking service dependencies...
            Checked 0 service dependencies.
        Checking host escalations...
            Checked 0 host escalations.
        Checking host dependencies...
            Checked 0 host dependencies.
        Checking commands...
            Checked 0 commands.
        Checking time periods...
            Checked 0 time periods.
        Checking for circular paths between hosts...
        Checking for circular host and service dependencies...
        Checking global event handlers...
        Checking obsessive compulsive processor commands...
        Checking misc settings...
            Warning: Nothing specified for illegal_macro_output_chars variable!
        Total Warnings: 1
        Total Errors: 3

    I've tried several different things (played with cache settings, changed file permissions/ownership, edited some config files manually, etc.) but nothing gets me past this step. The thing is, when I run 'sudo nagios -v /opt/local/etc/nagios/nagios.cfg' the output shows it is reading a number of services, a localhost, and a contact in the .cfg files... so I'm pretty confident those are OK, and the problem is that nconf isn't reading the correct .cfg files or something like that. Any ideas what to double-check? I did lots of googling and found nothing on this specific issue -- so either I'm special (I'm not) or I am overlooking something really simple. The path to the nagios binary is listed as /opt/local/bin/nagios, if that matters. Also, all the Nagios files are owned by nagios:nagios, whereas the nconf files are owned by my user, with only the directories/files specified in the nconf docs belonging to the _www user and/or group (things like output, temp, config, etc.). Thanks.
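
    One thing worth checking is whether nconf and Nagios are even looking at the same files. nconf generates its own object files (bundled as NagiosConfig.tgz in its output directory by default); the paths below are assumptions based on the nconf docs, not taken from the question, so adjust them to the actual install:

        # The config Nagios loads passes pre-flight, per the question:
        sudo nagios -v /opt/local/etc/nagios/nagios.cfg

        # Which object files does that config actually include?
        grep -E '^(cfg_file|cfg_dir)=' /opt/local/etc/nagios/nagios.cfg

        # Do nconf's generated files contain the expected definitions?
        mkdir /tmp/nconf-check && tar -xzf /var/www/nconf/output/NagiosConfig.tgz -C /tmp/nconf-check
        grep -rl 'define service' /tmp/nconf-check

    If the cfg_file/cfg_dir lines point at the MacPorts sample objects rather than at nconf's generated files, the two tools are simply validating different configs, which would explain why one passes and the other reports zero services, hosts and contacts.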

  • IPv6 host route is deleted after PMTU expires

    - by SAPikachu
    I am experimenting with my new IPv6 tunnel setup between my local Ubuntu box and a scratch Linode. I set up some Docker containers and configured a 6in4 tunnel server and IPv6 forwarding on the Linode:

        # uname -a
        Linux argo 3.15.4-x86_64-linode45 #1 SMP Mon Jul 7 08:42:36 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
        # ip addr
        .. snipped ..
        48: sit-sapikachu: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1472 qdisc noqueue state UNKNOWN group default
            link/sit 106.185.41.115 peer 1.2.3.4
            inet6 fd00::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::6ab9:2973/64 scope link
               valid_lft forever preferred_lft forever
        13: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
            link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
            inet 172.17.42.1/16 scope global docker0
               valid_lft forever preferred_lft forever
            inet6 fc00::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::5484:7aff:fefe:9799/64 scope link
               valid_lft forever preferred_lft forever

        // Docker containers are bridged to docker0

    On my local box, I configured a 6in4 tunnel interface to connect to the Linode box, and added a host route to one of the Docker containers:

        # uname -a
        Linux sapikachu-netbox 3.13.0-24-generic #47-Ubuntu SMP Fri May 2 23:30:00 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
        # ip addr
        .. snipped ..
        16: sit-argo: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default
            link/sit 0.0.0.0 peer 106.185.41.115
            inet6 fd00::2/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::a97:302/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::ac19:1/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::c0a8:1f0/64 scope link
               valid_lft forever preferred_lft forever
            inet6 fe80::c0a8:1fa/64 scope link
               valid_lft forever preferred_lft forever
        2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
            link/ether *** brd ff:ff:ff:ff:ff:ff
            .. snipped ..
            inet6 fd00:0:1::1/64 scope global
               valid_lft forever preferred_lft forever
            inet6 fe80::2e0:6fff:fe0e:365e/64 scope link
               valid_lft forever preferred_lft forever

        # ip route replace fc00::1875:8606:d8c1:8a9d via fd00::1   # Add route to docker container
        # ip -6 route
        .. snipped unrelated routes ..
        fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires 590sec mtu 1472
        fd00::/64 dev sit-argo proto kernel metric 256
        fd00:0:1::/64 dev eth0 proto kernel metric 256
        fe80::/64 dev sit-argo proto kernel metric 256

    (Note that the tunnel MTU on my local box is different from the server's; this is intentional, for testing.)

    After adding the host route to the Docker container (fc00::1875:8606:d8c1:8a9d), I can ping the container without problems until the route expires. After that, I can't get a reply any more. If I run ip -6 route within a few seconds after expiration, the expiration time of the host route is a negative number:

        fc00::1875:8606:d8c1:8a9d via fd00::1 dev sit-argo metric 1024 expires -1sec

    And the output of ip route get fc00::1875:8606:d8c1:8a9d shows that it is routed to my default IPv6 gateway (which fails to route it correctly, of course, since the address is not globally routable). After some time, the host route disappears without a trace. This problem won't happen if I do either one of the following things:

    - Set the MTU of the tunnel on my local box to be the same as the server's (1472). The route won't have an expiration time in either ip -6 route or ip route get in this case.
    - Instead of adding a host route, add a route with a network mask (even /127 works). In this case ip -6 route shows the route without an expiration time; ip route get shows an expiration time, but it is correctly refreshed after expiration.

    Although this problem can be easily worked around, I am curious to know why it happens. Is there an error in my configuration, or is this a kernel bug?
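
    For anyone trying to reproduce this, a minimal sketch (addresses taken from the question; the watch interval is my own choice) to observe the cached-route decay and test the /127 workaround described above:

        # Watch the PMTU-cloned route count down toward (and past) expiry
        watch -n 5 'ip -6 route show fc00::1875:8606:d8c1:8a9d; ip -6 route get fc00::1875:8606:d8c1:8a9d'

        # Workaround from the question: a /127 prefix route survives expiry correctly
        ip route replace fc00::1875:8606:d8c1:8a9d/127 via fd00::1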

  • Samba with Active Directory - shares are readonly, NT_STATUS_MEDIA_WRITE_PROTECTED

    - by froh42
    I've set up a Samba server that seems to work; all shares are seemingly exported read-only, however. The machine is called "lx". When I'm on lx I can run the following command:

        froh@lx:~$ smbclient //lx/export -UAdministrator
        Enter Administrator's password:
        Domain=[CUSTOMER] OS=[Unix] Server=[Samba 3.5.4]
        smb: \> mkdir wrzlbrmpf
        NT_STATUS_MEDIA_WRITE_PROTECTED making remote directory \wrzlbrmpf
        smb: \> ls
          .       D   0  Fri Dec  3 19:04:20 2010
          ..      D   0  Sun Nov 28 01:32:37 2010
          zork    D   0  Fri Dec  3 18:53:33 2010
          bar     D   0  Sun Nov 28 23:52:43 2010
          ork         1  Fri Dec  3 18:53:02 2010
          foo         1  Sun Nov 28 23:52:41 2010
          gaga    D   0  Fri Dec  3 19:04:20 2010

    How can I troubleshoot this? What I did: First I set up a fresh install of Ubuntu 10.10 x64. Second I got Kerberos working with the following krb5.conf file:

        [libdefaults]
            ticket_lifetime = 24000
            clock_skew = 300
            default_realm = CUSTOMER.LOCAL

        [realms]
            CUSTOMER.LOCAL = {
                kdc = SB4.customer.local:88
                admin_server = SB4.customer.local:464
                default_domain = CUSTOMER.LOCAL
            }

        [domain_realm]
            .customer.local = CUSTOMER.LOCAL
            customer.local = CUSTOMER.LOCAL

        #[login]
        #    krb4_convert = true
        #    krb4_get_tickets = false

    I also added winbind to group, passwd and shadow in nsswitch.conf. Seemingly Kerberos works:

        root@lx:~# net ads testjoin
        Join is OK
        root@lx:~# wbinfo -a 'Administrator%MYSECRETPASSWORD'
        plaintext password authentication succeeded
        challenge/response password authentication succeeded

    wbinfo -u and wbinfo -g also spit out a list of users and a list of groups respectively. I noted that domain accounts did NOT include a domain, and they are in German (as on the SBS 2003 that is the domain server). So I get a "Domänenbenutzer" in wbinfo -u's output, not a "CUSTOMER+Domain User" or something similar. I'm not sure anymore what I did to the PAM configuration, but here is what I currently have:

        root@lx:/etc/pam.d# cat samba
        @include common-auth
        @include common-account
        @include common-session-noninteractive
        root@lx:/etc/pam.d# grep -ve '^#' common-auth
        auth    [success=3 default=ignore]    pam_krb5.so minimum_uid=1000
        auth    [success=2 default=ignore]    pam_unix.so nullok_secure try_first_pass
        auth    [success=1 default=ignore]    pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        auth    requisite                     pam_deny.so
        auth    required                      pam_permit.so
        root@lx:/etc/pam.d# grep -ve '^#' common-account
        account [success=2 new_authtok_reqd=done default=ignore]    pam_unix.so
        account [success=1 new_authtok_reqd=done default=ignore]    pam_winbind.so
        account requisite    pam_deny.so
        account required     pam_permit.so
        account required     pam_krb5.so minimum_uid=1000
        root@lx:/etc/pam.d# grep -ve '^#' common-session-noninteractive
        session [default=1]  pam_permit.so
        session requisite    pam_deny.so
        session required     pam_permit.so
        session optional     pam_krb5.so minimum_uid=1000
        session required     pam_unix.so
        session optional     pam_winbind.so

    At some point I joined the Linux box into the AD domain. After (manually) creating a home directory on the Linux box I can log in using the Administrator user with the password taken from AD.

    Now I run Samba with the following setup:

        [global]
            netbios name = LX
            realm = CUSTOMER.LOCAL
            workgroup = CUSTOMER
            security = ADS
            encrypt passwords = yes
            password server = 192.168.20.244   # IP of the domain controller
            os level = 0
            socket options = TCP_NODELAY SO_RCVBUF=16384 SO_SNDBUF=16384
            idmap uid = 10000-20000
            idmap gid = 10000-20000
            winbind enum users = Yes
            winbind enum groups = Yes
            preferred master = no
            winbind separator = +
            dns proxy = no
            wins proxy = no
            # client NTLMv2 auth = Yes
            log level = 2
            logfile = /var/log/samba/log.smbd.%U
            template homedir = /home/%U
            template shell = /bin/bash

        [export]
            path = /mnt/sdc1/export
            read only = No
            public = Yes

    Currently I don't care whether export is exported to everyone or just one user; I want to see somebody WRITING to that directory before I start fiddling with the authentication settings (who may access it). As mentioned, accessing the share from smbclient results in this NT_STATUS_MEDIA_WRITE_PROTECTED. Accessing it from Windows shows ACLs that look correct (the user may write) -- but it does not work; I can only read files, not write. The directory to be exported looks like this:

        root@lx:/etc/pam.d# ls -ld /mnt/
        drwxr-xr-x 5 root root 4096 2010-11-28 01:29 /mnt/
        root@lx:/etc/pam.d# ls -ld /mnt/sdc1/
        drwxr-xr-x 4 froh froh 4096 2010-11-28 01:32 /mnt/sdc1/
        root@lx:/etc/pam.d# ls -ld /mnt/sdc1/export/
        drwxrwxrwx+ 5 administrator domänen-admins 4096 2010-12-03 19:04 /mnt/sdc1/export/
        root@lx:/etc/pam.d# getfacl /mnt/
        getfacl: Entferne führende '/' von absoluten Pfadnamen
        # file: mnt/
        # owner: root
        # group: root
        user::rwx
        group::r-x
        other::r-x
        root@lx:/etc/pam.d# getfacl /mnt/sdc1/
        getfacl: Entferne führende '/' von absoluten Pfadnamen
        # file: mnt/sdc1/
        # owner: froh
        # group: froh
        user::rwx
        group::r-x
        other::r-x
        root@lx:/etc/pam.d# getfacl /mnt/sdc1/export/
        getfacl: Entferne führende '/' von absoluten Pfadnamen
        # file: mnt/sdc1/export/
        # owner: administrator
        # group: domänen-admins
        user::rwx
        group::rwx
        group:domänen-admins:rwx
        mask::rwx
        other::rwx
        default:user::rwx
        default:group::rwx
        default:group:domänen-admins:rwx
        default:mask::rwx
        default:other::rwx

    My, oh my, what am I overlooking? What am I too blind to see?
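
    One way to separate filesystem permissions from Samba-level restrictions is to raise the client-side debug level for a single connection and retry the failing write. A sketch (the debug level is my own choice, everything else is taken from the question):

        # Bump debugging for one smbclient session and retry the failing mkdir
        smbclient //lx/export -UAdministrator -d 3 -c 'mkdir wrzlbrmpf'

        # Check the effective share definition Samba is actually using
        testparm -s | grep -A5 '\[export\]'

    If testparm shows the share as writable and the Unix ACLs allow writing (as the getfacl output above suggests), the NT_STATUS_MEDIA_WRITE_PROTECTED must be coming from a Samba-level check, which narrows the hunt considerably.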

  • ovs-vsctl: "eth0" is not a valid UUID

    - by Przemek Lach
    I'm trying to set up an Open vSwitch inside my Ubuntu 12.04 Server VM. I have created three interfaces for this VM and I want to create a port mirror inside the VM using these three interfaces and Open vSwitch. There are three host-only adapters: eth0, eth1, eth2. The idea is that three other VMs will be connected to these adapters. One of these VMs will stream UDP video to eth0, and I want the vswitch'd VM to mirror those packets from eth0 onto eth1 and eth2. Each of the VMs connected to eth1 and eth2 will get the same video stream. I performed the following steps to install Open vSwitch:

        $ apt-get install python-simplejson python-qt4 python-twisted-conch automake autoconf gcc uml-utilities libtool build-essential
        $ apt-get install build-essential autoconf automake pkg-config
        $ wget http://openvswitch.org/releases/openvswitch-1.7.1.tar.gz
        $ tar xf openvswitch-1.7.1.tar.gz
        $ cd openvswitch-1.7.1
        $ apt-get install libssl-dev iproute tcpdump linux-headers-`uname -r`
        $ ./boot.sh
        $ ./configure --with-linux=/lib/modules/`uname -r`/build
        $ make
        $ sudo make install

    After installation I configured it as follows:

        $ insmod datapath/linux/openvswitch.ko
        $ sudo touch /usr/local/etc/ovs-vswitchd.conf
        $ mkdir -p /usr/local/etc/openvswitch
        $ ovsdb-tool create /usr/local/etc/openvswitch/conf.db

    Then I started the server:

        $ ovsdb-server /usr/local/etc/openvswitch/conf.db \
            --remote=punix:/usr/local/var/run/openvswitch/db.sock \
            --remote=db:Open_vSwitch,manager_options \
            --private-key=db:SSL,private_key \
            --certificate=db:SSL,certificate \
            --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach --log-file
        $ ovs-vsctl --no-wait init    (run only once)
        $ ovs-vswitchd --pidfile --detach

    The above steps I got from this tutorial, and it all worked fine. I then proceeded to add a port mirror based on the Open vSwitch documentation under Port Mirroring.

    I successfully completed the following commands:

        $ ovs-vsctl add-br br0
        $ ovs-vsctl add-port br0 eth0
        $ ovs-vsctl add-port br0 eth1
        $ ovs-vsctl add-port br0 eth2
        $ ifconfig eth0 promisc up
        $ ifconfig eth1 promisc up
        $ ifconfig eth2 promisc up

    At this point when I run ovs-vsctl show I get the following:

        75bda8c2-b870-438b-9115-e36288ea1cd8
            Bridge "br0"
                Port "br0"
                    Interface "br0"
                        type: internal
                Port "eth0"
                    Interface "eth0"
                Port "eth2"
                    Interface "eth2"
                Port "eth1"
                    Interface "eth1"

    And when I run ifconfig I get the following:

        eth0      Link encap:Ethernet  HWaddr 08:00:27:9f:51:ca
                  inet6 addr: fe80::a00:27ff:fe9f:51ca/64 Scope:Link
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                  RX packets:17 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1494 (1.4 KB)  TX bytes:468 (468.0 B)

        eth1      Link encap:Ethernet  HWaddr 08:00:27:53:02:d4
                  inet6 addr: fe80::a00:27ff:fe53:2d4/64 Scope:Link
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                  RX packets:17 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1494 (1.4 KB)  TX bytes:468 (468.0 B)

        eth2      Link encap:Ethernet  HWaddr 08:00:27:cb:a5:93
                  inet6 addr: fe80::a00:27ff:fecb:a593/64 Scope:Link
                  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                  RX packets:17 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1494 (1.4 KB)  TX bytes:468 (468.0 B)

        eth3      Link encap:Ethernet  HWaddr 08:00:27:df:bb:d8
                  inet addr:192.168.1.139  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fedf:bbd8/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:2211 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1196 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:182987 (182.9 KB)  TX bytes:125441 (125.4 KB)

    NOTE: I use eth3 as a bridged adapter for SSH'ing into the VM. So now I think I've done everything correctly, but when I try to create the mirror using the following command:

        $ ovs-vsctl -- set Bridge br0 mirrors=@m \
            -- --id=@eth0 get Port eth0 \
            -- --id=@eth1 get Port eth1 \
            -- --id=@m create Mirror name=app1Mirror select-dst-port=eth0 select-src-port=@eth0 output-port=@eth1,eth2

    I get the following error:

        ovs-vsctl: "eth0" is not a valid UUID

    I don't understand why it's not able to find the interfaces?
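
    For what it's worth, a plausible reading of the error: ovs-vsctl only substitutes record UUIDs for tokens that start with @, so the bare eth0 in select-dst-port=eth0 (and the bare eth2 in output-port=@eth1,eth2) is handed to the database as a literal string, which is not a valid UUID. A sketch of the same setup with every port passed by reference -- note that a Mirror row takes a single output-port, so mirroring to two destinations would need two Mirror rows (the mirror names here are my own, and this is untested):

        $ ovs-vsctl -- set Bridge br0 mirrors=@m1,@m2 \
            -- --id=@p0 get Port eth0 \
            -- --id=@p1 get Port eth1 \
            -- --id=@p2 get Port eth2 \
            -- --id=@m1 create Mirror name=mirror1 select-src-port=@p0 select-dst-port=@p0 output-port=@p1 \
            -- --id=@m2 create Mirror name=mirror2 select-src-port=@p0 select-dst-port=@p0 output-port=@p2

    This follows the shape of the Port Mirroring examples in the Open vSwitch documentation rather than being a verified configuration.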

  • Pushing DNSSEC updates with offline keys

    - by eggyal
    In a non-professional capacity, I look after the DNS of some 18 domains: mostly personal/vanity domains for immediate family. I outsource the whole shebang to an inexpensive managed hosting provider with a web interface through which I manage the zones; since the provider also offers DNSSEC, I have successfully deployed that too.

    These domains are so unimportant that an attack targeted against them seems much less likely than a general compromise of my provider's systems, at which point the records of all their customers might be changed to misdirect traffic (perhaps with extremely long TTLs). DNSSEC could protect against such an attack, but only if the zone's private keys are not held by the hosting provider. So, I wonder: how can one keep DNSSEC private keys offline yet still transfer signed zones to an outsourced DNS host?

    The most obvious answer (to me, at least) is to run one's own shadow/hidden master (from which the provider can slave) and then copy offline-signed zonefiles to the master as required. The problem is that the only machine I (want to*) control is my personal laptop, which usually connects from a typical home ADSL (behind NAT over a dynamically-assigned IP address). Having them slave from that (e.g. with a very long Expiry time on the zone for periods when my laptop is offline/unavailable) would not only require a Dynamic DNS record from which they can slave (if indeed they can slave from a named host rather than a static IP address), but would also involve me running a DNS server on my laptop and opening both it and my home network up to incoming zone transfer requests: not ideal.

    I would prefer a much more push-oriented design, whereby my laptop initiates the transfer of offline-signed zonefiles/updates to the provider's servers. I looked into whether nsupdate could fit the bill: the documentation is a little sketchy, but my testing (with BIND 9.7) suggests it can indeed update DNSSEC zones, but only where the server holds the keys to perform the zone signing; I have not found a way to have it take an update including the relevant RRSIG/NSEC/etc. records and have the server accept them. Is this a supported use-case? If not, I suspect the only solutions which could fit the bill will involve non-DNS-based transfer of the zone updates, and I would welcome recommendations that are supported by (hopefully inexpensive) hosting providers: SFTP/SCP? rsync? RDBMS replication? Proprietary API?

    Finally, what would be the practical implications of such a setup? Key rotation is jumping out at me as being an obvious difficulty, especially if my laptop is offline for extended periods. But the zones are extremely stable, so perhaps I could get away with long-lived ZSKs**...?

    * Whilst I could run a shadow/hidden master on e.g. an outsourced VPS, I dislike the overhead of having to secure / manage / monitor / maintain yet another system; not to mention the additional financial costs of so doing.

    ** Okay, this would enable a concerted attacker to replay outdated records -- but the risk and impact of such are both tolerable in the case of these domains.
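
    For the sake of illustration, a minimal offline-signing-and-push sketch with BIND's tools (the zone name, key filenames and rsync target are placeholders; whether a given provider accepts pre-signed zone files at all is exactly the open question here):

        # On the offline laptop: sign the zone with locally held keys
        dnssec-signzone -o example.org -N unixtime \
            -k Kexample.org.+008+11111.key db.example.org Kexample.org.+008+22222.key

        # Push the resulting signed zone to the provider over SSH
        rsync -av db.example.org.signed provider.example.net:/zones/db.example.org

    The nsupdate route explored above would avoid full-zone pushes, but as noted, it appears to require the server to hold the signing keys.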

  • Identifying the cause of my DNS failure (domain not propagating)

    - by thejartender
    I have set up a DNS server with the help of two helpful tutorials:

    - http://linuxconfig.org/linux-dns-server-bind-configuration
    - http://ulyssesonline.com/2007/11/07/how-to-setup-a-dns-server-in-ubuntu/

    I am using Ubuntu and BIND9, and had issues I tried to resolve on my own thanks to a question I posted here earlier that pointed out my mistake of using RFC 1918 addresses in my previous SOA record:

        $TTL 3D
        @               IN  SOA  ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 );
        thejarbar.org.  IN  A    10.0.0.42
        @               IN  NS   ns.thejarbar,org.
        yuccalaptop     IN  A    10.0.0.19
        ns              IN  A    10.0.0.42
        gw              IN  A    10.0.0.138
        www             IN  CNAME  thejarbar.org.

        $TTL 600
        0.0.10.in-addr.arpa.  IN  SOA  ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 );
        0.0.10.in-addr.arpa.  IN  NS   ns.thejarbar.org.
        42   IN  PTR  thejarbar.org.
        19   IN  PTR  yuccalaptop.thejarbar.org.
        138  IN  PTR  gw.thejarbar.org.

    I read the ranges that are used under RFC 1918 and modified my router's resource pool to assign LAN devices IPs within the 30.0.0.0 range, and have now modified my SOA to:

        $TTL 600
        @               IN  SOA  ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 );
        thejarbar.org.  IN  A    30.0.0.42
        @               IN  NS   ns.thejarbar,org.
        yuccalaptop     IN  A    10.0.0.19
        ns              IN  A    30.0.0.42
        gw              IN  A    30.0.0.138
        www             IN  CNAME  thejarbar.org.

        $TTL 600
        0.0.10.in-addr.arpa.  IN  SOA  ns.thejarbar.org. email. ( 13112012 28800 3600 604800 38400 );
        0.0.30.in-addr.arpa.  IN  NS   ns.thejarbar.org.
        42   IN  PTR  thejarbar.org.
        19   IN  PTR  yuccalaptop.thejarbar.org.
        138  IN  PTR  gw.thejarbar.org.

    I can ping my nameserver ns.thejarbar.org and it gives me the correct ISP IP address, but my domain never seems to propagate to my nameserver. I have searched for a concise tutorial that covers setting up a DNS server with a nameserver that hosts the site itself (mine, in this case). I am fully aware that this is not recommended and am using this for my own learning purposes. Getting to the question: given the lack of information in the tutorials I looked at (nothing about RFC 1918, and no example of swapping these addresses with ISP IPs), is my router modification going to help me? It does not seem to be. I have also tried, as recommended, using my ISP IP instead of the values I posted. My site never propagated to my nameserver. What could be causing this? I have run dig thejarbar.org @88.89.190.171 and get an authoritative response. Can anyone assist me with the final steps I may be missing here?
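
    Two quick checks that would likely surface syntax problems in these files before worrying about propagation (the zone file paths are placeholders for wherever the zones actually live):

        # Have BIND parse each zone exactly as it loads it
        named-checkzone thejarbar.org /etc/bind/db.thejarbar.org
        named-checkzone 0.0.30.in-addr.arpa /etc/bind/db.30.0.0

        # Compare what the parent zone delegates with what the server answers
        dig NS thejarbar.org +trace
        dig @88.89.190.171 thejarbar.org ANY

    named-checkzone in particular refuses to load a zone with a malformed NS target, which is worth ruling out here before anything else.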

  • duplicate cache pages: Varnish

    - by Sukhjinder Singh
    Recently we configured Varnish on our server. It was set up successfully, but we noticed that if we open any page in multiple browsers, Varnish sends the request to Apache no matter whether the page is cached or not. If we refresh twice in each browser, it creates duplicate copies of the same page. What exactly should happen: if any page is cached by Varnish, subsequent requests should be served from Varnish itself when we open the same page in a browser, or when we open that page from a different IP address. The following is my default.vcl file:

        backend default {
            .host = "127.0.0.1";
            .port = "80";
        }

        sub vcl_recv {
            if (req.url ~ "^/search/.*$") {
            } else {
                set req.url = regsub(req.url, "\?.*", "");
            }
            if (req.restarts == 0) {
                if (req.http.x-forwarded-for) {
                    set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
                } else {
                    set req.http.X-Forwarded-For = client.ip;
                }
            }
            if (!req.backend.healthy) {
                unset req.http.Cookie;
            }
            set req.grace = 6h;
            if (req.url ~ "^/status\.php$" ||
                req.url ~ "^/update\.php$" ||
                req.url ~ "^/admin$" ||
                req.url ~ "^/admin/.*$" ||
                req.url ~ "^/flag/.*$" ||
                req.url ~ "^.*/ajax/.*$" ||
                req.url ~ "^.*/ahah/.*$") {
                return (pass);
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset req.http.Cookie;
            }
            if (req.http.Cookie) {
                set req.http.Cookie = ";" + req.http.Cookie;
                set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
                set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
                set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
                set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
                if (req.http.Cookie == "") {
                    unset req.http.Cookie;
                } else {
                    return (pass);
                }
            }
            if (req.request != "GET" && req.request != "HEAD" &&
                req.request != "PUT" && req.request != "POST" &&
                req.request != "TRACE" && req.request != "OPTIONS" &&
                req.request != "DELETE") {
                return (pipe);  /* Non-RFC2616 or CONNECT which is weird. */
            }
            if (req.request != "GET" && req.request != "HEAD") {
                return (pass);
            }
            if (req.http.Accept-Encoding) {
                if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
                    # No point in compressing these
                    remove req.http.Accept-Encoding;
                } else if (req.http.Accept-Encoding ~ "gzip") {
                    set req.http.Accept-Encoding = "gzip";
                } else if (req.http.Accept-Encoding ~ "deflate") {
                    set req.http.Accept-Encoding = "deflate";
                } else {
                    # unknown algorithm
                    remove req.http.Accept-Encoding;
                }
            }
            return (lookup);
        }

        sub vcl_deliver {
            if (obj.hits > 0) {
                set resp.http.X-Varnish-Cache = "HIT";
            } else {
                set resp.http.X-Varnish-Cache = "MISS";
            }
        }

        sub vcl_fetch {
            if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
                set beresp.ttl = 10m;
            }
            if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
                unset beresp.http.set-cookie;
            }
            set beresp.grace = 6h;
        }

        sub vcl_hash {
            hash_data(req.url);
            if (req.http.host) {
                hash_data(req.http.host);
            } else {
                hash_data(server.ip);
            }
            return (hash);
        }

        sub vcl_pipe {
            set req.http.connection = "close";
        }

        sub vcl_hit {
            if (req.request == "PURGE") { ban_url(req.url); error 200 "Purged"; }
            if (!obj.ttl > 0s) { return (pass); }
        }

        sub vcl_miss {
            if (req.request == "PURGE") { error 200 "Not in cache"; }
        }
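
    A quick way to verify what Varnish is doing per request is to watch the X-Varnish-Cache header this VCL already sets (the hostname below is a placeholder):

        # First request should MISS, the second should HIT if the object was cached
        curl -sI http://www.example.com/some/page | grep -Ei 'x-varnish-cache|set-cookie|vary'
        curl -sI http://www.example.com/some/page | grep -Ei 'x-varnish-cache|set-cookie|vary'

    If the second request still shows MISS, the Set-Cookie and Vary headers in that output are the usual suspects: a backend Set-Cookie, or a Vary: User-Agent / Vary: Cookie response, gives each browser (or each session) its own cache object, which matches the symptom described.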

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Clustered Shared Volume(s) (CSV), but it seems like replication support for these is either not available or brand new in both of these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because they make the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in its examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) The company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives. 2) In the future, it could be advantageous for us not to be locked into a particular brand or type of storage, and third-party replication software generally allows replication to heterogeneous storage devices.

    I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or am overlooking/missing something.

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up: I've been a programmer for quite some time now, but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either:

    - kill -9 a process (bad)
    - spontaneously pull the power plug on a running computer or server (worse)

    However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern: Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt -- who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through Disk Utility. It will report no problems, but I'm still concerned about this. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost?

    My Hope: My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is that I've had to repair some MySQL tables, but I don't seem to have lost any data. But if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and protect against it.

    My Question(s): Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it? I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any highly recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most:

    - Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system. (It seems instant, but can someone slow it down for me?)
    - Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely).
    - Descriptions of the measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur. (To comfort me.)
    - Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
    - Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated.

    Thanks so much!
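
    On the "what to do afterwards" point, a small post-incident checklist along these lines is a reasonable starting sketch on OS X and Linux (the exact tools vary by platform and the device names are placeholders; treat this as an example, not a complete audit):

        # Flush pending writes before a risky operation, if you get the chance
        sync

        # After an unclean shutdown on OS X: verify the volume structure
        diskutil verifyVolume /

        # On a Linux box: check the filesystem (from rescue/single-user mode)
        fsck -f /dev/sda1

        # Verify installed package files against their checksums (Debian/Ubuntu)
        debsums -s

    Journaling filesystems (journaled HFS+, ext3/ext4) are what keep filesystem metadata consistent across a power pull; what they don't guarantee is application data that was only buffered in memory, which is exactly the "last few seconds" loss described above.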

  • How to use iptables to forward all data from an IP to a Virtual Machine

    - by jro
    OK, in an attempt to get some response, a TL;DR version. I know that the following command:

        iptables -A PREROUTING -t nat -i eth0 --dport 80 --source 1.1.1.1 -j REDIRECT --to-port 8080

    ... will redirect all traffic from port 80 to port 8080. The problem is that I have to do this for every port that is to be redirected. To be future-proof, I want all ports for an IP to be redirected to a different (internal) IP, so that if someone decides to enable SSH, they can connect directly without worrying about iptables. What is needed to reliably forward all traffic from an external IP to an internal IP, and vice versa?

    Extended version: I've scoured the internet for this, but I never got a solid answer. What I have is one physical server (HOST), with several virtual machines (VMs) that need traffic redirected to them. Just getting it to work with a single machine is enough for now. The VMs run under VirtualBox and are set to use a host-only adapter (vboxnet0). Everything seems to work, but it is lagging greatly. Both the host (CentOS 5.6) and the guest (Ubuntu 10.04) machine are running Linux. What I did was the following:

    - Configure the VM to have a static IP in the network of the vboxnet0 adapter.
    - Add an IP alias to the host, registering the dedicated (outside) IP.
    - Set up the kernel to allow traffic to come through (via sysctl).
    - Configure iptables to DNAT and SNAT data from a given IP address to the internal address.

    iptables commands:

        sudo iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        sudo iptables -A POSTROUTING -t nat -j MASQUERADE
        iptables -t nat -I PREROUTING -d $OUT_IP -I eth0 -j DNAT --to-destination $IN_IP
        iptables -t nat -I POSTROUTING -s $IN_IP -o eth0 -j SNAT --to-source $OUT_IP

    Now the site works, but is really, really slow. I'm hoping I missed something simple, but I'm out of ideas for now. Some background info: before this, the site was working with basic port forwarding. E.g. port 80 was mapped to port 8080 using iptables. In VirtualBox (with the network adapter configured as NAT), a port forwarding the other way around made things work beautifully. The problem was twofold: first, multiple ports needed to be forwarded (for admin interfaces, https, ssh, etc.). Second, it only allowed one IP address to use port 80. To resolve things, multiple external IP addresses are used for different (sub)domains. Likewise, the "VirtualBox" network will contain the virtual machines:

        DNS            Ext. IP    Adapter    VM "VirtualBox"    IP
        ------------------------------------------------------------------
        a.example.com  1.1.1.1    eth0:1     vm_guest_1         192.168.56.1
        b.example.com  2.2.2.2    eth0:2     vm_guest_2         192.168.56.2
        c.example.com  3.3.3.3    eth0:3     vm_guest_3         192.168.56.3

    And so on. Put simply, the goal is to channel all traffic from a.example.com to vm_guest_1 (or, put differently, from 1.1.1.1 to 192.168.56.1). And achieve this with an acceptable speed :).
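
    For reference, a minimal 1:1 NAT sketch for one guest (addresses taken from the table above; whether the FORWARD policy needs these explicit ACCEPT rules depends on the existing firewall):

        # Allow the kernel to route between interfaces at all
        sysctl -w net.ipv4.ip_forward=1

        # Map every port of the external IP to the guest, and back
        iptables -t nat -A PREROUTING  -d 1.1.1.1      -j DNAT --to-destination 192.168.56.1
        iptables -t nat -A POSTROUTING -s 192.168.56.1 -j SNAT --to-source 1.1.1.1

        # Let forwarded traffic through the FORWARD chain in both directions
        iptables -A FORWARD -d 192.168.56.1 -j ACCEPT
        iptables -A FORWARD -s 192.168.56.1 -j ACCEPT

    One design note: combining a blanket MASQUERADE rule with explicit per-IP SNAT rules (as in the commands above) can produce surprising interactions; with per-IP SNAT in place, the MASQUERADE rule is usually unnecessary for these guests.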

  • Connecting PC to TV via HDMI/DVI: Windows XP doesn't allow the appropriate screen resolution

    - by Jørgen
    I have a computer that is connected to the living room TV (a Panasonic) via HDMI. There is no other monitor connected. My problem is that the computer, which is running Windows XP, does not allow me to set the proper resolution for the TV. Both the graphics adapter and the TV should support the 1280x720 resolution, but it cannot be selected -- the only available options are 1280x600 and 800x600, both in the "native" Windows dialog box and the custom Intel graphics options dialog box. Does anyone have a suggestion for a solution to this? Things I've thought of:

    - Setting the resolution directly in the registry (where?)
    - Installing some "custom" monitor driver (the TV manufacturer does not appear to provide any; currently the "generic" one is used)

    Details on the setup:

    Connection: DVI output on the computer via a passive DVI-HDMI adapter to the HDMI input on the TV; audio is run over a separate link. The TV is able to combine video and audio without any problem, and the problem is there regardless of whether or not the audio is connected. The connection is several meters long, through some walls; for this reason, using a VGA cable instead is not an option. Note that the report below explicitly says that the TV supports 1280x720. Still, I am not allowed to select it in Graphics Options; only 1280x600 and 800x600 are available. For 800x600, there's a lot of black around the edges; for 1280x600, the screen is "zoomed" so the edges of the monitor image (like the taskbar) are not visible.

    Other: The computer is running Windows XP. More recent versions of Windows are not an option (I have no licence). Linux is probably not an option (some of the video streaming sites I plan to use do not support it, I think). I wrote the rest of the details below. Thanks for any help!!

    TV: Panasonic TX-L32X10Y, European version; a 720p 32" quite "regular" LCD TV. Allowed resolutions according to the manual:

        Signal name: 640x480 @60Hz
        Horizontal frequency: 31.47 kHz
        Vertical frequency: 60Hz

        Signal name: 750 (720) / 60p
        Horizontal frequency: 45.00 kHz
        Vertical frequency: 60Hz

        Signal name: 1,125 (1,080) / 60p
        Horizontal frequency: 67.50 kHz
        Vertical frequency: 60Hz

    (This is exactly how the manual presents it. PC via D-SUB (VGA cable) and "regular" HDMI have more alternatives.) Messing with the "zoom" settings on the TV does not affect the available resolution options on the computer.

    Computer: The following is a printout from one of the graphics adapter option pages. I think it covers most of it. The computer is a Dell.
        INTEL(R) EXTREME GRAPHICS 2 REPORT
        Report Date: 04/17/2011
        Report Time[hr:mm:ss]: 20:18:02
        Driver Version: 6.14.10.4396
        Operating System: Windows XP* Professional, Service Pack 3 (5.1.2600)
        Default Language: English
        DirectX* Version: 9.0
        Physical Memory: 1021 MB
        Minimum Graphics Memory: 1 MB
        Maximum Graphics Memory: 96 MB
        Graphics Memory in Use: 6 MB
        Processor: x86
        Processor Speed: 2593 MHZ
        Vendor ID: 8086
        Device ID: 2572
        Device Revision: 02

        * Accelerator Information *
        Accelerator in Use: Intel(R) 82865G Graphics Controller
        Video BIOS: 2972
        Current Graphics Mode: 1280 by 600 True Color (60 Hz)

        * Devices Connected to the Graphics Accelerator *
        Active Digital Displays: 1

        * Digital Display *
        Monitor Name: Plug and Play Monitor
        Display Type: Digital
        Gamma Value: 2.20
        DDC2 Protocol: Supported
        Maximum Image Size: Horizontal: Not Available, Vertical: Not Available
        Monitor Supported Modes: 1280 by 720 (50 Hz), 1280 by 720 (60 Hz)
        Display Power Management Support:
            Standby Mode: Not Supported
            Suspend Mode: Not Supported
            Active Off Mode: Not Supported

    (Disclaimer: this question was also asked at the Wikipedia Reference Desk some time ago and might show up in a Google search. I got no useful answers there.)

  • Exchange 2003 mail non-delivery (NDR), spam activity? events 7002 & 7004

    - by HighTechGeek
    Windows Server 2003 Small Business Server SP2, Exchange Version 6.5 (Build 7638.2: Service Pack 2). This network has been neglected and has been having email problems for years, and was on many blacklists. I was called in after the server eventually crashed... I got the server back up and running, but email problems persist. Outgoing mail delivery is sporadic. Sometimes the mail goes through, sometimes a delayed delivery report is generated after a day or more, and sometimes it seems to go through but the recipient never receives it. Not sure if spammers are successfully using the server as a relay (see the event entries below, captured after turning on maximum SMTP logging)...

    - User PCs were infected with viruses, and the server was blacklisted on many sites (I used mxtoolbox.com).
    - I have cleaned all the PCs and changed all passwords (including administrator).
    - I have requested removal from all of the blacklists -- most have removed the listing, some take more time.
    - I have set up rDNS pointer records with the ISP (Comcast) -- the lack of them was one reason for some of the blacklistings.
    - I have tested that it's not an open relay using telnet, as described here: www.amset.info/exchange/smtp-openrelay.asp
    - I followed the advice of a Spamhaus & Microsoft article to enable maximum SMTP logging (http://www.spamhaus.org/faq/answers.lasso?section=isp%20spam%20issues#320), which directed me to Microsoft KB article 895853, specifically the part two-thirds down titled: "If mail relay occurs from an account on an Exchange computer that is not configured as an open relay".

    The Application Event Log is filling with this type of activity (Event ID 7002, 7004 & 3018 errors):

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7004
        Date: 1/18/2011
        Time: 7:33:29 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol error log for virtual server ID 1, connection #621. The remote host "212.52.84.180", responded to the SMTP command "rcpt" with "550 #5.1.0 Address rejected [email protected]". The full command sent was "RCPT TO: ". This will probably cause the connection to fail.

    and this:

        Event Type: Warning
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7002
        Date: 1/18/2011
        Time: 7:33:29 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol warning log for virtual server ID 1, connection #620. The remote host "212.52.84.170", responded to the SMTP command "rcpt" with "452 Too many recipients received this hour". The full command sent was "RCPT TO: ". This may cause the connection to fail.

    or a variant of:

        Event Type: Warning
        Event Source: MSExchangeTransport
        Event Category: SMTP Protocol
        Event ID: 7002
        Date: 1/18/2011
        Time: 8:39:21 AM
        User: N/A
        Computer: SERVER
        Description: This is an SMTP protocol warning log for virtual server ID 1, connection #661. The remote host "82.57.200.133", responded to the SMTP command "rcpt" with "421 Service not available - too busy". The full command sent was "RCPT TO: ". This may cause the connection to fail.

    also:

        Event Type: Error
        Event Source: MSExchangeTransport
        Event Category: NDR
        Event ID: 3018
        Date: 1/18/2011
        Time: 9:49:37 AM
        User: N/A
        Computer: SERVER
        Description: A non-delivery report with a status code of 5.4.0 was generated for recipient rfc822;[email protected] (Message-ID ).
        Causes: This message indicates a DNS problem or an IP address configuration problem
        Solution: Check the DNS using nslookup or dnsq. Verify the IP address is in IPv4 literal format.
        Data: 0000: ef 02 04 c0   ï..À

    Any guidance and/or suggestions and/or tests to perform would be greatly appreciated.
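
    Given that the 3018 NDR points at DNS, one low-effort test is to confirm the server itself can resolve recipient domains and that its outbound identity matches its reverse DNS (the names below are placeholders):

        rem From the Exchange server's command prompt:
        rem can we resolve a recipient's MX records at all?
        nslookup -type=mx recipientdomain.com

        rem does our public IP reverse-resolve to the name in the SMTP banner?
        nslookup our.public.ip.address

    A mismatch between the SMTP virtual server's FQDN, its A record, and the Comcast PTR record is a common reason for remote hosts to defer or reject mail, and it is cheap to rule out.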

  • Why do I get a connection error / timeout when using python suds to connect to Microsoft CRM?

    - by Chris R
    When I try to connect to an MS CRM web service using suds/python-ntlm, I am getting a timeout on requests. However, the code that I'm trying to replace -- which calls out to the cURL command-line app to do the same call -- succeeds. Clearly something is different in the way that cURL is sending the command data, but I'll be damned if I know what the difference is. Below are the full details of the various calls. Anyone got any tips? Here's the code that is making the request, followed by the output. The cURL command is below that, and its response follows. Hosts, users, and passwords have been changed to protect the innocent, of course.

        wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL'
        username = r'domain\user.name'
        password = 'userpass'

        from suds.transport.https import WindowsHttpAuthenticated
        from suds.client import Client

        import logging
        logging.basicConfig(level=logging.INFO)
        logging.getLogger('suds.client').setLevel(logging.DEBUG)
        logging.getLogger('suds.transport').setLevel(logging.DEBUG)

        ntlmTransport = WindowsHttpAuthenticated(username=username, password=password)
        metadata_client = Client(wsdl_url, transport=ntlmTransport)

        request = metadata_client.factory.create('RetrieveAttributeRequest')
        request.MetadataId = '00000000-0000-0000-0000-000000000000'
        request.EntityLogicalName = 'opportunity'
        request.LogicalName = 'new_typeofcontact'
        request.RetrieveAsIfPublished = 'false'

        attr = metadata_client.service.Execute(request)
        print attr

    Here's the output:

        DEBUG:suds.client:sending to (http://client.service.host/MSCrmServices/2007/MetadataService.asmx)
        message:
        <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
           <SOAP-ENV:Header/>
           <ns0:Body>
              <ns1:Execute>
                 <ns1:Request xsi:type="ns1:RetrieveAttributeRequest">
                    <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId>
                    <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName>
                    <ns1:LogicalName>new_typeofcontact</ns1:LogicalName>
                    <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished>
                 </ns1:Request>
              </ns1:Execute>
           </ns0:Body>
        </SOAP-ENV:Envelope>
        DEBUG:suds.client:headers = {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml'}
        DEBUG:suds.transport.http:sending:
        URL:http://client.service.host/MSCrmServices/2007/MetadataService.asmx
        HEADERS: {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml', 'Content-type': 'text/xml', 'Soapaction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"'}
        MESSAGE: (identical SOAP envelope to the one above, omitted)

        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (16, 0))
        ---------------------------------------------------------------------------
        URLError                                  Traceback (most recent call last)
        /Users/crose/projects/2366/crm/<ipython console> in <module>()
        /var/folders/nb/nbJAzxR1HbOppPcs6xO+dE+++TY/-Tmp-/python-67186icm.py in <module>()
             19 request.LogicalName = 'new_typeofcontact'
             20 request.RetrieveAsIfPublished = 'false'
             21
        ---> 22 attr = metadata_client.service.Execute(request)
             23 print attr
        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in __call__(self, *args, **kwargs)
            537             return (500, e)
            538         else:
        --> 539             return client.invoke(args, kwargs)
            540
            541     def faults(self):
        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in invoke(self, args, kwargs)
            596                 self.method.name, timer)
            597             timer.start()
        --> 598             result = self.send(msg)
            599             timer.stop()
            600             metrics.log.debug(
        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in send(self, msg)
            621             request = Request(location, str(msg))
            622             request.headers = self.headers()
        --> 623             reply = transport.send(request)
            624             if retxml:
            625                 result = reply.message
        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/https.pyc in send(self, request)
             62     def send(self, request):
             63         self.addcredentials(request)
        ---> 64         return HttpTransport.send(self, request)
             65
             66     def addcredentials(self, request):
        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in send(self, request)
             75             request.headers.update(u2request.headers)
             76             log.debug('sending:\n%s', request)
        ---> 77             fp = self.u2open(u2request)
             78             self.getcookies(fp, u2request)
             79             result = Reply(200, fp.headers.dict, fp.read())
        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in u2open(self, u2request)
            116             return url.open(u2request)
            117         else:
        --> 118             return url.open(u2request, timeout=tm)
            119
            120     def u2opener(self):
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in open(self, fullurl, data, timeout)
            381             req = meth(req)
            382
        --> 383         response = self._open(req, data)
            384
            385         # post-process response
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _open(self, req, data)
            399         protocol = req.get_type()
            400         result = self._call_chain(self.handle_open, protocol, protocol +
        --> 401                                   '_open', req)
            402         if result:
            403             return result
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _call_chain(self, chain, kind, meth_name, *args)
            359             func = getattr(handler, meth_name)
            360
        --> 361             result = func(*args)
            362         if result is not None:
            363             return result
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in http_open(self, req)
           1128
           1129     def http_open(self, req):
        -> 1130         return self.do_open(httplib.HTTPConnection, req)
           1131
           1132     http_request = AbstractHTTPHandler.do_request_
        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in do_open(self, http_class, req)
           1103                 r = h.getresponse()
           1104             except socket.error, err: # XXX what error?
        -> 1105                 raise URLError(err)
           1106
           1107         # Pick apart the HTTPResponse object to get the addinfourl
        URLError: <urlopen error [Errno 60] Operation timed out>

    The cURL command is:

        /opt/local/bin/curl --ntlm -u "domain\user.name:userpass" -k -d @- \
            -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.1)" \
            -H "Connection: Keep-Alive" \
            -H "Content-Type: text/xml; charset=utf-8" \
            -H "SOAPAction: http://schemas.microsoft.com/crm/2007/WebServices/Execute" \
            https://client.service.host/MSCrmServices/2007/MetadataService.asmx

    The data that is piped to that cURL command:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Header>
            <CrmAuthenticationToken xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <AuthenticationType xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">0</AuthenticationType>
              <CrmTicket xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes"></CrmTicket>
              <OrganizationName xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">CMIFS</OrganizationName>
              <CallerId xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">00000000-0000-0000-0000-000000000000</CallerId>
            </CrmAuthenticationToken>
          </soap:Header>
          <soap:Body>
            <Execute xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <Request xsi:type="RetrieveAttributeRequest">
                <MetadataId>00000000-0000-0000-0000-000000000000</MetadataId>
                <EntityLogicalName>opportunity</EntityLogicalName>
                <LogicalName>new_typeofcontact</LogicalName>
                <RetrieveAsIfPublished>false</RetrieveAsIfPublished>
              </Request>
            </Execute>
          </soap:Body>
        </soap:Envelope>

    Here's the response:

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <ExecuteResponse xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <Response xsi:type="RetrieveAttributeResponse">
                <AttributeMetadata xsi:type="PicklistAttributeMetadata">
                  <MetadataId>101346cf-a6af-4eb4-a4bf-9c3c6bbd6582</MetadataId>
                  <SchemaName>New_TypeofContact</SchemaName>
                  <LogicalName>new_typeofcontact</LogicalName>
                  <EntityLogicalName>opportunity</EntityLogicalName>
                  <AttributeType>
                    <Value>Picklist</Value>
                  </AttributeType>
                  <!-- stuff here -->
                </AttributeMetadata>
              </Response>
            </ExecuteResponse>
          </soap:Body>
        </soap:Envelope>
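
    One detail worth noticing in the debug output above: suds is POSTing to http://client.service.host/... while the working cURL call goes to https://... -- suds uses the endpoint address advertised inside the WSDL, which here appears to be plain HTTP. A quick way to test whether that alone explains the timeout (purely a diagnostic sketch, reusing the credentials from the question):

        # Does the endpoint suds chose even answer? This mimics suds' target...
        curl -v --ntlm -u "domain\user.name:userpass" http://client.service.host/MSCrmServices/2007/MetadataService.asmx

        # ...versus the endpoint the working call uses
        curl -v -k --ntlm -u "domain\user.name:userpass" https://client.service.host/MSCrmServices/2007/MetadataService.asmx

    If the HTTP variant times out the same way the suds call does, overriding the service location on the suds Client (it accepts a location option) so that it posts to the HTTPS URL would be the thing to try.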

  • ant ftp doesn't download files in subdirectories

    - by Kristof Neirynck
    Hello, I'm trying to download files in subdirectories from an FTP server with Ant. The exact set of files is known. Some of them are in subdirectories. Ant only seems to download the ones in the root directory. It does work if I download all files without listing them. The first ftp action should do the exact same thing as the second. Instead it complains about "Hidden files" and seems to prefix the paths with "\\". Does anyone know what's wrong here? Is this a bug in commons-net?

    build.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <project name="example" default="example" basedir=".">
            <taskdef name="ftp" classname="org.apache.tools.ant.taskdefs.optional.net.FTP" />
            <target name="example">
                <!-- 2 files retrieved -->
                <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
                    <fileset dir="downloads" casesensitive="false" includes="root1.txt,root2.txt,a/a.txt,a/b/ab.txt,c/c.txt" />
                </ftp>
                <!-- 5 files retrieved -->
                <ftp action="get" verbose="true" server="localhost" userid="example" password="example">
                    <fileset dir="downloads" casesensitive="false" includes="**/*" />
                </ftp>
            </target>
        </project>

    run_ant.bat:

        @ECHO OFF
        PUSHD %~dp0
        SET CLASSPATH=
        SET ANT_HOME=C:\apache-ant-1.8.0
        SET ant=%ANT_HOME%\bin\ant.bat
        SET antoptions=-nouserlib -noclasspath -d
        SET ftpjars=^
         -lib lib\jakarta-oro-2.0.8.jar ^
         -lib lib\commons-net-2.0.jar
        CALL %ant% %antoptions% %ftpjars% > output.txt
        POPD

    output.txt:

        Unable to locate tools.jar. Expected to find it in C:\PROGRA~1\Java\jre6\lib\tools.jar
        Apache Ant version 1.8.0 compiled on February 1 2010
        Trying the default build file: build.xml
        Buildfile: G:\ftp\build.xml
        Adding reference: ant.PropertyHelper
        Detected Java version: 1.6 in: C:\PROGRA~1\Java\jre6
        Detected OS: Windows 7
        Adding reference: ant.ComponentHelper
        Setting ro project property: ant.file -> G:\ftp\build.xml
        Setting ro project property: ant.file.type -> file
        Adding reference: ant.projectHelper
        Adding reference: ant.parsing.context
        Adding reference: ant.targets
        parsing buildfile G:\ftp\build.xml with URI = file:/G:/ftp/build.xml
        Setting ro project property: ant.project.name -> example
        Adding reference: example
        Setting ro project property: ant.project.default-target -> example
        Setting ro project property: ant.file.example -> G:\ftp\build.xml
        Setting ro project property: ant.file.type.example -> file
        Project base dir set to: G:\ftp
        +Target:
        +Target: example
        Adding reference: ant.LocalProperties
        parsing buildfile jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml with URI = jar:file:/C:/apache-ant-1.8.0/lib/ant.jar!/org/apache/tools/ant/antlib.xml from a zip file
        Class org.apache.tools.ant.taskdefs.optional.net.FTP loaded from parent loader (parentFirst)
        Setting ro project property: ant.project.invoked-targets -> example
        Attempting to create object of type org.apache.tools.ant.helper.DefaultExecutor
        Adding reference: ant.executor
        Build sequence for target(s) `example' is [example]
        Complete build sequence is [example, ]
        example:
          [ftp] Opening FTP connection to localhost
          [ftp] connected
          [ftp] logging in to FTP server
          [ftp] login succeeded
          [ftp] getting files
        fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [root1.txt, root2.txt, a/a.txt, a/b/ab.txt] excludes: [] }
        will try to cd to A where a directory called a exists
        testing case sensitivity, attempting to cd to A
        remote system is case sensitive : false
          [ftp] Hidden file \\a\b\ assumed to not be a symlink.
        filelist map used in listing files
        filelist map used in listing files
          [ftp] Hidden file \\a\b\ assumed to not be a symlink.
        filelist map used in listing files
        filelist map used in listing files
        filelist map used in listing files
        filelist map used in listing files
        filelist map used in listing files
          [ftp] Hidden file \\a\a.txt assumed to not be a symlink.
        filelist map used in listing files
          [ftp] Hidden file \\a\a.txt assumed to not be a symlink.
        filelist map used in listing files
        filelist map used in listing files
        filelist map used in listing files
          [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt
          [ftp] File G:\ftp\downloads\root1.txt copied from localhost
          [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt
          [ftp] File G:\ftp\downloads\root2.txt copied from localhost
          [ftp] 2 files retrieved
          [ftp] disconnecting
          [ftp] Opening FTP connection to localhost
          [ftp] connected
          [ftp] logging in to FTP server
          [ftp] login succeeded
          [ftp] getting files
        fileset: Setup scanner in dir G:\ftp\downloads with patternSet{ includes: [**/*] excludes: [] }
        will try to cd to A where a directory called a exists
        testing case sensitivity, attempting to cd to A
        remote system is case sensitive : false
          [ftp] transferring a\a.txt to G:\ftp\downloads\a\a.txt
          [ftp] File G:\ftp\downloads\a\a.txt copied from localhost
          [ftp] transferring a\b\ab.txt to G:\ftp\downloads\a\b\ab.txt
          [ftp] File G:\ftp\downloads\a\b\ab.txt copied from localhost
          [ftp] transferring c\c.txt to G:\ftp\downloads\c\c.txt
          [ftp] File G:\ftp\downloads\c\c.txt copied from localhost
          [ftp] transferring root1.txt to G:\ftp\downloads\root1.txt
          [ftp] File G:\ftp\downloads\root1.txt copied from localhost
          [ftp] transferring root2.txt to G:\ftp\downloads\root2.txt
          [ftp] File G:\ftp\downloads\root2.txt copied from localhost
          [ftp] 5 files retrieved
          [ftp] disconnecting
        BUILD SUCCESSFUL
        Total time: 0 seconds

    server log:

        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message...
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected])
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example
        (000153) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS *******
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory.
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,232
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list.
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD
        (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory.
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a/b (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,233 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,234 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //\\a\b\ (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a/b" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD //a (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/a" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,235 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,236 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root1.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,237 (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> RETR root2.txt (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> QUIT (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> 221 Goodbye (000153) 7/05/2010 19:46:12 - example (127.0.0.1)> disconnected. (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> Connected, sending welcome message... (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-FileZilla Server version 0.9.34 beta (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220-written by Tim Kosse ([email protected]) (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 220 Please visit http://sourceforge.net/projects/filezilla/ (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> USER example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> 331 Password required for example (000154) 7/05/2010 19:46:12 - (not logged in) (127.0.0.1)> PASS ******* (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 230 Logged on (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> TYPE I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Type set to I (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> SYST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 215 UNIX emulated by FileZilla (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,239 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD A (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/A" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PWD (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 257 "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> PORT 127,0,0,1,207,240 (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:12 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD a (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,241 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD //a/ (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD b (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/a/b" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,242 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/a" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD c (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/c" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,243 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> LIST (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for directory list. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CDUP (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 CDUP successful. "/" is current directory. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> CWD / (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 250 CWD successful. "/" is current directory. 
(000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,244 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/a.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,245 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR a/b/ab.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,246 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR c/c.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,247 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root1.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> PORT 127,0,0,1,207,248 (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 200 Port command successful (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> RETR root2.txt (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 150 Opening data channel for file transfer. (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 226 Transfer OK (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> QUIT (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> 221 Goodbye (000154) 7/05/2010 19:46:13 - example (127.0.0.1)> disconnected.
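One possible workaround, assuming the wildcard form keeps working as shown above, is to stop enumerating nested paths in a single fileset and instead run one <ftp> action per remote directory using the task's remotedir attribute, so commons-net only ever lists the directory it has changed into. A minimal sketch (server, credentials and file names copied from the build file above; untested against this exact FileZilla setup):

<!-- files in the FTP root -->
<ftp action="get" verbose="true" server="localhost" userid="example" password="example">
  <fileset dir="downloads" casesensitive="false" includes="root1.txt,root2.txt" />
</ftp>
<!-- files under /a (and /a/b) -->
<ftp action="get" verbose="true" server="localhost" userid="example" password="example" remotedir="a">
  <fileset dir="downloads/a" casesensitive="false" includes="a.txt,b/ab.txt" />
</ftp>
<!-- files under /c -->
<ftp action="get" verbose="true" server="localhost" userid="example" password="example" remotedir="c">
  <fileset dir="downloads/c" casesensitive="false" includes="c.txt" />
</ftp>

If b/ab.txt still trips the hidden-file check, splitting it into its own action with remotedir="a/b" would be the next thing to try.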

    Read the article

• Socket operation on nonsocket or bad file descriptor

    - by Magn3s1um
I'm writing a pthread server which takes requests from clients and sends them back a bunch of .ppm files. Everything seems to go well, but sometimes, with just one client connected, trying to read from the file descriptor (for the file) fails with "Bad file descriptor". This doesn't make sense, since my int fd isn't -1, and the file most certainly exists. Other times, I get this "Socket operation on non-socket" error. This is weird because at other times it doesn't give me this error and everything works fine. When trying to connect multiple clients, for some reason it will only send correctly to one, and then the other client gets the bad file descriptor or "non-socket" error, even though both threads are processing the same messages and run the same routines. Anyone have an idea why? Here's the code that is giving me that error:

while(mqueue.head != mqueue.tail && count < dis_m){
    printf("Sending to client %s: %s\n", pointer->id, pointer->message);
    int fd;
    fd = open(pointer->message, O_RDONLY);
    char buf[58368];
    int bytesRead;
    printf("This is fd %d\n", fd);
    bytesRead=read(fd,buf,58368);
    send(pointer->socket,buf,bytesRead,0);
    perror("Error:\n");
    fflush(stdout);
    close(fd);
    mqueue.mcount--;
    mqueue.head = mqueue.head->next;
    free(pointer->message);
    free(pointer);
    pointer = mqueue.head;
    count++;
}
printf("Sending %s\n", pointer->message);
int fd;
fd = open(pointer->message, O_RDONLY);
printf("This is fd %d\n", fd);
printf("I am hhere2\n");
char buf[58368];
int bytesRead;
bytesRead=read(fd,buf,58368);
send(pointer->socket,buf,bytesRead,0);
perror("Error:\n");
close(fd);
mqueue.mcount--;
if(mqueue.head != mqueue.tail){
    mqueue.head = mqueue.head->next;
}
else{
    mqueue.head->next = malloc(sizeof(struct message));
    mqueue.head = mqueue.head->next;
    mqueue.head->next = malloc(sizeof(struct message));
    mqueue.tail = mqueue.head->next;
    mqueue.head->message = NULL;
}
free(pointer->message);
free(pointer);
pthread_mutex_unlock(&numm);
pthread_mutex_unlock(&circ);
pthread_mutex_unlock(&slots);

The messages for both threads are the same, being of the form ./path/imageXX.ppm, where XX is the number that should go to the client. The file size of each image is 58368 bytes. Sometimes this code hangs on the read and stops execution. I don't know why that happens either, because the file descriptor comes back as valid. Thanks in advance.

Edit: Here's some sample output:

Sending to client a: ./support/images/sw90.ppm
This is fd 4
Error:
: Socket operation on non-socket
Sending to client a: ./support/images/sw91.ppm
This is fd 4
Error:
: Socket operation on non-socket
Sending ./support/images/sw92.ppm
This is fd 4
I am hhere2
Error:
: Socket operation on non-socket
My dispatcher has defeated evil

Sample with 2 clients (client b was serviced first):

Sending to client b: ./support/images/sw87.ppm
This is fd 6
Error:
: Success
Sending to client b: ./support/images/sw88.ppm
This is fd 6
Error:
: Success
Sending to client b: ./support/images/sw89.ppm
This is fd 6
Error:
: Success
This is fd 6
Error:
: Bad file descriptor
Sending to client a: ./support/images/sw85.ppm
This is fd 6
Error:

As you can see, whoever is serviced first in this instance can open the files, but the second client cannot.

Edit2: Full code. Sorry, it's pretty long and terribly formatted.
#include <netinet/in.h> #include <netinet/in.h> #include <netdb.h> #include <arpa/inet.h> #include <sys/types.h> #include <sys/socket.h> #include <errno.h> #include <stdio.h> #include <unistd.h> #include <pthread.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include "ring.h" /* Version 1 Here is what is implemented so far: The threads are created from the arguments specified (number of threads that is) The server will lock and update variables based on how many clients are in the system and such. The socket that is opened when a new client connects, must be passed to the threads. To do this, we need some sort of global array. I did this by specifying an int client and main_pool_busy, and two pointers poolsockets and nonpoolsockets. My thinking on this was that when a new client enters the system, the server thread increments the variable client. When a thread is finished with this client (after it sends it the data), the thread will decrement client and close the socket. HTTP servers act this way sometimes (they terminate the socket as soon as one transmission is sent). *Note down at bottom After the server portion increments the client counter, we must open up a new socket (denoted by new_sd) and get this value to the appropriate thread. To do this, I created global array poolsockets, which will hold all the socket descriptors for our pooled threads. The server portion gets the new socket descriptor, and places the value in the first spot of the array that has a 0. We only place a value in this array IF: 1. The variable main_pool_busy < worknum (If we have more clients in the system than in our pool, it doesn't mean we should always create a new thread. At the end of this, the server signals on the condition variable clientin that a new client has arrived. In our pooled thread, we then must walk this array and check the array until we hit our first non-zero value. This is the socket we will give to that thread. The thread then changes the array to have a zero here. What if our all threads in our pool our busy? If this is the case, then we will know it because our threads in this pool will increment main_pool_busy by one when they are working on a request and decrement it when they are done. If main_pool_busy >= worknum, then we must dynamically create a new thread. Then, we must realloc the size of our nonpoolsockets array by 1 int. We then add the new socket descriptor to our pool. Here's what we need to figure out: NOTE* Each worker should generate 100 messages which specify the worker thread ID, client socket descriptor and a copy of the client message. Additionally, each message should include a message number, starting from 0 and incrementing for each subsequent message sent to the same client. I don't know how to keep track of how many messages were to the same client. Maybe we shouldn't close the socket descriptor, but rather keep an array of structs for each socket that includes how many messages they have been sent. Then, the server adds the struct, the threads remove it, then the threads add it back once they've serviced one request (unless the count is 100). ------------------------------------------------------------- CHANGES Version 1 ---------- NONE: this is the first version. 
*/ #define MAXSLOTS 30 #define dis_m 15 //problems with dis_m ==1 //Function prototypes void inc_clients(); void init_mutex_stuff(pthread_t*, pthread_t*); void *threadpool(void *); void server(int); void add_to_socket_pool(int); void inc_busy(); void dec_busy(); void *dispatcher(); void create_message(long, int, int, char *, char *); void init_ring(); void add_to_ring(char *, char *, int, int, int); int socket_from_string(char *); void add_to_head(char *); void add_to_tail(char *); struct message * reorder(struct message *, struct message *, int); int get_threadid(char *); void delete_socket_messages(int); struct message * merge(struct message *, struct message *, int); int get_request(char *, char *, char*); ///////////////////// //Global mutexes and condition variables pthread_mutex_t startservice; pthread_mutex_t numclients; pthread_mutex_t pool_sockets; pthread_mutex_t nonpool_sockets; pthread_mutex_t m_pool_busy; pthread_mutex_t slots; pthread_mutex_t numm; pthread_mutex_t circ; pthread_cond_t clientin; pthread_cond_t m; /////////////////////////////////////// //Global variables int clients; int main_pool_busy; int * poolsockets, nonpoolsockets; int worknum; struct ring mqueue; /////////////////////////////////////// int main(int argc, char ** argv){ //error handling if not enough arguments to program if(argc != 3){ printf("Not enough arguments to server: ./server portnum NumThreadsinPool\n"); _exit(-1); } //Convert arguments from strings to integer values int port = atoi(argv[1]); worknum = atoi(argv[2]); //Start server portion server(port); } /////////////////////////////////////////////////////////////////////////////////////////////// //The listen server thread///////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////////// void server(int port){ int sd, new_sd; struct sockaddr_in name, cli_name; int sock_opt_val = 1; int cli_len; pthread_t threads[worknum]; //create our pthread id array pthread_t dis[1]; //create our dispatcher array (necessary to create thread) init_mutex_stuff(threads, dis); //initialize mutexes and stuff //Server setup /////////////////////////////////////////////////////// if ((sd = socket (AF_INET, SOCK_STREAM, 0)) < 0) { perror("(servConn): socket() error"); _exit (-1); } if (setsockopt (sd, SOL_SOCKET, SO_REUSEADDR, (char *) &sock_opt_val, sizeof(sock_opt_val)) < 0) { perror ("(servConn): Failed to set SO_REUSEADDR on INET socket"); _exit (-1); } name.sin_family = AF_INET; name.sin_port = htons (port); name.sin_addr.s_addr = htonl(INADDR_ANY); if (bind (sd, (struct sockaddr *)&name, sizeof(name)) < 0) { perror ("(servConn): bind() error"); _exit (-1); } listen (sd, 5); //End of server Setup ////////////////////////////////////////////////// for (;;) { cli_len = sizeof (cli_name); new_sd = accept (sd, (struct sockaddr *) &cli_name, &cli_len); printf ("Assigning new socket descriptor: %d\n", new_sd); inc_clients(); //New client has come in, increment clients add_to_socket_pool(new_sd); //Add client to the pool of sockets if (new_sd < 0) { perror ("(servConn): accept() error"); _exit (-1); } } pthread_exit(NULL); //Quit } //Adds the new socket to the array designated for pthreads in the pool void add_to_socket_pool(int socket){ pthread_mutex_lock(&m_pool_busy); //Lock so that we can check main_pool_busy int i; //If not all our main pool is busy, then allocate to one of them if(main_pool_busy < worknum){ pthread_mutex_unlock(&m_pool_busy); //unlock busy, we no 
longer need to hold it pthread_mutex_lock(&pool_sockets); //Lock the socket pool array so that we can edit it without worry for(i = 0; i < worknum; i++){ //Find a poolsocket that is -1; then we should put the real socket there. This value will be changed back to -1 when the thread grabs the sockfd if(poolsockets[i] == -1){ poolsockets[i] = socket; pthread_mutex_unlock(&pool_sockets); //unlock our pool array, we don't need it anymore inc_busy(); //Incrememnt busy (locks the mutex itself) pthread_cond_signal(&clientin); //Signal first thread waiting on a client that a client needs to be serviced break; } } } else{ //Dynamic thread creation goes here pthread_mutex_unlock(&m_pool_busy); } } //Increments the client number. If client number goes over worknum, we must dynamically create new pthreads void inc_clients(){ pthread_mutex_lock(&numclients); clients++; pthread_mutex_unlock(&numclients); } //Increments busy void inc_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy++; pthread_mutex_unlock(&m_pool_busy); } //Initialize all of our mutexes at the beginning and create our pthreads void init_mutex_stuff(pthread_t * threads, pthread_t * dis){ pthread_mutex_init(&startservice, NULL); pthread_mutex_init(&numclients, NULL); pthread_mutex_init(&pool_sockets, NULL); pthread_mutex_init(&nonpool_sockets, NULL); pthread_mutex_init(&m_pool_busy, NULL); pthread_mutex_init(&circ, NULL); pthread_cond_init (&clientin, NULL); main_pool_busy = 0; poolsockets = malloc(sizeof(int)*worknum); int threadreturn; //error checking variables long i = 0; //Loop and create pthreads for(i; i < worknum; i++){ threadreturn = pthread_create(&threads[i], NULL, threadpool, (void *) i); poolsockets[i] = -1; if(threadreturn){ perror("Thread pool created unsuccessfully"); _exit(-1); } } pthread_create(&dis[0], NULL, dispatcher, NULL); } ////////////////////////////////////////////////////////////////////////////////////////// /////////Main pool routines ///////////////////////////////////////////////////////////////////////////////////////// void dec_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy--; pthread_mutex_unlock(&m_pool_busy); } void dec_clients(){ pthread_mutex_lock(&numclients); clients--; pthread_mutex_unlock(&numclients); } //This is what our threadpool pthreads will be running. void *threadpool(void * threadid){ long id = (long) threadid; //Id of this thread int i; int socket; int counter = 0; //Try and gain access to the next client that comes in and wait until server signals that a client as arrived while(1){ pthread_mutex_lock(&startservice); //lock start service (required for cond wait) pthread_cond_wait(&clientin, &startservice); //wait for signal from server that client exists pthread_mutex_unlock(&startservice); //unlock mutex. 
pthread_mutex_lock(&pool_sockets); //Lock the pool socket so we can get the socket fd unhindered/interrupted for(i = 0; i < worknum; i++){ if(poolsockets[i] != -1){ socket = poolsockets[i]; poolsockets[i] = -1; pthread_mutex_unlock(&pool_sockets); } } printf("Thread #%d is past getting the socket\n", id); int incoming = 1; while(counter < 100 && incoming != 0){ char buffer[512]; bzero(buffer,512); int startcounter = 0; incoming = read(socket, buffer, 512); if(buffer[0] != 0){ //client ID:priority:request:arguments char id[100]; long prior; char request[100]; char arg1[100]; char message[100]; char arg2[100]; char * point; point = strtok(buffer, ":"); strcpy(id, point); point = strtok(NULL, ":"); prior = atoi(point); point = strtok(NULL, ":"); strcpy(request, point); point = strtok(NULL, ":"); strcpy(arg1, point); point = strtok(NULL, ":"); if(point != NULL){ strcpy(arg2, point); } int fd; if(strcmp(request, "start_movie") == 0){ int count = 1; while(count <= 100){ char temp[10]; snprintf(temp, 50, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s to %s\n", message, id); count++; add_to_ring(message, id, prior, counter, socket); //Adds our created message to the ring counter++; } printf("I'm out of the loop\n"); } else if(strcmp(request, "seek_movie") == 0){ int count = atoi(arg2); while(count <= 100){ char temp[10]; snprintf(temp, 10, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s\n", message); count++; } } //create_message(id, socket, counter, buffer, message); //Creates our message from the input from the client. Stores it in buffer } else{ delete_socket_messages(socket); break; } } counter = 0; close(socket);//Zero out counter again } dec_clients(); //client serviced, decrement clients dec_busy(); //thread finished, decrement busy } //Creates a message void create_message(long threadid, int socket, int counter, char * buffer, char * message){ snprintf(message, strlen(buffer)+15, "%d:%d:%d:%s", threadid, socket, counter, buffer); } //Gets the socket from the message string (maybe I should just pass in the socket to another method) int socket_from_string(char * message){ char * substr1 = strstr(message, ":"); char * substr2 = substr1; substr2++; int occurance = strcspn(substr2, ":"); char sock[10]; strncpy(sock, substr2, occurance); return atoi(sock); } //Adds message to our ring buffer's head void add_to_head(char * message){ printf("Adding to head of ring\n"); mqueue.head->message = malloc(strlen(message)+1); //Allocate space for message strcpy(mqueue.head->message, message); //copy bytes into allocated space } //Adds our message to our ring buffer's tail void add_to_tail(char * message){ printf("Adding to tail of ring\n"); mqueue.tail->message = malloc(strlen(message)+1); //allocate space for message strcpy(mqueue.tail->message, message); //copy bytes into allocated space mqueue.tail->next = malloc(sizeof(struct message)); //allocate space for the next message struct } //Adds a message to our ring void add_to_ring(char * message, char * id, int prior, int mnum, int socket){ //printf("This is message %s:" , message); pthread_mutex_lock(&circ); //Lock the ring buffer pthread_mutex_lock(&numm); //Lock the message count (will need this to make sure we can't fill the buffer over the max slots) if(mqueue.head->message == NULL){ add_to_head(message); //Adds it to head mqueue.head->socket = 
socket; //Set message socket mqueue.head->priority = prior; //Set its priority (thread id) mqueue.head->mnum = mnum; //Set its message number (used for sorting) mqueue.head->id = malloc(sizeof(id)); strcpy(mqueue.head->id, id); } else if(mqueue.tail->message == NULL){ //This is the problem for dis_m 1 I'm pretty sure add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } else{ mqueue.tail->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.tail->next; add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } mqueue.mcount++; pthread_mutex_unlock(&circ); if(mqueue.mcount >= dis_m){ pthread_mutex_unlock(&numm); pthread_cond_signal(&m); } else{ pthread_mutex_unlock(&numm); } printf("out of add to ring\n"); fflush(stdout); } ////////////////////////////////// //Dispatcher routines ///////////////////////////////// void *dispatcher(){ init_ring(); while(1){ pthread_mutex_lock(&slots); pthread_cond_wait(&m, &slots); pthread_mutex_lock(&numm); pthread_mutex_lock(&circ); printf("Dispatcher to the rescue!\n"); mqueue.head = reorder(mqueue.head, mqueue.tail, mqueue.mcount); //printf("This is the head %s\n", mqueue.head->message); //printf("This is the tail %s\n", mqueue.head->message); fflush(stdout); struct message * pointer = mqueue.head; int count = 0; while(mqueue.head != mqueue.tail && count < dis_m){ printf("Sending to client %s: %s\n", pointer->id, pointer->message); int fd; fd = open(pointer->message, O_RDONLY); char buf[58368]; int bytesRead; printf("This is fd %d\n", fd); bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); fflush(stdout); close(fd); mqueue.mcount--; mqueue.head = mqueue.head->next; free(pointer->message); free(pointer); pointer = mqueue.head; count++; } printf("Sending %s\n", pointer->message); int fd; fd = open(pointer->message, O_RDONLY); printf("This is fd %d\n", fd); printf("I am hhere2\n"); char buf[58368]; int bytesRead; bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); close(fd); mqueue.mcount--; if(mqueue.head != mqueue.tail){ mqueue.head = mqueue.head->next; } else{ mqueue.head->next = malloc(sizeof(struct message)); mqueue.head = mqueue.head->next; mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.head->message = NULL; } free(pointer->message); free(pointer); pthread_mutex_unlock(&numm); pthread_mutex_unlock(&circ); pthread_mutex_unlock(&slots); printf("My dispatcher has defeated evil\n"); } } void init_ring(){ mqueue.head = malloc(sizeof(struct message)); mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.mcount = 0; } struct message * reorder(struct message * begin, struct message * end, int num){ //printf("I am reordering for size %d\n", num); fflush(stdout); int i; if(num == 1){ //printf("Begin: %s\n", begin->message); begin->next = NULL; return begin; } else{ struct message * left = begin; struct message * right; int middle = num/2; for(i = 1; i < middle; i++){ left = left->next; } right = left -> next; left -> next = NULL; //printf("Begin: %s\nLeft: %s\nright: %s\nend:%s\n", begin->message, left->message, right->message, end->message); left = reorder(begin, left, middle); if(num%2 != 0){ right = reorder(right, end, middle+1); } else{ right = 
reorder(right, end, middle); } return merge(left, right, num); } } struct message * merge(struct message * left, struct message * right, int num){ //printf("I am merginging! left: %s %d, right: %s %dnum: %d\n", left->message,left->priority, right->message, right->priority, num); struct message * start, * point; int lenL= 0; int lenR = 0; int flagL = 0; int flagR = 0; int count = 0; int middle1 = num/2; int middle2; if(num%2 != 0){ middle2 = middle1+1; } else{ middle2 = middle1; } while(lenL < middle1 && lenR < middle2){ count++; //printf("In here for count %d\n", count); if(lenL == 0 && lenR == 0){ if(left->priority < right->priority){ start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ start = right; point = right; right = right->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ ////printf("This is where we are\n"); start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ start = right; point = right; right = right->next; point->next = NULL; lenR++; } } } else{ if(left->priority < right->priority){ point->next = left; left = left->next; //move the left pointer point = point->next; point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ point->next = left; //set our enum; left = left->next; point = point->next;//move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } } } if(lenL == middle1){ flagL = 1; break; } if(lenR == middle2){ flagR = 1; break; } } if(flagL == 1){ point->next = right; point = point->next; for(lenR; lenR< middle2-1; lenR++){ point = point->next; } point->next = NULL; mqueue.tail = point; } else{ point->next = left; point = point->next; for(lenL; lenL< middle1-1; lenL++){ point = point->next; } point->next = NULL; mqueue.tail = point; } //printf("This is the start %s\n", start->message); //printf("This is mqueue.tail %s\n", mqueue.tail->message); return start; } void delete_socket_messages(int a){ }
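As a side note on the diagnostics above: perror() prints whatever errno currently holds, even when the preceding call succeeded (errno is only set on failure, never cleared on success), so the "Error:" lines printed after a successful send() can report a stale, misleading message such as "Socket operation on non-socket" — the "Error: : Success" lines in the sample output show the same effect. A minimal sketch of the transfer with return values actually checked; send_frame is a hypothetical helper, and the 58368-byte frame size is taken from the code above:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: send one .ppm frame, reporting errors only
   when a call actually fails (perror after a success prints stale errno). */
static int send_frame(int sock, const char *path)
{
    char buf[58368];
    int fd = open(path, O_RDONLY);
    if (fd < 0) {                      /* open failed: errno is valid here */
        fprintf(stderr, "open %s: %s\n", path, strerror(errno));
        return -1;
    }
    ssize_t n = read(fd, buf, sizeof buf);
    close(fd);
    if (n < 0) {
        perror("read");
        return -1;
    }
    ssize_t off = 0;
    while (off < n) {                  /* send() may write fewer bytes than asked */
        ssize_t sent = send(sock, buf + off, (size_t)(n - off), 0);
        if (sent < 0) {
            perror("send");            /* only now is errno meaningful */
            return -1;
        }
        off += sent;
    }
    return 0;
}

Checking return values this way also distinguishes a genuinely bad fd (open or read failing) from a socket-side failure, which the unconditional perror() calls in the dispatcher cannot do.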

    Read the article

  • Getting Selected Dropdown content to show in a form-generated email

    - by fmz
    I have a small contact form: <form method="post" action="contact.php" name="contactform" id="contactform"> <fieldset> <legend>Please fill in the following form to contact us</legend> <label for="name"><span class="required">*</span> Your Name</label> <input name="name" type="text" id="name" size="30" value="" /> <br /> <label for="company"><span class="required">*</span> Company</label> <input name="company" type="text" id="name" size="30" value="" /> <br /> <label for="email"><span class="required">*</span> Email</label> <input name="email" type="text" id="email" size="30" value="" /> <br /> <label for="phone"><span class="required">*</span> Phone</label> <input name="phone" type="text" id="phone" size="30" value="" /> <br /> <label for="purpose"><span class="required">*</span> Purpose</label> <select id="purpose" style="width: 300px; height:35px;"> <option value="I am interested in your services">I am interested in your services!</option> <option value="I am interested in a partnership">I am interested in a partnership!</option> <option value="I am interested in a job">I am interested in a job!</option> </select> <br /> <label for=comments><span class="required">*</span> Comments</label> <textarea name="comments" cols="40" rows="3" id="comments" style="width: 350px;"></textarea> <p><span class="required">*</span> Please help us control spam.</p> <label for=verify accesskey=V>&nbsp;&nbsp;&nbsp;3 + 1 =</label> <input name="verify" type="text" id="verify" size="4" value="" style="width: 30px;" /><br /><br /> <input type="submit" class="submit" id="submit" value="Submit" /> </fieldset> </form> I want to send the results of the form in a php generated email. Everything is coming through except the selected contents of the "purpose" drop down. Here is the PHP: <?php if(!$_POST) exit; $name = $_POST['name']; $company = $_POST['company']; $email = $_POST['email']; $phone = $_POST['phone']; $purpose = $_POST['purpose']; $comments = $_POST['comments']; $verify = $_POST['verify']; if(trim($name) == '') { echo '<div class="error_message">Attention! You must enter your name.</div>'; exit(); } else if(trim($company) == '') { echo '<div class="error_message">Attention! Please enter your company name.</div>'; exit(); } else if(trim($email) == '') { echo '<div class="error_message">Attention! Please enter a valid email address.</div>'; exit(); } else if(trim($phone) == '') { echo '<div class="error_message">Attention! Please enter a valid phone number.</div>'; exit(); } else if(!isEmail($email)) { echo '<div class="error_message">Attention! You have enter an invalid e-mail address, try again.</div>'; exit(); } if(trim($comments) == '') { echo '<div class="error_message">Attention! Please enter your message.</div>'; exit(); } else if(trim($verify) == '') { echo '<div class="error_message">Attention! Please enter the verification number.</div>'; exit(); } else if(trim($verify) != '4') { echo '<div class="error_message">Attention! The verification number you entered is incorrect.</div>'; exit(); } if($error == '') { if(get_magic_quotes_gpc()) { $comments = stripslashes($comments); } // Configuration option. // Enter the email address that you want to emails to be sent to. // Example $address = "[email protected]"; $address = "[email protected]"; // Configuration option. // i.e. The standard subject will appear as, "You've been contacted by John Doe." // Example, $e_subject = '$name . ' has contacted you via Your Website.'; $e_subject = 'You\'ve been contacted by ' . $name . '.'; // Configuration option. 
// You can change this if you feel that you need to. // Developers, you may wish to add more fields to the form, in which case you must be sure to add them here. $e_body = "You have been contacted by $name.\r\n\n"; $e_content = "Comments: \"$comments\"\r\n\n"; $e_company = "Company: $company\r\n\n"; $e_purpose = "Reason for contact: $purpose\r\n"; $e_reply = "You can contact $name via email, $email or via phone $phone"; $msg = $e_body . $e_content . $e_company . $e_purpose . $e_reply; if(mail($address, $e_subject, $msg, "From: $email\r\nReply-To: $email\r\nReturn-Path: $email\r\n")) { // Email has sent successfully, echo a success page. echo "<fieldset>"; echo "<div id='success_page'>"; echo "<h1>Email Sent Successfully.</h1>"; echo "<p>Thank you <strong>$name</strong>, your message has been submitted to us.</p>"; echo "</div>"; echo "</fieldset>"; } else { echo 'ERROR!'; } } function isEmail($email) { // Email address verification, do not edit. return(preg_match("/^[-_.[:alnum:]]+@((([[:alnum:]]|[[:alnum:]][[:alnum:]-]*[[:alnum:]])\.)+(ad|ae|aero|af|ag|ai|al|am|an|ao|aq|ar|arpa|as|at|au|aw|az|ba|bb|bd|be|bf|bg|bh|bi|biz|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|com|coop|cr|cs|cu|cv|cx|cy|cz|de|dj|dk|dm|do|dz|ec|edu|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gh|gi|gl|gm|gn|gov|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|in|info|int|io|iq|ir|is|it|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|mg|mh|mil|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|museum|mv|mw|mx|my|mz|na|name|nc|ne|net|nf|ng|ni|nl|no|np|nr|nt|nu|nz|om|org|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|pro|ps|pt|pw|py|qa|re|ro|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|sk|sl|sm|sn|so|sr|st|su|sv|sy|sz|tc|td|tf|tg|th|tj|tk|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|um|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)$|(([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5])\.){3}([0-9][0-9]?|[0-1][0-9][0-9]|[2][0-4][0-9]|[2][5][0-5]))$/i",$email)); } ?> What am I missing? Thanks.
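For what it's worth, the most likely cause — assuming the markup above is the complete form — is that the purpose select has an id but no name attribute, and PHP populates $_POST from each control's name, not its id, so $_POST['purpose'] arrives empty. A minimal sketch of the corrected element:

<!-- $_POST is keyed by name, not id -->
<select name="purpose" id="purpose" style="width: 300px; height:35px;">
  <option value="I am interested in your services">I am interested in your services!</option>
  <option value="I am interested in a partnership">I am interested in a partnership!</option>
  <option value="I am interested in a job">I am interested in a job!</option>
</select>

With the name in place, $purpose = $_POST['purpose']; in contact.php should pick up the selected option's value.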

    Read the article

  • How To Start Your Own Professional Blog with WordPress

    - by Matthew Guay
Would you like to start your own blog or website?  With a free WordPress.com account, it's easy to get started creating your own professional-quality blog site. This is the first part in a series on how to create your own professional-quality blog site. No, we're not talking about some cheapo-looking blog from Blogger or something on Facebook, but a quality blog you can be proud of and present to millions of readers online. WordPress is one of the most popular blogging platforms, powering hundreds of high-profile websites and blogs around the world.  It's both powerful and easy to use, which makes it great whether you're just starting out or are a blogging pro.  To start your blogging project, WordPress is completely free, and you can either use the online interface or install the WordPress software on your own server and blog from there.

Getting Started
You can start a blog in just a few minutes.  Head over to WordPress.com and click Sign up now on the right-hand side of the main page. Enter a username and password, check that you agree with the legal terms, select the "Gimme a blog" bullet, and click Next. WordPress may inform you that your username is already taken; simply choose a new one and try again. Next, choose a domain for your blog.  This will be the address for your site and cannot be changed, so be sure to choose exactly what you want.  If you'd prefer your address to be yourname.com instead of yourname.wordpress.com, you can add your own domain for a fee after your blog is set up…but we'll cover that later. Once you click Signup, you will be sent a confirmation email.  While you wait for the email to arrive, you can go ahead and enter your name and a short bio about yourself. When you receive your confirmation email, click the link.  Congratulations; you now have your own blog! You can view your new blog immediately, though the default theme isn't very interesting without your content and pictures. Back on the page you opened from the email, click Login to access your blog's administration page and start adding content to your blog.  You can also access your blog's admin page anytime from yourname.wordpress.com/admin, substituting your own blog name for yourname. Enter your username and password, then click Log in to get started.

Adding Content to your WordPress.com Blog
When you sign in to your WordPress blog, you'll first see the WordPress Admin page.  Here you can see recent posts and comments, and you can see stats on how many people have visited your site.  You can also access all of your blog tools and settings right from this page. To add a new post to your blog, click the Posts link on the left, then click "Add New" either in the left menu or at the top of the Edit Posts page.  Or, if you want to edit the default first post, hover over it and select Edit. Or click the New Posts button at the top of the page.  This menu bar is always visible whenever you're logged in, so it's an easy way to add a post. The editor lets you easily write anything you want, Microsoft Word-style.  You can format your text and add lists, links, quotes, and more.  When you're ready to share your content with the world, click Publish on the right side. To add pictures or other files, click the picture icon beside "Upload/Insert".  Your free blog account can store up to 3 GB of pictures and documents, which will definitely give you a good start. Click Select Files, and then choose the pictures or documents you want to add to your post.
When the pictures have uploaded, you can add a caption and choose how to position the picture.  When you're finished, select "Insert into Post".   Or, if you want to add a video, click the video button.  You have to add a paid upgrade to upload videos directly, but you can add YouTube and other online videos for free. Click the "From URL" tab, then paste the link to the YouTube video and click Insert into post. If you're a code geek, click the HTML tab in the editor and edit the HTML of your blog post the geeky way. Once you've added all your content and edited it the way you want, click the Publish button on the right of the editor.  Or, you can click Preview to make sure it looks right, and then click Publish. Here's our blog with the new blog post containing a picture and video.  While you're getting to know your way around the controls in WordPress, the Preview feature will be your best friend as you organize the content to your liking.

Conclusion
It only takes a couple of minutes to get started blogging at WordPress.com. Whether you want to write about your daily life, share pictures of your children, or review the latest books and gadgets, WordPress.com is a great place to get started for free.  We've only covered a small portion of the WordPress features, but this should get you started. Check back for more WordPress and blogging coverage coming up soon!

Links
Signup for a free WordPress.com account

    Read the article

  • WiX 3 Tutorial: Custom EULA License and MSI localization

    - by Mladen Prajdic
In this part of the ongoing WiX tutorial series, we'll take a look at how to localize your MSI into different languages. Our example is still the mighty SuperForm: the program that takes care of all your label color needs. :)

Localizing the MSI
With WiX 3.0, localizing an MSI is a simple and straightforward process. First let's look at the WiX project Properties->Build. There you'll see the "Cultures to build" textbox. Put the specific cultures to build into the textbox, or leave it empty to build all of them. Cultures have to be in correct culture format, like en-US, en-GB or de-DE. Next we have to tell WiX which cultures we actually have in our project. Take a look at the first post in the series about Solution/Project structure and look at the Lang directory in the project structure picture. There we have de-de and en-us subfolders, each with its own localized resources. In the subfolders, pay attention to the WXL files Loc_de-de.wxl and Loc_en-us.wxl. Each one has a <String Id="LANG"> under the WixLocalization root node. By including the string with id LANG we tell WiX we want that culture built. For English we have <String Id="LANG">1033</String>, for German <String Id="LANG">1031</String> in Loc_de-de.wxl, and for French we'd have to create another file, Loc_fr-FR.wxl, and put in <String Id="LANG">1036</String>. WXL files are localization files; any string we want to localize goes in there. To reference one we use the loc keyword, like this: !(loc.IdOfTheVariable) => !(loc.MustCloseSuperForm)

This is our Loc_en-us.wxl. Note that the German wxl has an identical structure, but the values are in German.

<?xml version="1.0" encoding="utf-8"?>
<WixLocalization Culture="en-us" xmlns="http://schemas.microsoft.com/wix/2006/localization" Codepage="1252">
  <String Id="LANG">1033</String>
  <String Id="ProductName">SuperForm</String>
  <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String>
  <String Id="ManufacturerName">My Company Name</String>
  <String Id="AppNotSupported">This application is not supported on your current OS. Minimal OS supported is Windows XP SP2</String>
  <String Id="DotNetFrameworkNeeded">.NET Framework 3.5 is required. Please install the .NET Framework then run this installer again.</String>
  <String Id="MustCloseSuperForm">Must close SuperForm!</String>
  <String Id="SuperFormNewerVersionInstalled">A newer version of !(loc.ProductName) is already installed.</String>
  <String Id="ProductKeyCheckDialog_Title">!(loc.ProductName) setup</String>
  <String Id="ProductKeyCheckDialogControls_Title">!(loc.ProductName) Product check</String>
  <String Id="ProductKeyCheckDialogControls_Description">Please enter the following information to perform the license check.</String>
  <String Id="ProductKeyCheckDialogControls_FullName">Full Name:</String>
  <String Id="ProductKeyCheckDialogControls_Organization">Organization:</String>
  <String Id="ProductKeyCheckDialogControls_ProductKey">Product Key:</String>
  <String Id="ProductKeyCheckDialogControls_InvalidProductKey">The product key you entered is invalid. Please call user support.</String>
</WixLocalization>

As you can see from the file, we can use localization variables inside other variables, as we do for the SuperFormNewerVersionInstalled string. The ProductKeyCheckDialog* strings localize a custom product-key-check dialog, which we'll look at in the next post.

Built-in dialog text localization
Under the de-de folder there's also the WixUI_de-de.wxl file. This file contains German translations of all the text in WiX's built-in dialogs.
It can be downloaded from the WiX 3.0.5419.0 SourceForge site. Download wix3-sources.zip and go to \src\ext\UIExtension\wixlib. There you'll find all the WiX texts already translated into 12 languages.

Localizing the custom EULA license
Here it gets ugly. We can easily override the default EULA license by overriding the WixUILicenseRtf WiX variable like this: <WixVariable Id="WixUILicenseRtf" Value="License.rtf" /> where License.rtf is the name of your custom EULA license file. The downside of this method is that you can only have one license file, which means no localization for it. That's why we need a workaround. The license is checked on a dialog named LicenseAgreementDialog. What we have to do is overwrite that dialog and insert the localization functionality. This is the code for LicenseAgreementDialogOverwritten.wxs, an overwritten LicenseAgreementDialog that supports localization. LicenseAcceptedOverwritten replaces the built-in LicenseAccepted variable.

<?xml version="1.0" encoding="UTF-8" ?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Fragment>
    <UI>
      <Dialog Id="LicenseAgreementDialogOverwritten" Width="370" Height="270" Title="!(loc.LicenseAgreementDlg_Title)">
        <Control Id="LicenseAcceptedOverwrittenCheckBox" Type="CheckBox" X="20" Y="207" Width="330" Height="18" CheckBoxValue="1" Property="LicenseAcceptedOverwritten" Text="!(loc.LicenseAgreementDlgLicenseAcceptedCheckBox)" />
        <Control Id="Back" Type="PushButton" X="180" Y="243" Width="56" Height="17" Text="!(loc.WixUIBack)" />
        <Control Id="Next" Type="PushButton" X="236" Y="243" Width="56" Height="17" Default="yes" Text="!(loc.WixUINext)">
          <Publish Event="SpawnWaitDialog" Value="WaitForCostingDlg">CostingComplete = 1</Publish>
          <Condition Action="disable"> <![CDATA[ LicenseAcceptedOverwritten <> "1" ]]> </Condition>
          <Condition Action="enable">LicenseAcceptedOverwritten = "1"</Condition>
        </Control>
        <Control Id="Cancel" Type="PushButton" X="304" Y="243" Width="56" Height="17" Cancel="yes" Text="!(loc.WixUICancel)">
          <Publish Event="SpawnDialog" Value="CancelDlg">1</Publish>
        </Control>
        <Control Id="BannerBitmap" Type="Bitmap" X="0" Y="0" Width="370" Height="44" TabSkip="no" Text="!(loc.LicenseAgreementDlgBannerBitmap)" />
        <Control Id="LicenseText" Type="ScrollableText" X="20" Y="60" Width="330" Height="140" Sunken="yes" TabSkip="no">
          <!-- This is the original line -->
          <!--<Text SourceFile="!(wix.WixUILicenseRtf=$(var.LicenseRtf))" />-->
          <!-- To enable EULA localization we change it to this -->
          <Text SourceFile="$(var.ProjectDir)\!(loc.LicenseRtf)" />
          <!-- In each of the localization files (wxl) put a line like this:
               <String Id="LicenseRtf" Overridable="yes">\Lang\en-us\EULA_en-us.rtf</String> -->
        </Control>
        <Control Id="Print" Type="PushButton" X="112" Y="243" Width="56" Height="17" Text="!(loc.WixUIPrint)">
          <Publish Event="DoAction" Value="WixUIPrintEula">1</Publish>
        </Control>
        <Control Id="BannerLine" Type="Line" X="0" Y="44" Width="370" Height="0" />
        <Control Id="BottomLine" Type="Line" X="0" Y="234" Width="370" Height="0" />
        <Control Id="Description" Type="Text" X="25" Y="23" Width="340" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgDescription)" />
        <Control Id="Title" Type="Text" X="15" Y="6" Width="200" Height="15" Transparent="yes" NoPrefix="yes" Text="!(loc.LicenseAgreementDlgTitle)" />
      </Dialog>
    </UI>
  </Fragment>
</Wix>

Look at the Control with Id "LicenseText" and read the comments. We've changed the original license text source to "$(var.ProjectDir)\!(loc.LicenseRtf)".
    var.ProjectDir is the directory of the project file. The !(loc.LicenseRtf) is where the magic happens. Scroll up and take a look at the wxl localization file example. We have LicenseRtf declared there, and it’s been made overridable so developers can change it if they want. The value of LicenseRtf is the path to our localized EULA relative to the WiX project directory. With a little hacking we’ve achieved a fully localizable installer package.   The final step is to insert the extended LicenseAgreementDialogOverwritten license dialog into the installer GUI chain. This is how it’s done, under the <UI> node of course.   <UI> <!-- code to be discussed in later posts --> <!-- BEGIN UI LOGIC FOR CLEAN INSTALLER --> <Publish Dialog="WelcomeDlg" Control="Next" Event="NewDialog" Value="LicenseAgreementDialogOverwritten">1</Publish> <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Back" Event="NewDialog" Value="WelcomeDlg">1</Publish> <Publish Dialog="LicenseAgreementDialogOverwritten" Control="Next" Event="NewDialog" Value="ProductKeyCheckDialog">LicenseAcceptedOverwritten = "1" AND NOT OLDER_VERSION_FOUND</Publish> <Publish Dialog="InstallDirDlg" Control="Back" Event="NewDialog" Value="ProductKeyCheckDialog">1</Publish> <!-- END UI LOGIC FOR CLEAN INSTALLER --> <!-- code to be discussed in later posts --></UI> For a thing that should be simple for the end developer to do, localization can be a bit advanced for the novice WiXer. Hope this post makes the journey easier and that future versions of WiX improve this process. WiX 3 tutorial by Mladen Prajdic navigation WiX 3 Tutorial: Solution/Project structure and Dev resources WiX 3 Tutorial: Understanding main wxs and wxi file WiX 3 Tutorial: Generating file/directory fragments with Heat.exe  WiX 3 Tutorial: Custom EULA License and MSI localization WiX 3 Tutorial: Product Key Check custom action WiX 3 Tutorial: Building an updater WiX 3 Tutorial: Icons and installer pictures WiX 3 Tutorial: Creating a Bootstrapper

    Read the article

  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I’ve read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding it. What I’ve come to realize is that these implementations overcomplicate the issues and deliver only a muddy implementation! I’ve seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I’ve realized a few different techniques that make building multi-tenant applications actually quite easy. I will be posting a few different entries on the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.   So what’s the problem? Here’s the deal. Multi-tenancy is basically a technique of code-reuse of web application code. A multi-tenant application is an application that runs a single instance for multiple clients. Here the “client” is different URL bindings on IIS using ASP.NET MVC. The problem with different instances of what is essentially the same application is that you have to spin up different instances of ASP.NET. As the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense. You’ll reduce the amount of work it takes to synchronize the site implementations and you’ll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.   You’d think this would be simple. I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It’s not straightforward because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton’s implementation (all the entries are listed on this page). There’s some really nasty code in there… something I’d really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It’s a fine implementation that should be given a look. At least look at the code. I will reference MvcEx going forward as a comparison to my implementation. I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).   Core Goals of my Multi-Tenant Implementation The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation. However, you could probably use your favorite IoC tool instead. <RANT> However, please don’t be stupid and abstract your IoC tool. Each IoC is powerful and by abstracting the capabilities, you’re doing yourself a real disservice. Who in the world swaps out IoC tools…? 
No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. This is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants. This will also allow different ways of “plugging in” tenants into your application. In my implementation, there will be a single dependency container for a single tenant. This will enable isolation of the tenant’s dependencies (a bare-bones sketch of this idea follows at the end of this post). The third goal is to use composition as a means to delegate “core” functions out to the tenant. More on this later.   Features In MvcEx, “Modules” are a code element of the infrastructure. I have simplified this concept and named it “Features”. A feature is a simple element of an application. Controllers can be specified to have a feature and actions can have “sub features”. Each tenant can select the features it needs and the other features will be hidden from the tenant’s users. My implementation doesn’t require something to be a feature. A controller can be common to all tenants. For example, (as you will see) I have a “Content” controller that will return the CSS, Javascript and Images for a tenant. This is common logic for all tenants and shouldn’t be hidden or considered a “feature”; Content is a core component.   Up next My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will be in-depth about the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions, DM me on twitter, or send me an email.
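To give a taste of what’s coming, here is a bare-bones sketch of the second goal: a tenant as a first-class citizen that owns its own StructureMap container. The type names are mine and purely illustrative, not the final framework’s, but they show the shape of the idea:

// Illustrative sketch only: one StructureMap container per tenant.
using System;
using StructureMap;

public interface ITenant
{
    string Name { get; }
    IContainer Container { get; }
}

public class Tenant : ITenant
{
    public Tenant(string name, Action<ConfigurationExpression> configure)
    {
        Name = name;
        // Each tenant gets its own container, isolating its dependencies.
        Container = new Container(configure);
    }

    public string Name { get; private set; }
    public IContainer Container { get; private set; }
}

Resolving a tenant’s services then goes through that tenant’s container rather than a global one, which is what makes the isolation possible.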

    Read the article

  • Windows 7 Phone Database – Querying with Views and Filters

    - by SeanMcAlinden
    I’ve just added a feature to Rapid Repository to greatly improve how the Windows 7 Phone Database is queried, for performance (this is in the trunk, not in Release V1.0). The main concept behind it is to create a View Model class which holds only the minimum data you need for a page. This View Model is then stored and retrieved rather than the whole list of entities. Another feature of the views is that they can be pre-filtered to improve performance even further when querying. You can download the source from the Microsoft Codeplex site http://rapidrepository.codeplex.com/. Setting up a view Let’s say you have an entity that stores lots of data about a game result, for example: GameScore entity public class GameScore : IRapidEntity {     public Guid Id { get; set; }     public string GamerId {get;set;}     public string Name { get; set; }     public Double Score { get; set; }     public Byte[] ThumbnailAvatar { get; set; }     public DateTime DateAdded { get; set; } }   On your page you want to display a list of scores, but you only want to display the score and the date added, so you create a View Model for displaying just those properties. GameScoreView public class GameScoreView : IRapidView {     public Guid Id { get; set; }     public Double Score { get; set; }     public DateTime DateAdded { get; set; } }   Now that you have the view model, the first thing to do is set up the view at application start up. This is done using the following syntax. View Setup public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score }); } As you can see, using a little bit of lambda syntax, you put in the code for constructing a single view; this is used internally for mapping an entity to a view. *Note* you do not need to map the Id property; this is done automatically, and a view model id will always be the same as that of its corresponding entity.   Adding Filters One of the cool features of the views is that you can add filters to limit the amount of data stored in the view, which will dramatically improve performance. You can add multiple filters using the fluent syntax if required. In this example, let’s say that you will only ever show the scores for the last 10 days; you could add a filter like the following: Add single filter public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10)); } If you wanted to limit the data further, you could also say only scores above 100: Add multiple filters public MainPage() {     RapidRepository<GameScore>.AddView<GameScoreView>(x => new GameScoreView { DateAdded = x.DateAdded, Score = x.Score })         .AddFilter(x => x.DateAdded > DateTime.Now.AddDays(-10))         .AddFilter(x => x.Score > 100); }   Querying the view model So the important part is how to query the data. This is done using the repository; there is a method called Query which accepts the type of view as a generic parameter (you can have multiple View Model types per entity type). You can either use the result of the Query method directly or perform further querying on the result if required. 
Querying the View public void DisplayScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> scores = repository.Query<GameScoreView>();       // display logic } Further Filtering public void TodaysScores() {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     List<GameScoreView> todaysScores = repository.Query<GameScoreView>().Where(x => x.DateAdded > DateTime.Now.AddDays(-1)).ToList();       // display logic }   Retrieving the actual entity Retrieving the actual entity can be done easily by using the GetById method on the repository. Say for example you allow the user to click on a specific score to get further information, you can use the Id populated in the returned View Model GameScoreView and use it directly on the repository to retrieve the full entity. Get Full Entity public void GetFullEntity(Guid gameScoreViewId) {     RapidRepository<GameScore> repository = new RapidRepository<GameScore>();     GameScore fullEntity = repository.GetById(gameScoreViewId);       // display logic } Synchronising The View If you are upgrading from Rapid Repository V1.0 and are likely to have data in the repository already, you will need to perform a synchronisation to ensure the views and entities are fully in sync. You can either do this as a one off during the application upgrade or if you are a little more cautious, you could run this at each application start up. Synchronise the view public void MyUpgradeTasks() {     RapidRepository<GameScore>.SynchroniseView<GameScoreView>(); } It’s worth noting that in normal operation, the view keeps itself in sync with the entities so this is only really required if you are upgrading from V1.0 to V2.0 when it gets released shortly.   Summary I really hope you like this feature, it will be great for performance and I believe supports good practice by promoting the use of View Models for specific pages. I’m hoping to produce a beta for this over the next few days, I just want to add some more tests and hopefully iron out any bugs. I would really appreciate any thoughts on this feature and would really love to know of any bugs you find. You can download the source from the following : http://rapidrepository.codeplex.com/ Kind Regards, Sean McAlinden.

    Read the article

  • Week in Geek: LastPass Rescues Xmarks Edition

    - by Asian Angel
    This week we learned how to breathe new life into an aging Windows Mobile 6.x device, use filters in Photoshop, backup and move VirtualBox machines, use the BitDefender Rescue CD to clean an infected PC, and had fun setting up a pirates theme on our computers. Photo by _nash. Weekly Feature Do you love using the Faenza icon set on your Ubuntu system but feel that there are a few much needed icons missing (or you desire a different version of a particular icon)? Then you may want to take a look at the Faenza Variants icon pack. The icons are available in the following sizes: 16px, 22px, 32px, 48px and scalable sizes. Photo by Asian Angel. Faenza Variants Random Geek Links Another week with extra link goodness to help keep you on top of the news. Photo by Asian Angel. LastPass acquires Xmarks, premium service announced Xmarks announced that it has been acquired by LastPass, a cross-platform password management service. This also means that Xmarks is now in transition from a “free” to a “freemium” business model. WikiLeaks reappears on European Net domains WikiLeaks has re-emerged on a Swiss Internet domain followed by domains in Germany, Finland, and the Netherlands, sidestepping a move that had in effect taken the controversial site off the Internet. Iran: Yes, Stuxnet hurt our nuclear program The Stuxnet worm got some big play from Iranian President Mahmoud Ahmadinejad, who acknowledged that the malware dinged his nuclear program. More Windows Rogues than Just AV – Fake Defragmenter Check Disk Don’t think for a second that rogues are limited to scareware, because as so-called products such as “System Defragmenter”, “Scan Disk” “Check Disk” prove, they’re not. Internet Explorer’s Protected Mode can be bypassed Researchers from Verizon Business have now described a way of bypassing Protected Mode in IE 7 and 8 in order to gain access to user accounts. Can you really see who viewed your Facebook profile? Rogue application spreads virally Once again, a rogue application is spreading virally between Facebook users pretending to offer you a way of seeing who has viewed your profile. More holes in Palm’s WebOS Researchers Orlando Barrera and Daniel Herrera, who both work for security firm SecTheory, have discovered a gaping security hole in Palm’s WebOS smartphone operating system. Next-gen banking Trojans hit APAC With the proliferation of banking Trojans, Web and smartphone users of online banking services have to be on constant alert to avoid falling prey to fraud schemes, warned Etay Maor, project manager for RSA Fraud Action. AVG update cripples 64-bit computers A signature update automatically deployed by the AVG virus scanner Thursday has crippled numerous computers. Article includes link to forums to fix computers affected after a restart. Congress moves to outlaw ‘mystery charges’ for Web shoppers Legislation that makes it illegal for Web merchants and so-called post-transaction marketers to charge credit cards without the card owners’ say-so came closer to becoming law this week. Ballmer Set to “Look Into” Windows Home Server Drive Extender Fiasco Tuesday’s announcement from Microsoft regarding the removal of Drive Extender from Windows Home Server has sent shock waves across the web. Google tweaks search recipe to ding scam artists Google has changed its search algorithm to penalize sites deemed to provide an “extremely poor user experience” following a New York Times story on a merchant who justified abusive behavior towards customers as a search-engine optimization tactic. 
    Geek Video of the Week Watch as our two friends debate back and forth about the early adoption of new technology through multiple time periods (Stone Age to the far future). Will our reluctant friend finally succumb to the temptation? Photo by CollegeHumor. Early Adopters Through History Random TinyHacker Links Fix Issues in Windows 7 Using Reliability Monitor Learn how to analyze Windows 7 errors and then fix them using the built-in reliability monitor. Learn About IE Tab Groups Tab groups is a useful feature in IE 8. Here’s a detailed guide to what it is all about. Google’s Book Helps You Learn About Browsers and Web A cool new online book by the Google Chrome team on browsers and the web. TrustPort Internet Security 2011 – Good Security from a Less Known Provider TrustPort is not exactly a well-known provider of security solutions. At least not in the consumer space. This review tests in detail their latest offering. How the World is Using Cell phones An infographic showing the shocking demographics of cell phone use. Super User Questions See the great answers to these questions from Super User. I am unable to access my C drive. It says it is unable to display current owner. List of Windows special directories/shortcuts like ‘%TEMP%’ Is using multiple passes for wiping a disk really necessary? How can I view two files side by side in Notepad++ Is there any tool that automatically puts screenshots to my Dropbox? How-To Geek Weekly Article Recap Look through our hottest articles from this past week at How-To Geek. How to Create a Software RAID Array in Windows 7 9 Alternatives for Windows Home Server’s Drive Extender Why Doesn’t Disk Cleanup Delete Everything from the Temp Folder? Ask the Readers: How Much Do You Customize Your Operating System? How to Upload Really Large Files to SkyDrive, Dropbox, or Email One Year Ago on How-To Geek Enjoy reading through these awesome articles from one year ago. How To Upgrade from Vista to Windows 7 Home Premium Edition How To Fix No Aero Transparency in Windows 7 Troubleshoot Startup Problems with Startup Repair Tool in Windows 7 & Vista Rename the Guest Account in Windows 7 for Enhanced Security Disable Error Reporting in XP, Vista, and Windows 7 The Geek Note That wraps things up here for this week. Regardless of the weather wherever you may be, we hope that you have an opportunity to get outside and have some fun! Remember to keep sending those great tips in to us at [email protected]. Photo by Tony the Misfit.

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post that talks about the advantages and disadvantages of Shrinking and why one should not be shrinking a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server Expert Imran Mohammed left an excellent comment. I just feel that his comment is worth a big article itself. For everybody to read his wonderful explanation, I am posting this blog post here. Thanks Imran! Shrinking a database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking a Database is bad and evil, here I am saying it first and loud. Now, Imran’s comment is written while keeping in mind only the process of how the Shrink Database operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. Comments from Imran - Before I explain to you the concept of Shrink Database, let us understand the concept of Database Files. When we create a new database inside SQL Server, it is typical that SQL Server creates two physical files in the Operating System: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transactional Log file. If you add one or more data files to a database, the physical file that will be created in the Operating System will have the extension .NDF, which is called a Secondary Data File; whereas, when you add one or more log files to a database, the physical file that will be created in the Operating System will have the same extension as .LDF. The questions now are, “Why does a new data file have a different extension (.NDF)?”, “Why is it called a secondary data file?” and, “Why is the .MDF file called a primary data file?” Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts please do comment. A data file with the .MDF extension is called a Primary Data File, and the reason behind it is that it contains Database Catalogs. Catalogs mean Meta Data. Meta Data is “Data about Data”. Examples of Meta Data include system objects that store information about other objects, except the data stored by the users. sysobjects stores information about all objects in that database. sysindexes stores information about all indexes and rows of every table in that database. syscolumns stores information about all columns that each table has in that database. sysusers stores how many users that database has. Although Meta Data stores information about other objects, it is not the transactional data that a user enters; rather, it’s system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file. It is given the name Primary Data File because it contains the Database Catalogs. This file is present in the Primary File Group. You can always create additional objects (Tables, indexes etc.) in the Primary data file (this file is present in the Primary File group), by mentioning that you want to create this object under the Primary File Group. 
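To make the file and file group story concrete, here is an illustrative T-SQL sketch (the database name, path and sizes are made up) of adding a secondary data file in its own file group and making that file group the default:

-- Sketch only: add a secondary data file (.NDF) on a new disk to MYDB
ALTER DATABASE MYDB ADD FILEGROUP SecondaryFG;

ALTER DATABASE MYDB
ADD FILE
(
    NAME = MYDB_Data2,                   -- logical file name
    FILENAME = 'E:\Data\MYDB_Data2.ndf', -- physical file on the new disk
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP SecondaryFG;

-- New objects now land in the new file group unless a file group is specified
ALTER DATABASE MYDB MODIFY FILEGROUP SecondaryFG DEFAULT;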
    Any additional data file that you add to the database will have only transactional data but no Meta Data, so that’s why it is called a Secondary Data File. It is given the extension name .NDF so that the user can easily identify whether a specific data file is a Primary Data File or a Secondary Data File. There are many advantages to storing data in different files that are under different file groups. You can put your read-only tables in one file (file group) and read-write tables in another file (file group), and take a backup of only the file group that has the read-write data, so that you can avoid backing up read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance. A real-time scenario where we use files could be this one: Let’s say you have created a database called MYDB on the D-Drive which has 50 GB of space. You also have 1 Database File (.MDF) and 1 Log File on the D-Drive, and suppose that all of that 50 GB of space has been used up and you do not have any free space left, but you still want to add additional space to the database. One easy option would be to add one more physical hard disk to the server, add a new data file to the MYDB database and create this new data file on the new hard disk, then move some of the objects from one file to another, and set the file group under which you added the new file as the default file group, so that any new object that is created goes into the new files, unless specified otherwise. Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let’s move on to the next topic, Shrinking. First of all, I disagree with the Microsoft terminology for naming this feature “Shrinking”. Shrinking, in regular terms, means to reduce the size of a file by means of compressing it. BUT in SQL Server, Shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from database files and releasing the empty space either to the Operating System or to SQL Server. Let’s examine this through an example. Let’s say you have a database “MYDB” with a size of 50 GB that has free space of about 20 GB, which means 30 GB in the database is filled with data and 20 GB of space is free in the database because it is not currently utilized by SQL Server (Database); it is reserved and not yet in use. If you choose to shrink the database and to release the empty space to the Operating System, MIND YOU, you can only shrink the database size to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data. So, if you have a database that is full and has no empty space in the data file and log file (and you don’t have extra disk space to set the Auto growth option ON), YOU CANNOT issue the SHRINK Database/File command, because of two reasons: There is no empty space to be released because the Shrink command does not compress the database; it only removes the empty space from the database files and there is no empty space. Remember, the Shrink command is a logged operation. When we perform the Shrink operation, this information is logged in the log file. If there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink a database. Now answering your questions: (1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear on the Results Pane after using DBCC SHRINKDATABASE (NorthWind, 10)? 
    A: According to Books Online (for SQL Server 2000): UsedPages: the number of 8-KB pages currently used by the file. EstimatedPages: the number of 8-KB pages that SQL Server estimates the file could be shrunk down to. Important Note: Before asking any question, make sure you go through Books Online or search on Google once. Doing so has many advantages: 1. If someone else has already had this question before, the chances that it is already answered are more than 50%. 2. This reduces your waiting time for the answer. (2) Q: What is the difference between shrinking the database using a DBCC command like the one above & shrinking it from the Enterprise Manager Console by right-clicking the database, going to TASKS & then selecting the SHRINK option, in a SQL Server 2000 environment? A: As far as my knowledge goes, there is no difference; both will work the same way. One advantage of using this command from Query Analyzer is that your console won’t freeze; you can perform your regular activities using Enterprise Manager. (3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end-users, DBAs or the SERVER/SYSTEM itself? A: An .NDF file is a secondary data file. You never heard of it because when a database is created, SQL Server creates the database by default with only 1 data file (.MDF) and 1 log file (.LDF), or however your model database has been set up, because the model database is a template used every time you create a new database using the CREATE DATABASE command. Unless you have added an extra data file, you will not see it. This file is used by SQL Server to store the data which is saved by the users. Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • I have finally traded my Blackberry in for a Droid!

    - by Bob Porter
    Over the years I have used a number of different types of phones: Windows Mobile, Blackberry, Nokia, and now Android. Until the Blackberry, which was my last phone (and I still have one issued from my office), I had never found a phone that “just worked”, especially with email and messaging. The Blackberry did, and does, excel at those functions. My last personal phone was a Storm 1, which was Blackberry’s first touch screen phone. The Storm 2 was an improved version that fixed some screen press detection issues from the first model, and it added Wifi. Over the last few years I have watched others acquire and fall in love with their ‘Droids, including a number of iPhone users, which surprised me. Our office has until recently only supported Blackberry phones, adding iPhones within the last year or so. When I spoke with our internal telecom folks they confirmed they were evaluating Android phones, but felt they still were not secure enough out of the box for corporate use and SOX compliance. That being said, as a personal phone, the Droid Rocks! I am impressed with its speed, the number of apps available, and the overall design. It is not as “flashy” as an iPhone but it does everything that I care about and more. The model I bought is the Motorola Droid 2 Global from Verizon. It is currently running Android 2.2 for its OS; 2.3 is just around the corner. It has 8 gigs of internal flash memory and can handle up to a 32 gig SDCard. (I currently have 2 8 gig cards, one for backups, and have ordered a 16 gig card!) Being a geek at heart, I “rooted” the phone, which means I gained superuser access to the OS on the phone. This opens a number of doors for further modifications down the road. Also being a geek meant I have already set up a development environment and built and deployed the obligatory “Hello Droid” application (a minimal sketch of it appears after the app list below). I will be writing of my development experiences with this new platform here often; to start off I thought I would share my current application list to give you an idea what I am using. 
Zedge: http://market.android.com/details?id=net.zedge.android XDA: http://market.android.com/details?id=com.quoord.tapatalkxda.activity WRAL.com: http://market.android.com/details?id=com.mylocaltv.wral Wireless Tether: http://market.android.com/details?id=android.tether Winamp: http://market.android.com/details?id=com.nullsoft.winamp Win7 Clock: http://market.android.com/details?id=com.androidapps.widget.toggles.win7 Wifi Analyzer: http://market.android.com/details?id=com.farproc.wifi.analyzer WeatherBug: http://market.android.com/details?id=com.aws.android Weather Widget Forecast Addon: http://market.android.com/details?id=com.androidapps.weather.forecastaddon Weather & Toggle Widgets: http://market.android.com/details?id=com.androidapps.widget.weather2 Vlingo: http://market.android.com/details?id=com.vlingo.client VirtualTENHO-G: http://market.android.com/details?id=jp.bustercurry.virtualtenho_g Twitter: http://market.android.com/details?id=com.twitter.android TweetDeck: http://market.android.com/details?id=com.thedeck.android.app Tricorder: http://market.android.com/details?id=org.hermit.tricorder Titanium Backup PRO: http://market.android.com/details?id=com.keramidas.TitaniumBackupPro Titanium Backup: http://market.android.com/details?id=com.keramidas.TitaniumBackup Terminal Emulator: http://market.android.com/details?id=jackpal.androidterm Talking Tom Free: http://market.android.com/details?id=com.outfit7.talkingtom Stock Blue: http://market.android.com/details?id=org.adw.theme.stockblue ST: Red Alert Free: http://market.android.com/details?id=com.oldplanets.redalertwallpaper ST: Red Alert: http://market.android.com/details?id=com.oldplanets.redalertwallpaperplus Solitaire: http://market.android.com/details?id=com.kmagic.solitaire Skype: http://market.android.com/details?id=com.skype.raider Silent Time Lite: http://market.android.com/details?id=com.QuiteHypnotic.SilentTime ShopSavvy: http://market.android.com/details?id=com.biggu.shopsavvy Shopper: http://market.android.com/details?id=com.google.android.apps.shopper Shiny clock: http://market.android.com/details?id=com.androidapps.clock.shiny ShareMyApps: http://market.android.com/details?id=com.mattlary.shareMyApps Sense Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.senseglassadwtheme ROM Manager: http://market.android.com/details?id=com.koushikdutta.rommanager Roboform Bookmarklet Installer: http://market.android.com/details?id=roboformBookmarkletInstaller.android.com RealCalc: http://market.android.com/details?id=uk.co.nickfines.RealCalc Package Buddy: http://market.android.com/details?id=com.psyrus.packagebuddy Overstock: http://market.android.com/details?id=com.overstock OMGPOP Toggle: http://market.android.com/details?id=com.androidapps.widget.toggle.omgpop OI File Manager: http://market.android.com/details?id=org.openintents.filemanager nook: http://market.android.com/details?id=bn.ereader MyAtlas-Google Maps Navigation ext: http://market.android.com/details?id=com.adaptdroid.navbookfree3 MSN Droid: http://market.android.com/details?id=msn.droid.im Matrix Live Wallpaper: http://market.android.com/details?id=com.jarodyv.livewallpaper.matrix LogMeIn: http://market.android.com/details?id=com.logmein.ignitionpro.android Liveshare: http://market.android.com/details?id=com.cooliris.app.liveshare Kobo: http://market.android.com/details?id=com.kobobooks.android Instant Heart Rate: http://market.android.com/details?id=si.modula.android.instantheartrate IMDb: http://market.android.com/details?id=com.imdb.mobile Home 
Plus Weather: http://market.android.com/details?id=com.androidapps.widget.skin.weather.homeplus Handcent SMS: http://market.android.com/details?id=com.handcent.nextsms H7C Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skin.h7c GTasks: http://market.android.com/details?id=org.dayup.gtask GPS Status: http://market.android.com/details?id=com.eclipsim.gpsstatus2 Google Voice: http://market.android.com/details?id=com.google.android.apps.googlevoice Google Sky Map: http://market.android.com/details?id=com.google.android.stardroid Google Reader: http://market.android.com/details?id=com.google.android.apps.reader GoMarks: http://market.android.com/details?id=com.androappsdev.gomarks Goggles: http://market.android.com/details?id=com.google.android.apps.unveil Glossy Black Weather: http://market.android.com/details?id=com.androidapps.widget.weather.skin.glossyblack Fox News: http://market.android.com/details?id=com.foxnews.android Foursquare: http://market.android.com/details?id=com.joelapenna.foursquared FBReader: http://market.android.com/details?id=org.geometerplus.zlibrary.ui.android Fandango: http://market.android.com/details?id=com.fandango Facebook: http://market.android.com/details?id=com.facebook.katana Extensive Notes Pro: http://market.android.com/details?id=com.flufflydelusions.app.extensive_notes_donate Expense Manager: http://market.android.com/details?id=com.expensemanager Espresso UI (LightShow w/ Slide): http://market.android.com/details?id=com.jaguirre.slide.lightshow Engadget: http://market.android.com/details?id=com.aol.mobile.engadget Earth: http://market.android.com/details?id=com.google.earth Drudge: http://market.android.com/details?id=com.iavian.dreport Dropbox: http://market.android.com/details?id=com.dropbox.android DroidForums: http://market.android.com/details?id=com.quoord.tapatalkdrodiforums.activity DroidArmor ADW: http://market.android.com/details?id=mobi.addesigns.droidarmorADW Droid Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.skins.white Droid 2 Bootstrapper: http://market.android.com/details?id=com.koushikdutta.droid2.bootstrap doubleTwist: http://market.android.com/details?id=com.doubleTwist.androidPlayer Documents To Go: http://market.android.com/details?id=com.dataviz.docstogo Digital Clock Widget: http://market.android.com/details?id=com.maize.digitalClock Desk Home: http://market.android.com/details?id=com.cowbellsoftware.deskdock Default Clock: http://market.android.com/details?id=com.androidapps.widget.clock.skins.defaultclock Daily Expense Manager: http://market.android.com/details?id=com.techahead.ExpenseManager ConnectBot: http://market.android.com/details?id=org.connectbot Colorized Weather Icons: http://market.android.com/details?id=com.androidapps.widget.weather.colorized Chrome to Phone: http://market.android.com/details?id=com.google.android.apps.chrometophone CardStar: http://market.android.com/details?id=com.cardstar.android Books: http://market.android.com/details?id=com.google.android.apps.books Black Ipad Toggle: http://market.android.com/details?id=com.androidapps.toggle.widget.skin.blackipad Black Glass ADW Theme: http://market.android.com/details?id=com.dtanquary.blackglassadwtheme Bing: http://market.android.com/details?id=com.microsoft.mobileexperiences.bing BeyondPod Unlock Key: http://market.android.com/details?id=mobi.beyondpod.unlockkey BeyondPod: http://market.android.com/details?id=mobi.beyondpod BeejiveIM: http://market.android.com/details?id=com.beejive.im Beautiful 
Widgets Animations Addon: http://market.android.com/details?id=com.levelup.bw.forecast Beautiful Widgets: http://market.android.com/details?id=com.levelup.beautifulwidgets Beautiful Live Weather: http://market.android.com/details?id=com.levelup.beautifullive BBC News: http://market.android.com/details?id=net.jimblackler.newswidget Barnacle Wifi Tether: http://market.android.com/details?id=net.szym.barnacle Barcode Scanner: http://market.android.com/details?id=com.google.zxing.client.android ASTRO SMB Module: http://market.android.com/details?id=com.metago.astro.smb ASTRO Pro: http://market.android.com/details?id=com.metago.astro.pro ASTRO Bluetooth Module: http://market.android.com/details?id=com.metago.astro.network.bluetooth ASTRO: http://market.android.com/details?id=com.metago.astro AppBrain App Market: http://market.android.com/details?id=com.appspot.swisscodemonkeys.apps App Drawer Icon Pack: http://market.android.com/details?id=com.adwtheme.appdrawericonpack androidVNC: http://market.android.com/details?id=android.androidVNC AndroidGuys: http://market.android.com/details?id=com.handmark.mpp.AndroidGuys Android System Info: http://market.android.com/details?id=com.electricsheep.asi AndFTP: http://market.android.com/details?id=lysesoft.andftp ADWTheme Red: http://market.android.com/details?id=adw.theme.red ADWLauncher EX: http://market.android.com/details?id=org.adwfreak.launcher ADW.Theme.One: http://market.android.com/details?id=org.adw.theme.one ADW.Faded theme: http://market.android.com/details?id=com.xrcore.adwtheme.faded ADW Gingerbread: http://market.android.com/details?id=me.robertburns.android.adwtheme.gingerbread Advanced Task Killer Free: http://market.android.com/details?id=com.rechild.advancedtaskkiller Adobe Reader: http://market.android.com/details?id=com.adobe.reader Adobe Flash Player 10.1: http://market.android.com/details?id=com.adobe.flashplayer Adobe AIR: http://market.android.com/details?id=com.adobe.air 3G Auto OnOff: http://market.android.com/details?id=com.yuantuo --- Generated by ShareMyApps http://market.android.com/details?id=com.mattlary.shareMyApps Sent from my Droid
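For anyone curious, here is roughly what that obligatory “Hello Droid” application amounts to; a minimal sketch using the standard Android SDK, with the package name made up:

// Minimal "Hello Droid" Activity; builds its view in code, no XML layout.
package com.example.hellodroid;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloDroid extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        tv.setText("Hello Droid!");
        setContentView(tv); // show the greeting
    }
}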

    Read the article

  • git on HTTP with gitolite and nginx

    - by Arnaud
    I am trying to set up a server where my git repo would be accessible over HTTP(S). I am using gitolite and nginx (and gitlab for the web interface, but I doubt it makes any difference). I have searched the whole afternoon and I think I'm stuck. I think I have understood that nginx needs fcgiwrap to work with gitolite, so I tried several configurations, but none of them work. My repositories are at /home/git/repositories. Here are the three nginx configurations I have tried. 1: location ~ /git(/.*) { gzip off; root /usr/lib/git-core; fastcgi_pass unix:/var/run/fcgiwrap.socket; include /etc/nginx/fcgiwrap.conf; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/; fastcgi_param SCRIPT_NAME git-http-backend; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param GIT_PROJECT_ROOT /home/git/repositories; fastcgi_param PATH_INFO $1; #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; } Result: > git clone http://myservername/projectname.git test/ Cloning into test... fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server? and > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed 2: location ~ /git(/.*) { fastcgi_pass localhost:9001; include /etc/nginx/fcgiwrap.conf; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param GIT_HTTP_EXPORT_ALL ""; fastcgi_param GIT_PROJECT_ROOT /home/git/repositories; fastcgi_param PATH_INFO $1; } Result: > git clone http://myservername/projectname.git test/ Cloning into test... fatal: http://myservername/projectname.git/info/refs not found: did you run git update-server-info on the server? and > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed 3: location ~ ^.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx))$ { root /home/git/repositories/; } location ~ ^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack)$ { root /home/git/repositories; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend; fastcgi_param PATH_INFO $uri; fastcgi_param GIT_PROJECT_ROOT /home/git/repositories; include /etc/nginx/fcgiwrap.conf; } Result: > git clone http://myservername/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/projectname.git/info/refs fatal: HTTP request failed and > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 502 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed Also note that with any of those configurations, when I try to clone with a project name that actually doesn't exist, I get a 502 error. Has anyone already succeeded in doing this? What am I doing wrong? Thanks. 
    UPDATE: the nginx error log file said: 2012/04/05 17:34:50 [crit] 21335#0: *50 connect() to unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while connecting to upstream, client: 192.168.12.201, server: myservername, request: "GET /git/oct_editor.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername" So I changed the permissions for /var/run/fcgiwrap.socket, and now I have: > git clone http://myservername/git/projectname.git test/ Cloning into test... error: The requested URL returned error: 403 while accessing http://myservername/git/projectname.git/info/refs fatal: HTTP request failed Here is the error.log entry I have now: 2012/04/05 17:36:52 [error] 21335#0: *78 FastCGI sent in stderr: "Cannot chdir to script directory (/usr/lib/git-core/git/projectname.git/info)" while reading response header from upstream, client: 192.168.12.201, server: myservername, request: "GET /git/projectname.git/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "myservername" I keep on investigating; for reference, the distilled minimal configuration I am testing next is below.
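This is essentially attempt 1 with the standard fastcgi_params include and without the root/SCRIPT_NAME/DOCUMENT_ROOT overrides (still unverified on my setup):

location ~ /git(/.*) {
    client_max_body_size 0;  # allow large pushes
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
    fastcgi_param GIT_HTTP_EXPORT_ALL "";
    fastcgi_param GIT_PROJECT_ROOT /home/git/repositories;
    fastcgi_param PATH_INFO $1;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
}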

    Read the article

  • July 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m super excited to announce the July 2013 release of the Ajax Control Toolkit. You can download the new version of the Ajax Control Toolkit from CodePlex (http://ajaxControlToolkit.CodePlex.com) or install the Ajax Control Toolkit from NuGet: With this release, we have completely rewritten the way the Ajax Control Toolkit combines, minifies, gzips, and caches JavaScript files. The goal of this release was to improve the performance of the Ajax Control Toolkit and make it easier to create custom Ajax Control Toolkit controls. Improving Ajax Control Toolkit Performance Previous releases of the Ajax Control Toolkit optimized performance for a single page but not multiple pages. When you visited each page in an app, the Ajax Control Toolkit would combine all of the JavaScript files required by the controls in the page into a new JavaScript file. So, even if every page in your app used the exact same controls, visitors would need to download a new combined Ajax Control Toolkit JavaScript file for each page visited. Downloading new scripts for each page that you visit does not lead to good performance. In general, you want to make as few requests for JavaScript files as possible and take maximum advantage of caching. For most apps, you would get much better performance if you could specify all of the Ajax Control Toolkit controls that you need for your entire app and create a single JavaScript file which could be used across your entire app. What a great idea! Introducing Control Bundles With this release of the Ajax Control Toolkit, we introduce the concept of Control Bundles. You define a Control Bundle to indicate the set of Ajax Control Toolkit controls that you want to use in your app. You define Control Bundles in a file located in the root of your application named AjaxControlToolkit.config. For example, the following AjaxControlToolkit.config file defines two Control Bundles: <ajaxControlToolkit> <controlBundles> <controlBundle> <control name="CalendarExtender" /> <control name="ComboBox" /> </controlBundle> <controlBundle name="CalendarBundle"> <control name="CalendarExtender"></control> </controlBundle> </controlBundles> </ajaxControlToolkit> The first Control Bundle in the file above does not have a name. When a Control Bundle does not have a name then it becomes the default Control Bundle for your entire application. The default Control Bundle is used by the ToolkitScriptManager by default. For example, the default Control Bundle is used when you declare the ToolkitScriptManager like this:  <ajaxToolkit:ToolkitScriptManager runat=”server” /> The default Control Bundle defined in the file above includes all of the scripts required for the CalendarExtender and ComboBox controls. All of the scripts required for both of these controls are combined, minified, gzipped, and cached automatically. The AjaxControlToolkit.config file above also defines a second Control Bundle with the name CalendarBundle. Here’s how you would use the CalendarBundle with the ToolkitScriptManager: <ajaxToolkit:ToolkitScriptManager runat="server"> <ControlBundles> <ajaxToolkit:ControlBundle Name="CalendarBundle" /> </ControlBundles> </ajaxToolkit:ToolkitScriptManager> In this case, only the JavaScript files required by the CalendarExtender control, and not the ComboBox, would be downloaded because the CalendarBundle lists only the CalendarExtender control. You can use multiple named control bundles with the ToolkitScriptManager and you will get all of the scripts from both bundles. 
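For example, assuming you have defined a second named bundle in AjaxControlToolkit.config (the ComboBoxBundle name below is hypothetical), referencing both bundles looks like this:

<ajaxToolkit:ToolkitScriptManager runat="server">
    <ControlBundles>
        <ajaxToolkit:ControlBundle Name="CalendarBundle" />
        <ajaxToolkit:ControlBundle Name="ComboBoxBundle" />
    </ControlBundles>
</ajaxToolkit:ToolkitScriptManager>

The page then gets all of the scripts from both bundles.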
    Support for ControlBundles is a new feature of the ToolkitScriptManager that we introduced with this release. We extended the ToolkitScriptManager to support the Control Bundles that you can define in the AjaxControlToolkit.config file. Let me be explicit about the rules for Control Bundles: 1. If you do not create an AjaxControlToolkit.config file then the ToolkitScriptManager will download all of the JavaScript files required for all of the controls in the Ajax Control Toolkit. This is the easy but low performance option. 2. If you create an AjaxControlToolkit.config file and create a ControlBundle without a name then the ToolkitScriptManager uses that Control Bundle by default. For example, if you plan to use only the CalendarExtender and ComboBox controls in your application then you should create a default bundle that lists only these two controls. 3. If you create an AjaxControlToolkit.config file and create one or more named Control Bundles then you can use these named Control Bundles with the ToolkitScriptManager. For example, you might want to use different subsets of the Ajax Control Toolkit controls in different sections of your app. I should also mention that you can use the AjaxControlToolkit.config file with custom Ajax Control Toolkit controls – new controls that you write. For example, here is how you would register a set of custom controls from an assembly named MyAssembly: <ajaxControlToolkit> <controlBundles> <controlBundle name="CustomBundle"> <control name="MyAssembly.MyControl1" assembly="MyAssembly" /> <control name="MyAssembly.MyControl2" assembly="MyAssembly" /> </controlBundle> </controlBundles> </ajaxControlToolkit> What about ASP.NET Bundling and Minification? The idea of Control Bundles is similar to the idea of Script Bundles used in ASP.NET Bundling and Minification. You might be wondering why we didn’t simply use Script Bundles with the Ajax Control Toolkit. There were several reasons. First, ASP.NET Bundling does not work with scripts embedded in an assembly. Because all of the scripts used by the Ajax Control Toolkit are embedded in the AjaxControlToolkit.dll assembly, ASP.NET Bundling was not an option. Second, Web Forms developers typically think at the level of controls and not at the level of individual scripts. We believe that it makes more sense for a Web Forms developer to specify the controls that they need in an app (CalendarExtender, ToggleButton) instead of the individual scripts that they need in an app (the 15 or so scripts required by the CalendarExtender). Finally, ASP.NET Bundling does not work with older versions of ASP.NET. The Ajax Control Toolkit needs to support ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Therefore, using ASP.NET Bundling was not an option. There is nothing wrong with using Control Bundles and Script Bundles side-by-side. The ASP.NET 4.0 and 4.5 ToolkitScriptManager supports both approaches to bundling scripts. Using the AjaxControlToolkit.CombineScriptsHandler Browsers cache JavaScript files by URL. For example, if you request the exact same JavaScript file from two different URLs then the exact same JavaScript file must be downloaded twice. However, if you request the same JavaScript file from the same URL more than once then it only needs to be downloaded once. With this release of the Ajax Control Toolkit, we have introduced a new HTTP Handler named the AjaxControlToolkit.CombineScriptsHandler. 
    If you register this handler in your web.config file then the Ajax Control Toolkit can cache your JavaScript files for up to one year in the future automatically. You should register the handler in two places in your web.config file: in the <httpHandlers> section and the <system.webServer> section (don’t forget to register the handler for the AjaxFileUpload while you are there!). <httpHandlers> <add verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" /> <add verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" /> </httpHandlers> <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <handlers> <add name="AjaxFileUploadHandler" verb="*" path="AjaxFileUploadHandler.axd" type="AjaxControlToolkit.AjaxFileUploadHandler, AjaxControlToolkit" /> <add name="CombineScriptsHandler" verb="*" path="CombineScriptsHandler.axd" type="AjaxControlToolkit.CombineScriptsHandler, AjaxControlToolkit" /> </handlers> </system.webServer> The handler is only used in release mode and not in debug mode. You can enable release mode in your web.config file like this: <compilation debug="false"> You can also override the web.config setting with the ToolkitScriptManager like this: <act:ToolkitScriptManager ScriptMode="Release" runat="server"/> In release mode, scripts are combined, minified, gzipped, and cached with a far future cache header automatically. When the handler is not registered, scripts are requested from the page that contains the ToolkitScriptManager. When the handler is registered in the web.config file, scripts are requested from the handler. If you want the best performance, always register the handler. That way, the Ajax Control Toolkit can cache the bundled scripts across page requests with a far future cache header. If you don’t register the handler then a new JavaScript file must be downloaded whenever you travel to a new page. Dynamic Bundling and Minification Previous releases of the Ajax Control Toolkit used a Visual Studio build task to minify the JavaScript files used by the Ajax Control Toolkit controls. The disadvantage of this approach to minification is that it made it difficult to create custom Ajax Control Toolkit controls. Starting with this release of the Ajax Control Toolkit, we support dynamic minification. The JavaScript files in the Ajax Control Toolkit are minified at runtime instead of at build time. Scripts are minified only when in release mode. You can specify release mode with the web.config file or with the ToolkitScriptManager ScriptMode property. Because of this change, the Ajax Control Toolkit now depends on the Ajax Minifier. You must include a reference to AjaxMin.dll in your Visual Studio project or you cannot take advantage of runtime minification. If you install the Ajax Control Toolkit from NuGet then AjaxMin.dll is added to your project as a NuGet dependency automatically. If you download the Ajax Control Toolkit from CodePlex then AjaxMin.dll is included in the download. This change means that you no longer need to do anything special to create a custom Ajax Control Toolkit control. As an open source project, we hope more people will contribute to the Ajax Control Toolkit (Yes, I am looking at you.) We have been working hard on making it much easier to create new custom controls. More on this subject with the next release of the Ajax Control Toolkit. 
    A Single Visual Studio Solution We also made substantial changes to the Visual Studio solution and projects used by the Ajax Control Toolkit with this release. This change will matter to you only if you need to work directly with the Ajax Control Toolkit source code. In previous releases of the Ajax Control Toolkit, we maintained separate solution and project files for ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5. Starting with this release, we now support a single Visual Studio 2012 solution that takes advantage of multi-targeting to build ASP.NET 3.5, ASP.NET 4.0, and ASP.NET 4.5 versions of the toolkit. This change means that you need Visual Studio 2012 to open the Ajax Control Toolkit project downloaded from CodePlex. For details on how we set up multi-targeting, please see Budi Adiono’s blog post: http://www.budiadiono.com/2013/07/25/visual-studio-2012-multi-targeting-framework-project/ Summary You can take advantage of this release of the Ajax Control Toolkit to significantly improve the performance of your website. You need to do two things: 1) create an AjaxControlToolkit.config file which lists the controls used in your app, and 2) register the AjaxControlToolkit.CombineScriptsHandler in the web.config file. We made substantial changes to the Ajax Control Toolkit with this release. We think these changes will result in much better performance for multipage apps and make the process of building custom controls much easier. As always, we look forward to hearing your feedback.

    Read the article
