Search Results

Search found 5383 results on 216 pages for 'ffmpeg hosting'.


  • Wear and tear on server hard drive from filesystem polling by PHP script

    - by jackie
    So I'm working on a discussion platform where various clients visit http://host/thread.php, which renders the discussion thread to date plus a form to submit a new post. When a new post is submitted, I would like it to appear in near-real-time for all the other clients with browser windows open. Two constraints: the script may not use a DBMS and must stay in the filesystem, and I can't use any PECL/PEAR extensions like inotify or anything like that for IPC. The flow will look like this:
    1. Client A requests thread.php; the thread is so far empty, but the page nonetheless opens a Server-Sent Events connection to eventPusher.php.
    2. Client B does the same.
    3. Client A fills out a post in the form and submits (POSTs) it to subHandler.php.
    4. ??? subHandler stores the new submission in the main thread store file (which is read whenever a fresh client requests thread.php) and somehow signals the continually-running eventPusher event source that a new comment was posted, so that it can echo the event JSON to the client. How exactly it sends this signal I'm not yet sure of, but there are a few options I've thought of -- this is the crux of the question, so see below for more clarification.
    5. eventPusher.php happily pushes the new event, and the post shows up soon after submission on the screens of all clients who have the page open.
    Now for the #4 missing-link mystery step, I see a few problems. Either way, eventPusher is going to run a while loop of some sort -- it's going to be polling something, I think that much is clear. (If that's a bad assumption, please let me know.)
    Idea #1 (the simplest way): subHandler gets invoked on the form submission, writes the post to the main store and to newComments.xml, then exits without doing anything else. eventPusher checks newComments.xml every X seconds (by the way, what would be a reasonable interval here?) and, if it finds something, emits an event to the client. My fear with this is that the server's hard drive will constantly have to spin up. Maybe that isn't the case -- perhaps the file would just be cached in RAM and the Linux kernel would handle this transparently, so the filesystem access doesn't actually engage the device because the kernel knows that particular file hasn't changed since the last read.
    Idea #2: I have no idea how to go about this, but perhaps there is a variable scope that lives in general system RAM and can be read by any process -- as if we mega-exported a bash variable so that $new_post is normally false, gets toggled to true by subHandler, and is set back to false once the event is pushed to the client. I doubt such a scope exists in PHP directly, and I admit I struggle with the concept of variable scope no matter what I read about it.
    Idea #3: eventPusher queries ps in its while loop for another instance of itself. If there's no other eventPusher active, it's highly unlikely that new comments are being submitted. It's okay if this only works 90% or more of the time; it doesn't need to be completely foolproof.
    Idea #4: eventPusher queries dmesg to see whether the file has been written to recently.
    So to sum everything up: I need inter-PHP-script communication in near-real-time that works on a standard mod_php shared hosting setup, without elevated privileges, PHP add-on modules, or other system adjustments that can't be made from the PHP script itself at runtime -- and without spinning up the drive more than a few times. No SQL servers either. Apologies if my English isn't the best; I'm still trying to improve it.
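    For reference, here is a minimal sketch of idea #1 as a Server-Sent Events endpoint. The file name, the two-second interval and the exact payload format are placeholders, and it assumes subHandler.php (re)writes newComments.xml on every submission; because the marker file is tiny and read constantly, repeated stats and reads are normally answered from the kernel page cache rather than the physical disk.

        <?php
        // eventPusher.php -- rough sketch of idea #1: poll a marker file's mtime and push
        // a Server-Sent Event when it changes. File name and interval are placeholders.
        set_time_limit(0);
        header('Content-Type: text/event-stream');
        header('Cache-Control: no-cache');

        $marker   = __DIR__ . '/newComments.xml';   // assumed to be (re)written by subHandler.php
        $lastSeen = 0;

        while (!connection_aborted()) {
            clearstatcache(true, $marker);          // PHP caches stat() results, so clear them each pass
            $mtime = @filemtime($marker);

            if ($mtime !== false && $mtime > $lastSeen) {
                $lastSeen = $mtime;
                // Small, frequently-read file: this read is normally served from RAM.
                echo 'data: ' . json_encode(file_get_contents($marker)) . "\n\n";
                @ob_flush();
                flush();
            }

            sleep(2);   // 1-3 seconds is usually a reasonable polling interval for SSE
        }

    The point of the sketch is that the drive should only be touched when subHandler actually writes; the polling itself mostly hits RAM.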

  • Duplicate GET request from multiple IPs - can anyone explain this?

    - by dwq
    We've seen a pattern in our webserver access logs which we're having trouble explaining. A GET request appears in the access log for a legitimate, but private, URL as part of normal e-commerce website use (by private, we mean there is a unique key in a URL form variable generated specifically for that customer session). Then a few seconds later we get hit with an identical request, maybe 10-15 times within the space of a second. The duplicate requests are all from different IP addresses. The User-Agent for the duplicates is the same across all of them (but different from the original request). Reverse DNS lookups on the IPs for all the duplicate requests resolve to the same large hosting company. Can anyone think of a scenario that would explain this?
    EDIT 1: Here's an example that's probably anonymised beyond being any actual use, but it might give an idea of the sort of pattern we're seeing (it's from a search query, as they sometimes get duplicated too):
        xx.xx.xx.xx - - [21/Jun/2013:21:42:57 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "http://www.ourdomain.com/index.html" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
        xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30"
    UPDATE 2: Sometimes it's part of a checkout flow that gets duplicated, so I'd think Twitter is unlikely.

  • Where's my memory?! Nginx + PHP-FPM front end webserver slows to a crawl...

    - by incredimike
    I'm not sure if I have a problem with a memory leak (as my hosting company suggests), or if we both need to read http://linuxatemyram.com. Maybe you clever people can help us out? This is a front-end webserver VM running essentially only nginx and php-fpm on RHEL 5.5. The server is powering Magento, a PHP eCommerce thingy. It is running in a shared environment, but we're changing that soon.
    Anyway, after a reboot the server runs just fine, but within a day it will grind itself into nothingness. Pages take literally two minutes to load, the CPU spikes like crazy, etc. The console is even sluggish when I SSH in. It's like my whole server is being brought to its knees. I've also been monitoring the DB server via top and tcpdumping incoming traffic. The DB stays idle for a good portion of that "slow" load time; when I start seeing queries coming from the front-end server, the page loads soon afterward.
    Here are some stats after logging in during a slow-down, after restarting php-fpm:
        [mike@front01 ~]$ free -m
                     total   used   free   shared   buffers   cached
        Mem:          5963   5217    745        0       192      314
        -/+ buffers/cache:   4711   1252
        Swap:         4047      4   4042
        [mike@front01 ~]$ top
        top - 11:38:55 up 2 days, 1:01, 3 users, load average: 0.06, 0.17, 0.21
        Tasks: 131 total, 1 running, 130 sleeping, 0 stopped, 0 zombie
        Cpu0 : 0.0%us, 0.3%sy, 0.0%ni, 99.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu1 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 6106800k total, 5361288k used, 745512k free, 199960k buffers
        Swap: 4144728k total, 4976k used, 4139752k free, 328480k cached
          PID USER    PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
        31806 apache  15   0  601m 120m  37m S  0.0  2.0 0:22.23 php-fpm
        31805 apache  15   0  549m  66m  31m S  0.0  1.1 0:14.54 php-fpm
        31809 apache  16   0  547m  65m  32m S  0.0  1.1 0:12.84 php-fpm
        32285 apache  15   0  546m  63m  33m S  0.0  1.1 0:09.22 php-fpm
        32373 apache  15   0  546m  62m  32m S  0.0  1.1 0:09.66 php-fpm
        31808 apache  16   0  543m  60m  35m S  0.0  1.0 0:18.93 php-fpm
        31807 apache  16   0  533m  49m  30m S  0.0  0.8 0:08.93 php-fpm
        32092 apache  15   0  535m  48m  27m S  0.0  0.8 0:06.67 php-fpm
         4392 root    18   0  194m  10m 7184 S  0.0  0.2 0:06.96 cvd
         4064 root    15   0  154m 8304 4220 S  0.0  0.1 3:55.57 snmpd
         4394 root    15   0  119m 5660 2944 S  0.0  0.1 0:02.84 EvMgrC
        31804 root    15   0  519m 5180  932 S  0.0  0.1 0:00.46 php-fpm
         4138 ntp     15   0 23396 5032 3904 S  0.0  0.1 0:02.38 ntpd
          643 nginx   15   0 95276 4408 1524 S  0.0  0.1 0:01.15 nginx
         5131 root    16   0 90128 3340 2600 S  0.0  0.1 0:01.41 sshd
        28467 root    15   0 90128 3340 2600 S  0.0  0.1 0:00.35 sshd
        32602 root    16   0 90128 3332 2600 S  0.0  0.1 0:00.36 sshd
         1614 root    16   0 90128 3308 2588 S  0.0  0.1 0:00.02 sshd
         2817 root     5 -10  7216 3140 1724 S  0.0  0.1 0:03.80 iscsid
         4161 root    15   0 66948 2340  800 S  0.0  0.0 0:10.35 sendmail
         1617 nicole  17   0 53876 2000 1516 S  0.0  0.0 0:00.02 sftp-server
        ...
    Is there anything else I should be looking at, or any more information that might be useful? I'm just a developer, but the slowdowns on this system worry me and make it hard to do my work. Help me out, ServerFault!
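    As a point of comparison rather than an answer: with workers this size (roughly 50-120 MB resident each in the output above), php-fpm pools are usually capped so the worst case still fits in RAM. A hedged example of the relevant pool directives -- the numbers are purely illustrative, not a recommendation for this particular box:

        ; php-fpm pool configuration (file location varies by distro/build) -- illustrative values only
        pm = dynamic
        pm.max_children = 20      ; ~20 children x ~100 MB RSS is about 2 GB worst case
        pm.start_servers = 5
        pm.min_spare_servers = 3
        pm.max_spare_servers = 8
        pm.max_requests = 500     ; recycle workers periodically to contain slow leaks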

  • Trying to connect phpMyAdmin to remote MySQL server (2002: can't connect)

    - by Malcolm Jones
    Trying to get phpMyAdmin to talk to a remote MySQL server. The config is below, and there is already a user set up in the MySQL DB that is allowed to log in from the host that PMA sits on. Hosting is provided by Rackspace (RightScale), and both cloud servers are behind the same firewall.
    [config.inc.php]
        <?php
        $cfg['blowfish_secret'] = '';
        $i = 0;
        $i++;
        $cfg['Servers'][$i]['host'] = 'XX.XX.XX.XX';    // MySQL hostname or IP address
        $cfg['Servers'][$i]['port'] = '';               // MySQL port - leave blank for default port
        $cfg['Servers'][$i]['socket'] = '';             // Path to the socket - leave blank for default socket
        $cfg['Servers'][$i]['connect_type'] = 'tcp';    // How to connect to MySQL server ('tcp' or 'socket')
        $cfg['Servers'][$i]['extension'] = 'mysql';     // The php MySQL extension to use ('mysql' or 'mysqli')
        $cfg['Servers'][$i]['compress'] = FALSE;        // Use compressed protocol for the MySQL connection
                                                        // (requires PHP >= 4.3.0)
        $cfg['Servers'][$i]['controluser'] = '';        // MySQL control user settings
                                                        // (this user must have read-only
        $cfg['Servers'][$i]['controlpass'] = '';        // access to the "mysql/user"
                                                        // and "mysql/db" tables).
                                                        // The controluser is also used for all
                                                        // relational features (pmadb)
        $cfg['Servers'][$i]['auth_type'] = 'config';    // Authentication method (config, http or cookie based)?
        $cfg['Servers'][$i]['user'] = 'USERNAME';       // MySQL user
        $cfg['Servers'][$i]['password'] = 'PASSWORD';   // MySQL password (only needed with 'config' auth_type)
        $cfg['Servers'][$i]['only_db'] = '';            // If set to a db-name, only this db is displayed
                                                        // in the left frame. It may also be an array of
                                                        // db-names, where sorting order is relevant.
        $cfg['Servers'][$i]['hide_db'] = '';            // Database name to be hidden from listings
        $cfg['Servers'][$i]['verbose'] = '';            // Verbose name for this host - leave blank to show the hostname
        $cfg['Servers'][$i]['pmadb'] = '';              // Database used for Relation, Bookmark and PDF features
                                                        // (see scripts/create_tables.sql)
                                                        // - leave blank for no support. DEFAULT: 'phpmyadmin'
        $cfg['Servers'][$i]['bookmarktable'] = '';      // Bookmark table - leave blank for no bookmark support
                                                        // DEFAULT: 'pma_bookmark'
        $cfg['Servers'][$i]['relation'] = '';           // table to describe the relation between links (see doc)
                                                        // - leave blank for no relation-links support
                                                        // DEFAULT: 'pma_relation'
        $cfg['Servers'][$i]['table_info'] = '';         // table to describe the display fields
                                                        // - leave blank for no display fields support
                                                        // DEFAULT: 'pma_table_info'
        $cfg['Servers'][$i]['table_coords'] = '';       // table to describe the tables position for the PDF schema
                                                        // - leave blank for no PDF schema support
                                                        // DEFAULT: 'pma_table_coords'
        $cfg['Servers'][$i]['pdf_pages'] = '';          // table to describe pages of relationpdf
                                                        // - leave blank if you don't want to use this
                                                        // DEFAULT: 'pma_pdf_pages'
        $cfg['Servers'][$i]['column_info'] = '';        // table to store column information
                                                        // - leave blank for no column comments/mime types
                                                        // DEFAULT: 'pma_column_info'
        $cfg['Servers'][$i]['history'] = '';            // table to store SQL history
                                                        // - leave blank for no SQL query history
                                                        // DEFAULT: 'pma_history'
        $cfg['Servers'][$i]['verbose_check'] = TRUE;    // set to FALSE if you know that your pma_* tables
                                                        // are up to date. This prevents compatibility
                                                        // checks and thereby increases performance.
        $cfg['Servers'][$i]['AllowRoot'] = TRUE;        // whether to allow root login
        $cfg['Servers'][$i]['AllowDeny']['order'] = '';      // Host authentication order, leave blank to not use
        $cfg['Servers'][$i]['AllowDeny']['rules'] = array(); // Host authentication rules, leave blank for defaults
    Please let me know if you need any more info. -- Malcolm
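    As a follow-up, a quick throwaway script run from the phpMyAdmin box can separate a network/firewall problem from a permissions problem. The host, port and credentials below are just the placeholders from the config above:

        <?php
        // Hedged connectivity check for the remote MySQL server (values are placeholders).
        $host = 'XX.XX.XX.XX';   // same address as in config.inc.php
        $port = 3306;

        // 1) Is the port reachable at all? A failure here points at the firewall rules or at
        //    MySQL's bind-address / skip-networking settings, which is what a 2002-style
        //    "can't connect" usually means.
        $sock = @fsockopen($host, $port, $errno, $errstr, 5);
        echo $sock ? "TCP {$host}:{$port} reachable\n" : "TCP failed: {$errno} {$errstr}\n";

        // 2) Do the credentials work from this machine? A failure here points at the GRANTs
        //    ('USERNAME'@'<pma-host>' must be allowed to connect from here).
        $link = @mysql_connect("{$host}:{$port}", 'USERNAME', 'PASSWORD');
        echo $link ? "MySQL login OK\n" : 'MySQL error: ' . mysql_error() . "\n";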

  • Postfix not sending/allowing receiving of messages after server (hardware) changed

    - by 537mfb
    We had an old notebook running Ubuntu 12.04 working as a web/ftp/mail server, and it worked, but since the notebook was pretty old and unreliable, a desktop was bought to replace it before it stopped working altogether. Due to issues with the new desktop's video card we couldn't use Ubuntu 12.04, so we installed Ubuntu 13.10 and went about configuring it. Since we removed the notebook from the network, we kept the same computer name and local IP address to make things as close to the old server as possible configuration-wise. However, something has gone wrong: Postfix throws error 451 4.3.0 lookup failure on every attempt to send mail, and no email can be received either. Our main.cf file is a copy of the one we were using (and which worked) on the old server (note that we use EHCP):
        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific: Specifying a file name will cause the first
        # line of that file to be used as the name. The Debian default
        # is /etc/mailname.
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name powered by Easy Hosting Control Panel (ehcp) on Ubuntu, www.ehcp.net
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        myhostname = m21-traducoes.com.pt
        relayhost =
        mydestination = localhost, 89.152.248.139
        mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 89.152.248.0/24
        virtual_alias_domains =
        virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, proxy:mysql:/etc/postfix/mysql-virtual_email2email.cf
        transport_maps = proxy:mysql:/etc/postfix/mysql-virtual_transports.cf
        virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
        virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
        virtual_mailbox_base = /home/vmail
        virtual_uid_maps = static:5000
        virtual_gid_maps = static:5000
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        broken_sasl_auth_clients = yes
        smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
        smtp_use_tls = yes
        smtpd_use_tls = yes
        smtpd_tls_auth_only = no
        smtpd_tls_CAfile = /etc/postfix/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/smtpd.cert
        smtpd_tls_key_file = /etc/postfix/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        tls_random_source = dev:/dev/urandom
        virtual_create_maildirsize = yes
        virtual_mailbox_extended = yes
        virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailbox_limit_maps.cf
        virtual_mailbox_limit_override = yes
        virtual_maildir_limit_message = "The user you are trying to reach is over quota."
        virtual_overquota_bounce = yes
        debug_peer_list =
        sender_canonical_maps =
        debug_peer_level = 1
        proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $mynetworks $virtual_mailbox_limit_maps $transport_maps
        alias_maps = hash:/etc/aliases
        smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
        smtpd_destination_concurrency_limit = 2
        smtpd_destination_rate_delay = 1s
        smtpd_extra_recipient_limit = 10
        disable_vrfy_command = yes
        smtpd_delay_reject = yes
        smtpd_helo_required = yes
        smtpd_error_sleep_time = 1s
        smtpd_soft_error_limit = 10
        smtpd_hard_error_limit = 20
    This configuration was working before, but now every time I try to send a mail in SquirrelMail it reports:
        Message not sent. Server replied: Requested action aborted: error in processing 451 4.3.0 <[email protected]>: Temporary lookup failure
    and I can't send mail to it from outside either. Any ideas?
    EDIT: Here are some of the issues MXToolBox reports for my domain, hopefully answering @Teun Vink:
                   BlackList   Mail Server   Web Server   DNS
        Errors         4            0            2          0
        Warnings       0            0            0          3
        Passed         0            6            3         12
    So the domain is on some blacklist, but that doesn't explain the error at all. No mail server issues were found (except that it's not working). The two web server errors are because I don't have HTTPS working (no SSL certificate), so that test fails. The three DNS warnings were already there when it was working on the other machine and relate to things I can't control:
    - SOA Refresh Value is outside of the recommended range
    - SOA Expire Value out of recommended range
    - SOA NXDOMAIN Value too high
    I've searched, and as far as I can tell only the people we bought the domain through can change those values, and they won't.
    EDIT 2: I half-solved the issue. On the new machine Postfix was installed but postfix-mysql wasn't, so it couldn't connect to the database (rookie mistake). After fixing that, I can now send mail to the outside without any issues, but I am still not able to receive mail from outside. The sender doesn't get any non-delivery warning, but the message never lands in the inbox and the log shows:
        Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: NOQUEUE: reject: RCPT from relay4.ptmail.sapo.pt[212.55.154.24]: 451 4.3.5 <relay4.ptmail.sapo.pt[212.55.154.24]>: Client host rejected: Server configuration error; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<sapo.pt>
        Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: disconnect from relay4.ptmail.sapo.pt[212.55.154.24]
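    One assumption worth checking, since the restrictions above reference check_client_access hash:/var/lib/pop-before-smtp/hosts: Postfix typically answers 451 4.3.5 "Server configuration error" for every client when a lookup table it needs cannot be opened, and that map may never have been recreated on the new machine. Something along these lines (paths taken from main.cf above) would rebuild an empty map so the restriction can at least be evaluated:

        mkdir -p /var/lib/pop-before-smtp
        touch /var/lib/pop-before-smtp/hosts
        postmap hash:/var/lib/pop-before-smtp/hosts   # creates hosts.db next to the source file
        postfix reload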

  • Email from my new vps is marked as spam

    - by Chriswede
    I got a new VPS from x10vps (x10hosting) and set up the domain via CloudFlare. This is what the email looks like:
        Delivered-To: [email protected]
        Received: by 10.64.19.240 with SMTP id i16csp357708iee; Tue, 9 Oct 2012 01:29:48 -0700 (PDT)
        Received: by 10.50.57.130 with SMTP id i2mr908846igq.56.1349771387599; Tue, 09 Oct 2012 01:29:47 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id v8si25630942ica.46.2012.10.09.01.29.46 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:29:47 -0700 (PDT)
        Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116;
        Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]
        Received: from nk11p03mm-asmtp010.mac.com ([17.158.232.169]:54276) by power.SOURCEAPE.COM with esmtp (Exim 4.80) (envelope-from <[email protected]>) id 1TLVBD-0004Ig-1Y for [email protected]; Tue, 09 Oct 2012 12:28:43 +0400
    I then tried to enable SPF and DKIM and got the following message:
        In order to ensure that SPF or DKIM takes effect, you must confirm that this server is an authoritative nameserver for chvw.de. If you need help, contact your hosting provider.
        Status: Enabled
        Warning: cPanel is unable to verify that this server is an authoritative nameserver for chvw.de. [?]
    and the email header now looks like this:
        Delivered-To: [email protected]
        Received: by 10.50.183.227 with SMTP id ep3csp14506igc; Tue, 9 Oct 2012 01:55:23 -0700 (PDT)
        Received: by 10.50.40.133 with SMTP id x5mr992934igk.32.1349772923717; Tue, 09 Oct 2012 01:55:23 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from power.SOURCEAPE.COM ([198.91.90.116]) by mx.google.com with ESMTPS id ng8si25688859icb.42.2012.10.09.01.55.23 (version=TLSv1/SSLv3 cipher=OTHER); Tue, 09 Oct 2012 01:55:23 -0700 (PDT)
        Received-SPF: temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) client-ip=198.91.90.116;
        Authentication-Results: mx.google.com; spf=temperror (google.com: error in processing during lookup of [email protected]: DNS timeout) [email protected]; dkim=neutral (bad format) [email protected]
        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=chvw.de; s=default; h=Message-ID:Subject:To:From:Date:Content-Transfer-Encoding:Content-Type:MIME-Version; bh=iugsx3Lx0KnqjR7dj3wyQHnJ9pe/z3ntYEVk80k8rx4=; b=IrYsCtHdoPubXVOvLqxd7sLE/TyQTS5P3OrEg5SSUSKnQQcQ/fWWyBrmsrgkFSsw6jCmmRWMDR09vH5bQRpFPMA57B7pf8QRKhwXOWFBV+GnVUqICsfRjnNPvhx/lNp5;
        Received: from localhost ([127.0.0.1]:46539 helo=direct.chvw.de) by power.SOURCEAPE.COM with esmtpa (Exim 4.80) (envelope-from <[email protected]>) id 1TLVb0-0004dZ-Kd for [email protected]; Tue, 09 Oct 2012 12:55:22 +0400
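    Since cPanel reports that it is not authoritative for chvw.de, the SPF and DKIM records presumably have to be published wherever the zone is actually served (CloudFlare in this setup) rather than in cPanel. Roughly, the TXT records involved look like the sketch below -- the values are placeholders, and the DKIM public key would be the one cPanel generated for the 'default' selector. The "temperror ... DNS timeout" results suggest the TXT lookups are not being answered at all, which fits a nameserver mismatch rather than bad record contents.

        chvw.de.                     IN TXT "v=spf1 a mx ip4:198.91.90.116 ~all"
        default._domainkey.chvw.de.  IN TXT "v=DKIM1; k=rsa; p=<public key from cPanel>"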

  • What's going on with my server? High load, lots of idle CPU time, low disk utilization

    - by Jonathan
    I run a web site and send a legitimate opt-in, daily email newsletter to subscribers. Both the web hosting and email sending are done by the same machine. I have about 100,000 subscribers who have opted in to my daily email newsletter. My PHP script did a pretty good job sending mail to all of them until fairly recently, but as the list has grown I can't keep up. When I run top, I have very high load--usually at least 6 or 7, sometimes as high as 15--even though I only have two CPUs. However, when I run sar, my CPU is idle an average of about 30% of the time. So, it seems I'm not CPU bound. When I run iostat, it seems as though I'm not disk bound because my %util for each device is very low (no more than 5%). Given that I don't seem to be CPU bound or disk bound, why is top reporting such high load? Additionally, since I don't seem to be CPU bound or disk bound, why is my email sending script not able to keep up? Here's what I see when running top: top - 11:33:28 up 74 days, 18:49, 2 users, load average: 7.65, 8.79, 8.28 Tasks: 168 total, 5 running, 162 sleeping, 0 stopped, 1 zombie Cpu(s): 38.9%us, 58.6%sy, 0.8%ni, 0.0%id, 0.7%wa, 0.2%hi, 0.8%si, 0.0%st Mem: 3083012k total, 2144436k used, 938576k free, 281136k buffers Swap: 2048248k total, 39164k used, 2009084k free, 1470412k cached Here's what I see when running iostat -mx: avg-cpu: %user %nice %system %iowait %steal %idle 34.80 1.20 55.24 0.37 0.00 8.38 Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util sda 0.19 71.70 1.59 29.45 0.02 0.07 5.90 0.55 17.82 1.16 3.59 sda1 0.00 0.00 0.00 0.00 0.00 0.00 7.10 0.00 13.80 13.72 0.00 sda2 0.05 50.45 1.13 24.57 0.01 0.29 24.25 0.35 13.43 1.15 2.97 sda3 0.05 10.17 0.20 2.33 0.01 0.05 43.75 0.05 20.96 2.45 0.62 sda4 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.00 70.50 70.50 0.00 sda5 0.07 0.22 0.03 0.07 0.00 0.00 32.84 0.08 856.19 8.03 0.08 sda6 0.02 5.45 0.03 0.72 0.00 0.02 67.55 0.02 26.72 5.26 0.39 sda7 0.00 1.56 0.00 0.42 0.00 0.01 38.04 0.00 8.88 5.84 0.24 sda8 0.01 3.84 0.20 1.35 0.00 0.02 28.55 0.05 31.90 4.08 0.63 Here's what I see when running sar: 09:40:02 AM CPU %user %nice %system %iowait %steal %idle 09:50:01 AM all 30.59 1.01 49.80 0.23 0.00 18.37 10:00:08 AM all 31.73 0.92 51.66 0.13 0.00 15.55 10:10:06 AM all 30.43 0.99 48.94 0.26 0.00 19.38 10:20:01 AM all 29.58 1.00 47.76 0.25 0.00 21.42 10:30:01 AM all 29.37 1.02 47.30 0.18 0.00 22.13 10:40:06 AM all 32.50 1.01 52.94 0.16 0.00 13.39 10:50:01 AM all 30.49 1.00 49.59 0.15 0.00 18.77 11:00:01 AM all 29.43 0.99 47.71 0.17 0.00 21.71 11:10:07 AM all 30.26 0.93 49.48 0.83 0.00 18.50 11:20:02 AM all 29.83 0.81 48.51 1.32 0.00 19.52 11:30:06 AM all 31.18 0.88 51.33 1.15 0.00 15.47 Average: all 26.21 1.15 42.62 0.48 0.00 29.54 Here are the top handful of processes listed at the particular time I happened to run top -c: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 8180 mysql 16 0 57448 19m 2948 S 26.6 0.7 4702:26 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/bristno.pid --skip-external-locking 26956 brristno 17 0 0 0 0 Z 8.0 0.0 0:00.24 [php] <defunct> 26958 brristno 17 0 94408 43m 37m R 5.0 1.4 0:00.15 /usr/bin/php /home/brristno/public_html/dbv.php 22852 nobody 16 0 9628 2900 1524 S 0.7 0.1 0:00.17 /usr/local/apache/bin/httpd -k start -DSSL 8591 brristno 34 19 96896 13m 6652 S 0.3 0.4 0:29.82 /usr/local/bin/php /home/brristno/bin/mailer.php 1qwqyb6 i0gbor 24469 nobody 16 0 9628 2880 1508 S 0.3 0.1 0:00.08 /usr/local/apache/bin/httpd -k start -DSSL 25495 
nobody 15 0 9628 2876 1500 S 0.3 0.1 0:00.06 /usr/local/apache/bin/httpd -k start -DSSL 26149 nobody 15 0 9628 2864 1504 S 0.3 0.1 0:00.04 /usr/local/apache/bin/httpd -k start -DSSL
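    One thing that may explain the numbers: on Linux the load average counts tasks that are runnable plus tasks in uninterruptible sleep (state D), so load can sit well above the CPU count even while sar shows idle time and per-device %util stays low. A hedged snapshot command to see which of the two is piling up at a given moment:

        # R = runnable (queued for CPU), D = uninterruptible sleep (disk, NFS, some locks)
        ps -eo state,pid,user,wchan:30,cmd | awk '$1 ~ /^(R|D)/'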

  • Using PHP to connect to RADIUS works on one server but not another

    - by JDS
    I have a fleet of webservers that serve a LAMP webapp, broken into multiple customer apps by virtualhost/domain. The platform is Ubuntu 10.04 VM + PHP 5.3 + Apache 2.2.14, on top of VMware ESX (v4, I think). This stuff's not too important, though -- I'm just setting up the background.
    I have one customer that connects to a RADIUS server for authentication. We've found that the app responds as if some number of web servers are configured correctly and some are not, i.e. apparently random authentication failures or successes, with no rhyme or reason. I did a lot of analysis of our fleet and narrowed it down to the differences between two specific web servers. I'll call them "A" and "B". "A" works; "B" does not. "Works" means "connects to and gets authentication data successfully from the RADIUS server". Ultimately, I'm looking for the one thing that is different, and I've exhausted everything I can come up with, so I'm looking for something else. Here are the things I've looked at:
    - PHP package versions (all from Ubuntu repos). These are exactly the same across servers.
    - PECL packages. There are no PECL packages that aren't installed by apt.
    - Other libraries or packages. Nothing network-related or RADIUS-related was different among servers. (There were some minor package differences, though.)
    - Network or hosting environment. Some of the working servers are in the same physical environment as some of the not-working ones (i.e. the same ESX containers), so the physical network layer is probably not the problem.
    - Test case. I created the test case below. It works on the working servers and fails on the not-working servers, very consistently.
        <?php
        $radius = radius_auth_open();
        $username = 'theusername';
        $password = 'thepassword';
        $hostname = '12.34.56.78';
        $radius_secret = '39wmmvxghg';

        if (! radius_add_server($radius, $hostname, 0, $radius_secret, 5, 3)) {
            die('Radius Error 1: ' . radius_strerror($radius) . "\n");
        }
        if (! radius_create_request($radius, RADIUS_ACCESS_REQUEST)) {
            die('Radius Error 2: ' . radius_strerror($radius) . "\n");
        }
        radius_put_attr($radius, RADIUS_USER_NAME, $username);
        radius_put_attr($radius, RADIUS_USER_PASSWORD, $password);

        switch (radius_send_request($radius)) {
            case RADIUS_ACCESS_ACCEPT:
                echo 'GOOD LOGIN';
                break;
            case RADIUS_ACCESS_REJECT:
                echo 'BAD LOGIN';
                break;
            case RADIUS_ACCESS_CHALLENGE:
                echo 'CHALLENGE REQUESTED';
                break;
            default:
                die('Radius Error 3: ' . radius_strerror($radius) . "\n");
        }
        ?>

  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
    I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable). I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem but I'm mentioning here for completeness. My httpd.conf looks roughly like this: <VirtualHost *:80> ServerName external.example.com ServerAlias example.com # Serve all local content directly, reverse-proxy all unknown URIs. RewriteEngine On RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P] RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR] RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d RewriteRule ^.*$ - [L] RewriteRule ^/~ - [L] RewriteRule ^(.*)$ http://internal.example.com$1 [P] # Standard header rewriting. ProxyPassReverse / http://internal.example.com/foo/ ProxyPassReverseCookieDomain internal.example.com external.example.com ProxyPassReverseCookiePath /foo/ / # Strip any Accept-Encoding: headers from the client so we can process the pages # as plain text. RequestHeader unset Accept-Encoding # Use mod_proxy_html to fix URLs in text/html content. ProxyHTMLEnable On ProxyHTMLURLMap http://internal.example.com/foo/ / ProxyHTMLURLMap http://internal.example.com/foo / ProxyHTMLURLMap /foo/ / ## Use mod_substitute to fix URLs in CSS and Javascript #<Location /> # AddOutputFilterByType SUBSTITUTE text/css # AddOutputFilterByType SUBSTITUTE text/javascript # Substitute "s|http://internal.example.com/foo/|/|nq" #</Location> # Use mod_ext_filter to fix URLs in CSS and Javascript ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls" ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls" <Location /> SetOutputFilter fixurlcss;fixurljs </Location> </VirtualHost> The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session: $ wget -O /dev/null -S http://external.example.com/include/jquery.js --2013-11-01 11:36:36-- http://external.example.com/include/jquery.js Resolving external.example.com (external.example.com)... 192.168.0.1 Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected. HTTP request sent, awaiting response... 
HTTP/1.1 200 OK Date: Fri, 01 Nov 2013 15:36:36 GMT Server: Apache Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT ETag: "1d60026-187b8-4e9e0ec273e35" Accept-Ranges: bytes Vary: Accept-Encoding X-UA-Compatible: IE=edge,chrome=1 Content-Type: text/javascript;charset=utf-8 Connection: close Transfer-Encoding: chunked Length: unspecified [text/javascript] Saving to: `/dev/null' [ <=> ] 100,280 --.-K/s in 0.005s 2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success).Retrying. --2013-11-01 11:36:38-- (try: 2) http://external.example.com/include/jquery.js Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected. HTTP request sent, awaiting response... HTTP/1.1 416 Requested Range Not Satisfiable Date: Fri, 01 Nov 2013 15:36:38 GMT Server: Apache Vary: Accept-Encoding Content-Type: text/html;charset=utf-8 Content-Length: 260 Connection: close The file is already fully retrieved; nothing to do. Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs and upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)

  • Doing TDD Silverlight 4 RC using Visual Studio 2010 RC

    - by user133992
    First I am glad to see better TDD support in VS2010. Support for generating code stubs from my tests is ok - not as good as more mature TDD plug-ins but a good start. I am looking for some best Silverlight 4.0 TDD practices. First Question: Anyone have links, recommendations? I know the new Silverlight Unit Test capabilities are much better (Jeff Wilcox's Mix Presentation). What I am focusing on right now is using TDD to develop pure Silverlight 4.0 Class Library projects - projects without a Silverlight UI project. I've been able to get it to work but not as cleanly as it should be. I can create an Empty VS project. Add A Silverlight 4 Class Library Project. Add a TestProject (not a silverlight Unit Test Project but a plain Test Project). Add a simple test in the Test Project such as: namespace Calculator.Test { [TestClass] public class CalculatorTests { [TestMethod] public void CalulatorAddTest() { Calc c = new Calc(); int expected = 10; int actual = c.Add(6, 4); Assert.AreEqual<int>(expected, actual); } } } Using the new Generate Type and Method from Test feature it will generate the following code in the Silverlight Project: namespace Calculator { public class Calc { public int Add(int p, int p_2) { throw new NotImplementedException(); } } } When I run the tests the first time it says the target assembly is Silverlight and not able to run test - Not exact text but the same general idea. When I change the implementation to: namespace Calculator { public class Calc { public int Add(int p, int p_2) { return p + p_2; } } } and re-run the test, it works fine and the test goes green. It also works for all other TDD code I generate after. I also get a warning Mark in the Test Project's reference to the Calculator Silverlight Class Library Assembly. Second Question: Any comments ideas if this just a bug in VS2010 RC or is Silverlight Class Library TDD not really supported. I have not created a Silverlight UI project or changed and build or debug settings so I have no idea what is hosting the silverlight DLL. Finally, some of the Silverlight Class Libraries I need to write will provide functionality that requires elevated Out-Of-Browser rights. Based on the above, it looks like I can use TDD Test Projects against regular Silverlight 4.0 Class Libraries, but I have no idea how I can TDD the elevated OOB functionality without also creating the UI component that gets installed. The UI piece is not really needed for the Library development and gets in the way of what I actually want to TDD. I know I can (and will) mock some of that functionality but at some point I will also need the real thing in my tests. Third Question: Any ideas how to TDD Silverlight 4.0 Class Library project that requires OOB elevated rights? Thanks!

  • WCF wsHttpBinding "There was no channel that could accept the message with action"

    - by Steffen Schindler
    I have a webservice in IIS. I'm trying to call a function but i get an errormessage like: There was no channel that could accept the message with action 'http://Datenlotsen.Cyquest/ICyquestService/ValidateSelfAssessment' I'm hosting it in an IIS in the standard website. There I created a virtual directory named "CyQuestwebservice". For the client side config i'm using Soap UI. That Tool generates the client config from the wsdl. my webconfig looks like this, can you help me?: <system.serviceModel> <extensions> <behaviorExtensions> <add name="wsdlExtensions" type="WCFExtras.Wsdl.WsdlExtensionsConfig, WCFExtras, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> </behaviorExtensions> </extensions> <services> <service behaviorConfiguration="CyquestWebService.Service1Behavior" name="CyquestWebService.CyquestService"> <endpoint address="" behaviorConfiguration="EndPointBehavior" binding="wsHttpBinding" bindingNamespace="http://Datenlotsen.Cyquest" contract="CyquestWebService.ICyquestService"> <identity> <dns value="localhost" /> </identity> </endpoint> <endpoint address="mex" binding="mexHttpBinding" bindingNamespace="http://Datenlotsen.Cyquest" contract="IMetadataExchange" /> </service> </services> <behaviors> <endpointBehaviors> <behavior name="EndPointBehavior" > <wsdlExtensions location="http://wssdev04.datenlotsen.intern/Cyquestwebservice/CyquestService.svc" singleFile="True"/> </behavior> </endpointBehaviors> <serviceBehaviors> <behavior name="CyquestWebService.Service1Behavior"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> <system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true"> <listeners> <add name="traceListener" type="System.Diagnostics.XmlWriterTraceListener" initializeData= "c:\log\Traces.svclog" /> </listeners> </source> </sources> </system.diagnostics> </configuration>

  • Removing expired certificates from LDS (new ver of ADAM)

    - by jonthebrewer
    Hi all. This is my situation: we are in the process of replacing a certificate store currently hosted on Sun's iPlanet with Microsoft's Lightweight Directory Services (the new version of ADAM with Server 2008). These certificates have been imported into LDS into an application partition (say o=myorg, C=AU). Under this structure I have around 40,000 OUs, each one representing a customer; under each customer's OU are one or more user (iNetOrg) objects (around 60,000 in all). Each user holds one or more certificates in the userCertificate attribute. A combination of in-house application code and proprietary PKI code reads and publishes these certificates to validate financial transactions. As the LDAP path of the certificates is stored within the customer certificates (and within the application code), and there is zero appetite for changing any of the code, I have had to pick up the iPlanet directory as a whole and dump it into LDS in the same structure. (I will not be using or hosting a Microsoft CA, just implementing an LDAP-compliant directory to host these certificates.)
    We have fully tested the application using the data in LDS and everything works fine. Here is my dilemma and question (finally, phew!): there was no process in place for removing revoked or expired certificates, so the vast majority of the data is completely useless -- the system has been running for about 8 years! I have done a quick analysis and estimate that at least 80% of the data is no longer valid. As I am taking on responsibility for managing the directory, I would like to start with a clean one.
    Does anyone have any idea how I can clean up these expired certificates? I am not a highly experienced scripter but have some background in VB. I have been researching the use of CAPICOM and have a feeling it may be usable here, but in exactly what way I am not sure. I would prefer to write a script where I could specify an expiration date (say, any certs that expired prior to 2010) and then run it against the LDS partition. That way I can reuse the script periodically to clean up the directory (as mentioned above, I have no way to adjust the applications that are writing the certs; that is with a third party). Another, less attractive, alternative is to massage the LDIF file (2.7 million lines!) to rip the certs out prior to the import. Any help and advice much appreciated. Cheers, Jon

  • ASP.NET MVC 2 InputExtensions different on server than local machine

    - by Mike
    Hi everyone, So this is kind of a crazy problem to me, but I've had no luck Googling it. I have an ASP.NET MVC 2 application (under .NET 4.0) running locally just fine. When I upload it to my production server (under shared hosting) I get Compiler Error Message: CS1061: 'System.Web.Mvc.HtmlHelper' does not contain a definition for 'TextBoxFor' and no extension method 'TextBoxFor' accepting a first argument of type 'System.Web.Mvc.HtmlHelper' could be found (are you missing a using directive or an assembly reference?) for this code: <%= this.Html.TextBoxFor(person => person.LastName) %> This is one of the new standard extension methods in MVC 2. So I wrote some diagnostic code: System.Reflection.Assembly ass = System.Reflection.Assembly.GetAssembly(typeof(InputExtensions)); Response.Write("From GAC: " + ass.GlobalAssemblyCache.ToString() + "<br/>"); Response.Write("ImageRuntimeVersion: " + ass.ImageRuntimeVersion.ToString() + "<br/>"); Response.Write("Version: " + System.Diagnostics.FileVersionInfo.GetVersionInfo(ass.Location).ToString() + "<br/>"); foreach (var method in typeof(InputExtensions).GetMethods()) { Response.Write(method.Name + "<br/>"); } running locally (where it works fine), I get this as output: From GAC: True ImageRuntimeVersion: v2.0.50727 Version: File: C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\2.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll InternalName: System.Web.Mvc.dll OriginalFilename: System.Web.Mvc.dll FileVersion: 2.0.50217.0 FileDescription: System.Web.Mvc.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.50217.0 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral CheckBox CheckBox CheckBox CheckBox CheckBox CheckBox CheckBoxFor CheckBoxFor CheckBoxFor Hidden Hidden Hidden Hidden HiddenFor HiddenFor HiddenFor Password Password Password Password PasswordFor PasswordFor PasswordFor RadioButton RadioButton RadioButton RadioButton RadioButton RadioButton RadioButtonFor RadioButtonFor RadioButtonFor TextBox TextBox TextBox TextBox TextBoxFor TextBoxFor TextBoxFor ToString Equals GetHashCode GetType and when running on the production server (where it fails), I see: From GAC: True ImageRuntimeVersion: v2.0.50727 Version: File: C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\2.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll InternalName: System.Web.Mvc.dll OriginalFilename: System.Web.Mvc.dll FileVersion: 2.0.41001.0 FileDescription: System.Web.Mvc.dll Product: Microsoft® .NET Framework ProductVersion: 2.0.41001.0 Debug: False Patched: False PreRelease: False PrivateBuild: False SpecialBuild: False Language: Language Neutral CheckBox CheckBox CheckBox CheckBox CheckBox CheckBox Hidden Hidden Hidden Hidden Hidden Hidden Password Password Password Password RadioButton RadioButton RadioButton RadioButton RadioButton RadioButton TextBox TextBox TextBox TextBox ToString Equals GetHashCode GetType note that "TextBoxFor" is not present (hence the error). I have MVC referenced in the csproj: <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL"> <SpecificVersion>True</SpecificVersion> <HintPath>References\System.Web.Mvc.dll</HintPath> <Private>True</Private> </Reference> I just can't figure it what to do next. Thoughts? Thanks! -Mike

  • Interesting issue with WCF wsHttpBinding through a Firewall

    - by Marko
    I have a web application deployed in an internet hosting provider. This web application consumes a WCF Service deployed at an IIS server located at my company’s application server, in order to have data access to the company’s database, the network guys allowed me to expose this WCF service through a firewall for security reasons. A diagram would look like this. [Hosted page] --- (Internet) --- |Firewall <Public IP>:<Port-X >| --- [IIS with WCF Service <Comp. Network Ip>:<Port-Y>] link text I also wanted to use wsHttpBinding to take advantage of its security features, and encrypt sensible information. After trying it out I get the following error: Exception Details: System.ServiceModel.EndpointNotFoundException: The message with To 'http://<IP>:<Port>/service/WCFService.svc' cannot be processed at the receiver, due to an AddressFilter mismatch at the EndpointDispatcher. Check that the sender and receiver's EndpointAddresses agree. Doing some research I found out that wsHttpBinding uses WS-Addressing standards, and reading about this standard I learned that the SOAP header is enhanced to include tags like ‘MessageID’, ‘ReplyTo’, ‘Action’ and ‘To’. So I’m guessing that, because the client application endpoint specifies the Firewall IP address and Port, and the service replies with its internal network address which is different from the Firewall’s IP, then WS-Addressing fires the above message. Which I think it’s a very good security measure, but it’s not quite useful in my scenario. Quoting the WS-Addressing standard submission (http://www.w3.org/Submission/ws-addressing/) "Due to the range of network technologies currently in wide-spread use (e.g., NAT, DHCP, firewalls), many deployments cannot assign a meaningful global URI to a given endpoint. To allow these ‘anonymous’ endpoints to initiate message exchange patterns and receive replies, WS-Addressing defines the following well-known URI for use by endpoints that cannot have a stable, resolvable URI. http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous" HOW can I configure my wsHttpBinding Endpoint to address my Firewall’s IP and to ignore or bypass the address specified in the ‘To’ WS-Addressing tag in the SOAP message header? Or do I have to change something in my service endpoint configuration? Help and guidance will be much appreciated. Marko. P.S.: While I find any solution to this, I’m using basicHttpBinding with absolutely no problem of course.

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception? Consider a WCF service that is supposed to use Powershell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for Powershell and via Powershell for MSBuild), rather than 'shelling-out' - this was a specific design decision to avoid command-line hell as well as to enable passing actual objects into the Powershell script. The Powershell script that calls MSBuild is shown below: function Run-MSBuild { [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine") $engine = New-Object Microsoft.Build.BuildEngine.Engine $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5" $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5") $project.Load("deploy.targets") $project.InitialTargets = "DoStuff" # # Set some initial Properties & Items # # Optionally setup some loggers (have also tried it without any loggers) $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger $engine.RegisterLogger($consoleLogger) $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger $fileLogger.Parameters = "verbosity=diagnostic" $engine.RegisterLogger($fileLogger) # Run the build - this is the line that throws a CmdletInvocationException $result = $project.Build() $engine.Shutdown() } When running the above script from a PS command prompt it all works fine. However, as soon as the script is executed from C# it fails with the above exception. The C# code being used to call Powershell is shown below (remoting functionality removed for simplicity's sake): // Build the DTO object that will be passed to Powershell dto = SetupDTO() RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create(); using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig)) { runspace.Open(); IList errors; using (var scriptInvoker = new RunspaceInvoke(runspace)) { // The Powershell script lives in a file that gets compiled as an embedded resource TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource")); string script = tr.ReadToEnd(); // Load the script into the Runspace scriptInvoker.Invoke(script); // Call the function defined in the script, passing the DTO as an input object var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors); } } Assuming that the issue was related to MSBuild outputting something that the Powershell runspace can't cope with, I have also tried the following variations to the second .Invoke() call: var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors); var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors); var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String"); var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String"); var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null"); I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls to it being made. Do the great and the good of Stackoverflow have any insight that might save my sanity?

  • Some help needed with setting up the PERFECT workflow for web development with 2-3 guys using subversion

    - by Roeland
    Hey guys! I run a small web development company alongside my brother and a friend. After doing extensive research I have decided on using Subversion for version control. Here is how I currently plan on running typical development; keep in mind there are three of us, each in a separate location.
    I set up an account with Springloops (springloops.com) Subversion hosting. Each time I work on a new project, I create a repository for it. So let's say in this case I am working on site1. I want to have three versions of the site on the internet:
    - Web Development - the server me and the other developers publish to (site1.dev.bythepixel.com).
    - Client Preview - the server we update every few days with a good revision for the client to see (site1.bythepixel.com).
    - Live Site - the site I publish to when going live (site1.com).
    Each web development machine (at each location) will have a local copy of XAMPP running virtual hosts to allow multiple websites to be worked on. The root of the local copy is set up to be the same as the local copy of the Subversion repository. This is set up so we can make small tweaks and preview them immediately. When some work has been done, a commit is made to the repository for the site. I will have the dev site pushed automatically (it's an option in Springloops). Then, whenever I feel ready to push to the client site, I will do so.
    Now, I have a few concerns with this workflow:
    - I am currently using CodeIgniter, and in the config file I generally set the root of the site, e.g. http://www.site1.com. So it looks like each time I publish to one of the internet servers, I will have to modify the config file? Is there any way to make it so certain files are set for each server, so that when I hit publish to client preview it just uploads the config file for the client preview server? (See the sketch after this post.)
    - I don't want the live site, the client preview site and the dev site to share the same MySQL server, for a variety of reasons. So does this once again mean that I have to adjust the DB server info each time I push to a different site?
    Does this workflow make sense? If you have any suggestions, please let me know. I plan for this to be the workflow I use for the next few years; I just need to put a system in place that allows for future expansion! Thanks a bunch!!
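    A common way around the per-server config problem is to pick the settings from the host name at runtime, so the same files can be committed and deployed everywhere without editing at publish time. A rough sketch -- host names as above; the ENVIRONMENT constant and the file layout are illustrative assumptions, not something this CodeIgniter setup already has:

        <?php
        // index.php -- decide which environment we are on from the request host (hypothetical hosts).
        switch ($_SERVER['HTTP_HOST']) {
            case 'site1.dev.bythepixel.com':
                define('ENVIRONMENT', 'development');
                break;
            case 'site1.bythepixel.com':
                define('ENVIRONMENT', 'preview');
                break;
            default:
                define('ENVIRONMENT', 'production');
        }

        // application/config/config.php -- base_url no longer needs hand-editing per publish.
        $config['base_url'] = 'http://' . $_SERVER['HTTP_HOST'] . '/';

        // application/config/database.php -- one connection block per environment, keyed off
        // ENVIRONMENT, so each server talks to its own MySQL instance (names are placeholders).
        $db_settings = array(
            'development' => array('hostname' => 'localhost',  'database' => 'site1_dev'),
            'preview'     => array('hostname' => 'db-preview', 'database' => 'site1_preview'),
            'production'  => array('hostname' => 'db-live',    'database' => 'site1'),
        );
        $db['default'] = $db_settings[ENVIRONMENT];   // merge with username/password/driver as needed

    That also covers the MySQL concern: each environment simply points at its own database server, with no per-publish edits.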

  • How can I load style resources from a dynamically loaded Silverlight application (XAP)?

    - by Tom
    I've followed Tim Heuer's video for dynamically loading other XAP's (into a 'master' Silverlight application), as well as some other links to tweak the loading of resources and am stuck on the particular issue of loading style resources from within the dynamically loaded XAP (i.e. the contents of Assets\Styles.xaml). When I run the master/hosting applcation, it successfully streams the dynamic XAP and I can read the deployment info etc. and load the assembly parts. However, when I actuall try to create an instance of a form from the Dynamic XAP, it fails with Cannot find a Resource with the Name/Key LayoutRootGridStyle which is in it's Assets\Styles.xaml file (it works if I run it directly so I know it's OK). For some reason these don't show up as application resources - not sure if I've totally got the wrong end of the stick, or am just missing something? Code snippet below (apologies it's a bit messy - just trying to get it working first) ... '' # Here's the code that reads the dynamic XAP from the web server ... '' #... wCli = New WebClient AddHandler wCli.OpenReadCompleted, AddressOf OpenXAPCompleted wCli.OpenReadAsync(New Uri("MyTest.xap", UriKind.Relative)) '' #... '' #Here's the sub that's called when openread is completed '' #... Private Sub OpenXAPCompleted(ByVal sender As Object, ByVal e As System.Net.OpenReadCompletedEventArgs) Dim sManifest As String = New StreamReader(Application.GetResourceStream(New StreamResourceInfo(e.Result, Nothing), New Uri("AppManifest.xaml", UriKind.Relative)).Stream).ReadToEnd Dim deploymentRoot As XElement = XDocument.Parse(sManifest).Root Dim deploymentParts As List(Of XElement) = _ (From assemblyParts In deploymentRoot.Elements().Elements() Select assemblyParts).ToList() Dim oAssembly As Assembly = Nothing For Each xElement As XElement In deploymentParts Dim asmPart As AssemblyPart = New AssemblyPart() Dim source As String = xElement.Attribute("Source").Value Dim sInfo As StreamResourceInfo = Application.GetResourceStream(New StreamResourceInfo(e.Result, "application/binary"), New Uri(source, UriKind.Relative)) If source = "MyTest.dll" Then oAssembly = asmPart.Load(sInfo.Stream) Else asmPart.Load(sInfo.Stream) End If Next Dim t As Type() = oAssembly.GetTypes() Dim AppClass = (From parts In t Where parts.FullName.EndsWith(".App") Select parts).SingleOrDefault() Dim mykeys As Array If Not AppClass Is Nothing Then Dim a As Application = DirectCast(oAssembly.CreateInstance(AppClass.FullName), Application) For Each strKey As String In a.Resources.Keys If Not Application.Current.Resources.Contains(strKey) Then Application.Current.Resources.Add(strKey, a.Resources(strKey)) End If Next End If Dim objectType As Type = oAssembly.GetType("MyTest.MainPage") Dim ouiel = Activator.CreateInstance(objectType) Dim myData As UIElement = DirectCast(ouiel, UIElement) Me.splMain.Children.Add(myData) Me.splMain.UpdateLayout() End Sub '' #... '' # And here's the line that fails with "Cannot find a Resource with the Name/Key LayoutRootGridStyle" '' # ... System.Windows.Application.LoadComponent(Me, New System.Uri("/MyTest;component/MainPage.xaml", System.UriKind.Relative)) '' #... any thoughts?

    Read the article

  • Adding OutputCache to a WebForm crashes my site :(

    - by Pure.Krome
    Hi folks, when I add either

        <%@ OutputCache Duration="600" Location="Any" VaryByParam="*" %>

    or

        <%@ OutputCache CacheProfile="CmsArticlesListOrItem" %>

    (and this into the web.config file):

        <caching>
          <outputCacheSettings>
            <outputCacheProfiles>
              <add name="CmsArticlesListOrItem" duration="600" varyByParam="*" />
            </outputCacheProfiles>
          </outputCacheSettings>
          <sqlCacheDependency ........ ></sqlCacheDependency
        </caching>

    my page/site crashes with the following error:

        Source: System.Web
        TargetSite: System.Web.DirectoryMonitor FindDirectoryMonitor(System.String, Boolean, Boolean)
        Message: System.Web.HttpException: Directory 'C:\Web Sites\My Site Foo - Main Site\Controls\InfoAdvice' does not exist. Failed to start monitoring file changes.
           at System.Web.FileChangesMonitor.FindDirectoryMonitor(String dir, Boolean addIfNotFound, Boolean throwOnError)
           at System.Web.FileChangesMonitor.StartMonitoringPath(String alias, FileChangeEventHandler callback, FileAttributesData& fad)
           at System.Web.Caching.CacheDependency.Init(Boolean isPublic, String[] filenamesArg, String[] cachekeysArg, CacheDependency dependency, DateTime utcStart)
           at System.Web.Caching.CacheDependency..ctor(Int32 dummy, String[] filenames, DateTime utcStart)
           at System.Web.Hosting.MapPathBasedVirtualPathProvider.GetCacheDependency(String virtualPath, IEnumerable virtualPathDependencies, DateTime utcStart)
           at System.Web.ResponseDependencyList.CreateCacheDependency(CacheDependencyType dependencyType, CacheDependency dependency)
           at System.Web.HttpResponse.CreateCacheDependencyForResponse(CacheDependency dependencyVary)
           at System.Web.Caching.OutputCacheModule.InsertResponse(HttpResponse response, HttpContext context, String keyRawResponse, HttpCachePolicySettings settings, CachedVary cachedVary, CachedRawResponse memoryRawResponse)
           at System.Web.Caching.OutputCacheModule.OnLeave(Object source, EventArgs eventArgs)
           at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    OK, so for some reason the OutputCache wants a folder/file to be there? Well, I've had this site live for around 3 years and I'm pretty sure the folders \Controls and \Controls\InfoAdvice don't exist on my production server. On my localhost they sure do, and they contain a large list of ascx controls. But they don't exist on my live server. So what is going on here? Can anyone please help? Oh :) Before someone suggests I create those two folders and even stick a random file with some random text in each of them: I've done that and it doesn't seem to work, still :( Please help!
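
    For reference, a rough code-behind equivalent of the failing directive is shown below. It is a sketch only (the page class name is made up), and it only approximates the same caching policy programmatically, which can help isolate whether the profile lookup or the dependency monitoring is what actually blows up.

        using System;
        using System.Web;
        using System.Web.UI;

        // Hypothetical page: roughly what <%@ OutputCache Duration="600" Location="Any" VaryByParam="*" %> requests.
        public partial class CmsArticlesPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Response.Cache.SetCacheability(HttpCacheability.Public);      // Location="Any"
                Response.Cache.SetExpires(DateTime.UtcNow.AddSeconds(600));   // Duration="600"
                Response.Cache.SetValidUntilExpires(true);
                Response.Cache.VaryByParams["*"] = true;                      // VaryByParam="*"
            }
        }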

    Read the article

  • Forms Authentication logs out very quickly, works fine locally!!!

    - by user319075
    Hello to all, there's a problem I am facing with my hosting company. I have a project that uses FormsAuthentication, and the problem is that although it logs in successfully, it logs out VERY QUICKLY, and I don't know what could be the cause of that. In my web.config file I added these lines:

        <authentication mode="Forms" >
          <forms name="Nadim" loginUrl="Login.aspx" defaultUrl="Default.aspx" protection="All" path="/" requireSSL="false"/>
        </authentication>
        <authorization>
          <deny users ="?" />
        </authorization>
        <sessionState mode="StateServer" stateConnectionString="tcpip=localhost:42424" cookieless="false" timeout="1440">
        </sessionState>

    and this is the code I use in my custom login page:

        protected void PasswordCustomValidator_ServerValidate(object source, ServerValidateEventArgs args)
        {
            try
            {
                UsersSqlDataSource.SelectParameters.Clear();
                UsersSqlDataSource.SelectCommand = "Select * From Admins Where AdminID='" + IDTextBox.Text + "' and Password='" + PassTextBox.Text + "'";
                UsersSqlDataSource.SelectCommandType = SqlDataSourceCommandType.Text;
                UsersSqlDataSource.DataSourceMode = SqlDataSourceMode.DataReader;
                reader = (SqlDataReader)UsersSqlDataSource.Select(DataSourceSelectArguments.Empty);
                if (reader.HasRows)
                {
                    reader.Read();
                    if (RememberCheckBox.Checked == true)
                        Page.Response.Cookies["Admin"].Expires = DateTime.Now.AddDays(5);
                    args.IsValid = true;
                    string userData = "ApplicationSpecific data for this user.";
                    FormsAuthenticationTicket ticket1 = new FormsAuthenticationTicket(1, IDTextBox.Text, System.DateTime.Now, System.DateTime.Now.AddMinutes(30), true, userData, FormsAuthentication.FormsCookiePath);
                    string encTicket = FormsAuthentication.Encrypt(ticket1);
                    Response.Cookies.Add(new HttpCookie(FormsAuthentication.FormsCookieName, encTicket));
                    Response.Redirect(FormsAuthentication.GetRedirectUrl(IDTextBox.Text, RememberCheckBox.Checked));
                    //FormsAuthentication.RedirectFromLoginPage(IDTextBox.Text, RememberCheckBox.Checked);
                }
                else
                    args.IsValid = false;
            }
            catch (SqlException ex)
            {
                ErrorLabel.Text = ex.Message;
            }
            catch (InvalidOperationException)
            {
                args.IsValid = false;
            }
            catch (Exception ex)
            {
                ErrorLabel.Text = ex.Message;
            }
        }

    Also, you will find that this line:

        FormsAuthentication.RedirectFromLoginPage(IDTextBox.Text, RememberCheckBox.Checked);

    is commented out because I thought there might be something wrong with the ticket when I log in, so I created it manually. Everything I know, I tried, but nothing worked. Does anyone have any idea what the problem is? Thanks in advance, Baher.
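
    Not a diagnosis of this specific host, but two things commonly behind "logs out very quickly" on shared hosting are (a) an auto-generated machineKey that can change when the application restarts on some shared hosts, invalidating every existing ticket, and (b) a forms-auth cookie whose lifetime does not match the ticket's, so the cookie disappears while the ticket is still valid. The following C# sketch (names such as timeoutMinutes are illustrative, not from the question) shows the second point, issuing the cookie so its expiry tracks the ticket:

        using System;
        using System.Web;
        using System.Web.Security;

        public static class AuthCookieHelper
        {
            // Minimal sketch: issue a forms-auth cookie whose lifetime matches the ticket's.
            public static void IssueAuthCookie(HttpResponse response, string userName,
                                               bool persistent, int timeoutMinutes)
            {
                var ticket = new FormsAuthenticationTicket(
                    1, userName, DateTime.Now, DateTime.Now.AddMinutes(timeoutMinutes),
                    persistent, "user data", FormsAuthentication.FormsCookiePath);

                var cookie = new HttpCookie(FormsAuthentication.FormsCookieName,
                                            FormsAuthentication.Encrypt(ticket));
                if (persistent)
                    cookie.Expires = ticket.Expiration; // otherwise it is a session cookie

                response.Cookies.Add(cookie);
            }
        }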

    Read the article

  • How to inherit from DataAnnotations.ValidationAttribute (it appears SecurityCritical under Visual Studio)

    - by codetuner
    Hi, I have an [AllowPartiallyTrustedCallers] class library containing subtypes of System.ComponentModel.DataAnnotations.ValidationAttribute. The library is used on contract types of WCF services. In .NET 2/3.5 this worked fine. Since .NET 4.0, however, running a client of the service in the Visual Studio debugger results in the exception "Inheritance security rules violated by type: '(my subtype of ValidationAttribute)'. Derived types must either match the security accessibility of the base type or be less accessible." (System.TypeLoadException). The error appears to occur only when all of the following conditions are met:

    - a subclass of ValidationAttribute is in an AllowPartiallyTrustedCallers assembly
    - reflection is used to check for the attribute
    - the Visual Studio hosting process is enabled (checkbox on Project properties, Debug tab)

    So basically, in Visual Studio 2010: create a new Console project, add a reference to "System.ComponentModel.DataAnnotations" 4.0.0.0, and write the following code:

        using System;

        [assembly: System.Security.AllowPartiallyTrustedCallers()]

        namespace TestingVaidationAttributeSecurity
        {
            public class MyValidationAttribute : System.ComponentModel.DataAnnotations.ValidationAttribute
            {
            }

            [MyValidation]
            public class FooBar
            {
            }

            class Program
            {
                static void Main(string[] args)
                {
                    Console.WriteLine("ValidationAttribute IsCritical: {0}",
                        typeof(System.ComponentModel.DataAnnotations.ValidationAttribute).IsSecurityCritical);
                    FooBar fb = new FooBar();
                    fb.GetType().GetCustomAttributes(true);
                    Console.WriteLine("Press enter to end.");
                    Console.ReadLine();
                }
            }
        }

    Press F5 and you get the exception! Press Ctrl-F5 (start without debugging), and it all works fine without an exception... The strange thing is that the ValidationAttribute will or will not be security-critical depending on the way you run the program (F5 or Ctrl+F5), as illustrated by the Console.WriteLine in the above code. But then again, this appears to happen with other attributes (and types?) too. Now the questions... Why do I get this behaviour when inheriting from ValidationAttribute, but not when inheriting from System.Attribute? (Using Reflector I don't find special settings on the ValidationAttribute class or its assembly.) And what can I do to solve this? How can I keep MyValidationAttribute inheriting from ValidationAttribute in an AllowPartiallyTrustedCallers assembly without marking it SecurityCritical, still using the new .NET 4 level 2 security model, and still have it work under the VS.NET debug host (or other hosts)? Thanks a lot! Rudi
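
    One workaround that often comes up for this error, shown here purely for comparison because it relaxes exactly the level-2 transparency model the question wants to keep, is to opt the assembly back into the CLR 2.0 ("level 1") security rules at the assembly level:

        using System.Security;

        // Sketch of the assembly-level opt-out. Under SecurityRuleSet.Level1 the
        // .NET 2.0 transparency rules apply, so the inheritance check that throws
        // the TypeLoadException above no longer fires -- at the cost of the
        // level-2 model the question explicitly wants to retain.
        [assembly: AllowPartiallyTrustedCallers]
        [assembly: SecurityRules(SecurityRuleSet.Level1)]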

    Read the article

  • My ashx stopped working (Works locally, but not online)

    - by Madi D.
    I have a simple ASHX file that returns pictures upon request (mainly a counter). Previously the code simply fetched pre-made pictures (already uploaded and available) and sent them to the requester. I just modified the code so it would take a base picture, do some modifications on it, save it to the server, then serve it to the user. Tested it locally and it worked like a charm; however, when I uploaded the code to my hosting service (GoDaddy) it didn't work. Can someone point the problem out to me? Note: ASHX worked for me before, so I know the web.config and IIS are handling them properly; however, I think I am missing something. Code:

        <%@ WebHandler Language="C#" Class="NWEmpPhotoHandler" %>

        using System;
        using System.Web;
        using System.IO;
        using System.Drawing;
        using System.Drawing.Imaging;

        public class NWEmpPhotoHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext ctx)
            {
                try
                {
                    // Some code to fetch # of visitors from DB
                    int x = 10000; // # of visitors, fetched from DB
                    string numberOfVistors = (1000 + x).ToString();
                    string filePath = ctx.Server.MapPath("counter.jpg");
                    Bitmap myBitmap = new Bitmap(filePath);
                    Graphics g = Graphics.FromImage(myBitmap);
                    g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
                    StringFormat strFormat = new StringFormat();
                    g.DrawString(numberOfVistors, new Font("Tahoma", 24), Brushes.Maroon, new RectangleF(55, 82, 500, 500), null);
                    string PathToSave = ctx.Server.MapPath("counter-" + numberOfVistors + ".jpg");
                    saveJpeg(PathToSave, myBitmap, 100);
                    if (File.Exists(PathToSave))
                    {
                        ctx.Response.ContentType = "image/jpg";
                        ctx.Response.WriteFile(PathToSave);
                        //ctx.Response.OutputStream.Write(picByteArray, 0, picByteArray.Length);
                    }
                }
                catch (ArgumentException exe)
                {
                }
            }

            private ImageCodecInfo getEncoderInfo(string mimeType)
            {
                // Get image codecs for all image formats
                ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
                // Find the correct image codec
                for (int i = 0; i < codecs.Length; i++)
                    if (codecs[i].MimeType == mimeType)
                        return codecs[i];
                return null;
            }

            private void saveJpeg(string path, Bitmap img, long quality)
            {
                // Encoder parameter for image quality
                EncoderParameter qualityParam = new EncoderParameter(Encoder.Quality, quality);
                // Jpeg image codec
                ImageCodecInfo jpegCodec = this.getEncoderInfo("image/jpeg");
                if (jpegCodec == null)
                    return;
                EncoderParameters encoderParams = new EncoderParameters(1);
                encoderParams.Param[0] = qualityParam;
                img.Save(path, jpegCodec, encoderParams);
            }
        }
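
    Not a confirmed diagnosis, but a frequent difference between a local setup and shared hosting such as GoDaddy is that the worker process may not have write permission to the application folder, so the saveJpeg call can fail and the empty catch block hides it. One way to take the file system out of the picture is to stream the generated JPEG straight to the response. The sketch below drops the quality-encoder step for brevity, and the helper name is made up:

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;
        using System.Web;

        public static class CounterImageWriter
        {
            // Sketch: render the counter text onto the base image and write the JPEG
            // bytes directly to the response, without saving anything to disk.
            public static void WriteCounter(HttpContext ctx, string numberOfVisitors)
            {
                string basePath = ctx.Server.MapPath("counter.jpg");

                using (var bitmap = new Bitmap(basePath))
                using (var g = Graphics.FromImage(bitmap))
                using (var ms = new MemoryStream())
                {
                    g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias;
                    g.DrawString(numberOfVisitors, new Font("Tahoma", 24), Brushes.Maroon,
                                 new RectangleF(55, 82, 500, 500));

                    bitmap.Save(ms, ImageFormat.Jpeg); // in-memory only

                    ctx.Response.ContentType = "image/jpeg";
                    ms.WriteTo(ctx.Response.OutputStream);
                }
            }
        }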

    Read the article

  • How do you fix an SVN 409 Conflict Error

    - by NerdStarGamer
    I used to use SVN 1.4 on OS X Leopard and everything was fine. A couple of weeks ago I installed a fresh copy of OS X 10.6. The version of SVN that comes with Snow Leopard is 1.6.5; I went ahead and built my own copy of 1.6.6. I'm using the built-in Apache server and just hosting repositories locally. Everything appeared to work fine until I actually tried to commit something. Every time I try to commit a change, I get the following message:

        Transmitting file data .
        svn: Commit failed (details follow):
        svn: MERGE of '/svn/svn2': 409 Conflict (http://localhost)

    This happens with my old repositories, so I created a couple of new ones. Same deal. I also tried using the 1.6.5 version that comes with the system... same. Finally, I tried upgrading to the latest stable SVN (1.6.9) and still got the same problem. The Apache error log shows the following for each failed commit:

        [Mon Mar 29 19:53:10 2010] [error] [client ::1] Could not MERGE resource "/svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1" into "/svn/svn2".  [409, #0]
        [Mon Mar 29 19:53:10 2010] [error] [client ::1] An error occurred while committing the transaction.  [409, #2]
        [Mon Mar 29 19:53:10 2010] [error] [client ::1] Can't open directory '/usr/local/svn/svn2/db/transactions/5-6.txn/\xeb\xa9\x0f\x1f': No such file or directory  [409, #2]
        [Mon Mar 29 19:53:11 2010] [error] [client ::1] Could not DELETE /svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1.  [500, #0]
        [Mon Mar 29 19:53:11 2010] [error] [client ::1] could not open transaction.  [500, #2]
        [Mon Mar 29 19:53:11 2010] [error] [client ::1] Can't open file '/usr/local/svn/svn2/db/transactions/5-6.txn/props': No such file or directory  [500, #2]

    And from the access log:

        ::1 - - [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 401 401
        ::1 - user [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 200 188
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/vcc/default HTTP/1.1" 207 398
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/bln/6 HTTP/1.1" 207 449
        ::1 - user [30/Mar/2010:13:02:20 -0400] "REPORT /svn/svn2/!svn/vcc/default HTTP/1.1" 200 1172

    Curiously, the commit does actually commit the changes, but the working copy doesn't see that and everything gets screwy. I've tried to Google every variation I can think of for this problem, but the search results are pretty much useless. I'm not using TortoiseSVN or anything special, and commits fail on a new repository, so I know it's not a problem with my old repos. Any help would be greatly appreciated.

    Read the article

  • Classic ASP vs. ASP.NET encryption options

    - by harrije
    I'm working on a web site where the new pages are ASP.NET and the legacy pages are Classic ASP. Being new to development in the Windows environment, I've been studying the latest technology, i.e. .NET, and I become like a deer in headlights whenever legacy issues come up regarding COM objects. Security on the website is an abomination, but I've easily encrypted the connectionStrings in the web.config file per http://www.4guysfromrolla.com/articles/021506-1.aspx based on DPAPI machine mode. I understand this approach is not the most secure, but it's better than nothing, which is what it was for the ASP.NET pages. Now I question how to do similar encryption for the connection strings used by the Classic ASP pages. A complicating factor is that the web site is hosted where I do not have admin permissions or even command line access, just FTP. Moreover, I want to avoid managing the key. My research has found:

    1. DPAPI with COM interop. Seems like this should already be available, but the only thing I could find discussing this is CyptoUtility (see http://msdn.microsoft.com/en-us/magazine/cc163884.aspx), which is not installed on the hosting server.
    2. There are plenty of other third-party COM objects, e.g. Crypto from Dalun Software (http://www.dalun.com), but these aren't on the hosted server either, and they look to me to require you to do some kind of key management.
    3. There is CAPICOM on the hosted server, but M$ has deprecated it and many report it is not the easiest to use. It is not clear to me whether I can avoid key management with CAPICOM similar to using DPAPI for ASP.NET. If anyone happens to know, please clue me in.
    4. I could write a web service in ASP.NET and have the Classic ASP pages use it to get the decrypted connection strings and then store those in an application variable (a rough sketch of this option follows below). I would not need to use SSL since I could use localhost and nothing would be sent over the internet. In the simplest form I could implement what someone termed a poor man's version based on a simple XML stream; however, I really was looking to avoid any development, since I find it hard to believe there is not a simple solution for Classic ASP like there is for ASP.NET.

    Maybe I'm missing some options... Recommendations are requested...
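
    Picking up option 4, here is a hedged C# sketch of the kind of localhost-only bridge described: an ASP.NET handler that returns the connection string (ASP.NET decrypts protected web.config sections transparently when they are read through ConfigurationManager) so the Classic ASP side can fetch it once and cache it in an application variable. The handler and connection-string names are illustrative, the loopback check is a minimal stand-in for real access control, and serving connection strings over HTTP, even locally, deserves scrutiny.

        using System.Configuration;
        using System.Web;

        // Hypothetical localhost-only bridge for Classic ASP pages.
        public class ConnectionStringBridge : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // Only answer requests from the same machine.
                string addr = context.Request.UserHostAddress;
                if (addr != "127.0.0.1" && addr != "::1")
                {
                    context.Response.StatusCode = 403;
                    return;
                }

                // The connectionStrings section is decrypted for us if it was protected with DPAPI.
                ConnectionStringSettings cs = ConfigurationManager.ConnectionStrings["MainDb"]; // name is an assumption
                context.Response.ContentType = "text/plain";
                context.Response.Write(cs != null ? cs.ConnectionString : string.Empty);
            }
        }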

    Read the article

  • Good Email Notification Sending Service

    - by Philibert Perusse
    I need to send a few but important email notifications to individual users. For instance, when they register their software I send them a confirmation email. Right now I am using 'sendmail' from my Perl CGI script to do the job. Most of my automated email is lost or marked as junk. Unfortunately, I am using shared hosting services and do not have very good control over the SPF and SenderID DNS records. Even worse, some other user of that shared server has been infected with some kind of SPAM-BOT and the IP is now blacklisted until further notice! Anyway, I just don't want to deal with this kind of headache. I am looking for an online service that I can subscribe to and pay something like 0.10$ per email I send, with no monthly fees. I just need an API so I can send the email from the PHP or Perl code I will have to write. I have been looking around at all those "Email Sending Services" and they are all wrapped around creating campaigns and managing lists for bulk email marketing distribution and newsletters. But remember, I want to send an email notification to a "single" recipient. So far I have looked at MailChimp, SocketLabs, iContact, ConstantContact, StreamSend and so many others to no avail. I have seen one comment at Hacker News saying that MailChimp has an API for transactional e-mails (i.e. ad-hoc ones, to welcome a user for example), so you're not just restricted to using them for bulk emails. But I cannot find this in the API documentation supplied; maybe it was removed. Any suggestions out there? Here is a summary of my requirements:

    - Allows ad hoc sending of email to a single recipient. Throughput may well be throttled, I don't care; I am sending like 2-5 emails a day.
    - API available in PHP or Perl to connect to that web service.
    - Ideally I can send HTML-formatted emails, otherwise I will live with text only.
    - Solution not too expensive; between 0.01$ and 0.25$ per email would be acceptable.
    - No recurring monthly fees.

    Read the article

  • Mysql password hashing method old vs new

    - by The Disintegrator
    I'm trying to connect to a MySQL server at Dreamhost from a PHP script located on a server at Slicehost (two different hosting companies). I need to do this so I can transfer new data at Slicehost to Dreamhost. Using a dump is not an option because the table structures are different and I only need to transfer a small subset of data (100-200 daily records). The problem is that I'm using the new MySQL password hashing method at Slicehost, and Dreamhost uses the old one, so I get:

        $link = mysql_connect($mysqlHost, $mysqlUser, $mysqlPass, FALSE);
        Warning: mysql_connect() [function.mysql-connect]: OK packet 6 bytes shorter than expected
        Warning: mysql_connect() [function.mysql-connect]: mysqlnd cannot connect to MySQL 4.1+ using old authentication
        Warning: mysql_query() [function.mysql-query]: Access denied for user 'nodari'@'localhost' (using password: NO)

    Facts:

    - I need to continue using the new method at Slicehost and I can't use an older PHP version/library.
    - The database is too big to transfer it every day with a dump.
    - Even if I did this, the tables have different structures.
    - I need to copy only a small subset of it, on a daily basis (only the changes of the day, 100-200 records).
    - Since the tables are so different, I need to use PHP as a bridge to normalize the data.
    - Already googled it.
    - Already talked to both support staffs.

    The more obvious option to me would be to start using the new MySQL password hashing method at Dreamhost, but they will not change it and I'm not root, so I can't do this myself. Any wild idea? By VolkerK's suggestion:

        mysql> SET SESSION old_passwords=0;
        Query OK, 0 rows affected (0.01 sec)

        mysql> SELECT @@global.old_passwords,@@session.old_passwords, Length(PASSWORD('abc'));
        +------------------------+-------------------------+-------------------------+
        | @@global.old_passwords | @@session.old_passwords | Length(PASSWORD('abc')) |
        +------------------------+-------------------------+-------------------------+
        |                      1 |                       0 |                      41 |
        +------------------------+-------------------------+-------------------------+
        1 row in set (0.00 sec)

    The obvious thing now would be to run SET GLOBAL old_passwords=0; but I need the SUPER privilege to do that and they won't give it to me. If I run the query

        SET PASSWORD FOR 'nodari'@'HOSTNAME' = PASSWORD('new password');

    I get the error

        ERROR 1044 (42000): Access denied for user 'nodari'@'67.205.0.0/255.255.192.0' to database 'mysql'

    I'm not root... The guy at Dreamhost support insists that the problem is at my end. But he said he will run any query I tell him, since it's a private server. So I need to tell this guy EXACTLY what to run. So, telling him to run

        SET SESSION old_passwords=0;
        SET GLOBAL old_passwords=0;
        SET PASSWORD FOR 'nodari'@'HOSTNAME' = PASSWORD('new password');
        grant all privileges on *.* to nodari@HOSTNAME identified by 'new password';

    would be a good start?

    Read the article
