Search Results

Search found 247 results on 10 pages for 'bots'.

Page 2/10 | < Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >

  • Htaccess/robots.txt to allow search bots to explore main domain but not directory on other domain

    - by gX
    OK, I understand the title didn't make much sense, so here I've tried to explain it in detail. I'm using a hosting provider that gives me space for my domain and lets me "add on" other domains to it. So let's say I have a domain A, and I add on a domain B. Basically my host gives me a public_html where I can put the stuff that shows when someone visits website A. But when I add the domain B, it lets me put the content of B INSIDE of that public_html, so that website B.com can also be visited by going to A.com/siteB... That's all good, except that Google has started indexing B.com as well as A.com/siteB. I'm OK with it indexing B.com, but I somehow want to prevent it from indexing A.com/siteB, so that when people search for B it doesn't end up showing A.com/siteB. Any ideas? Let me know if the question is still unclear.

    Read the article

  • PHP/MySQL - an array filter for bots

    - by Mike
    Hello, I'm making a hit counter. I have a database and I store the IP and $_SERVER['HTTP_USER_AGENT'] of each visitor. Now I need to add a filter so I can discard the hits made by bots. I found out that many bots usually have some common words in $_SERVER['HTTP_USER_AGENT'], so I'd like to make an array of words that would keep those bots from showing up in the results. Here is what I have now: while($row = mysql_fetch_array($yesterday, MYSQL_ASSOC)) { <-- here I need code that runs through an array and checks whether the user agent contains any of the keywords, and if it doesn't ... just count++; -- } Also, if you know any other way of detecting and removing the bots from the results, I'd be very thankful. Cheers
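
    A minimal sketch of the loop described above, keeping the question's legacy mysql_* call and assuming the result set has a user_agent column (a hypothetical column name); the keyword list is illustrative:

        // keywords commonly found in bot user agents
        $bot_keywords = array('bot', 'crawl', 'spider', 'slurp', 'curl', 'wget');

        $count = 0;
        while ($row = mysql_fetch_array($yesterday, MYSQL_ASSOC)) {
            $agent  = $row['user_agent'];   // assumed column name
            $is_bot = false;
            foreach ($bot_keywords as $keyword) {
                if (stripos($agent, $keyword) !== false) {
                    $is_bot = true;         // keyword found: treat this hit as a bot
                    break;
                }
            }
            if (!$is_bot) {
                $count++;                   // only count human-looking hits
            }
        }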

    Read the article

  • Should I block bots from my site and why?

    - by Frank E
    My logs are full of bot visitors, often from Eastern Europe and China. The bots identify themselves as Ahrefs, Seznam, LSSRocketCrawler, Yandex, Sogou and so on. Should I block these bots from my site, and why? Which ones have a legitimate purpose in increasing traffic to my site? Many of them are SEO crawlers. I have to say I see less traffic, if anything, since the bots arrived in large numbers. It would not be too hard to block them, since they all admit in their User-Agent that they are bots.
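
    Since these crawlers do announce themselves in the User-Agent header, a minimal PHP sketch of the blocking idea, assuming the pages are served through PHP (the list of names is illustrative, not a recommendation of which bots to block):

        // illustrative list of crawler names to refuse
        $blocked = array('AhrefsBot', 'SeznamBot', 'LSSRocketCrawler', 'Sogou');

        $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        foreach ($blocked as $name) {
            if (stripos($agent, $name) !== false) {
                header('HTTP/1.1 403 Forbidden');
                exit;   // stop before doing any real work for this request
            }
        }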

    Read the article

  • Video Game Bots?

    - by cam
    Something I've always wondered, especially since it inspired me to start programming when I was a kid, is how video game bots work. I'm sure there are a lot of different methods, but what about automation for MMORPGs? Or even FPS-type bots?

    Read the article

  • Tell bots apart from human visitors for stats?

    - by Pekka
    I am looking to roll my own simple web stats script. The only major obstacle on the road, as far as I can see, is telling human visitors apart from bots. I would like a solution for that which I don't need to maintain on a regular basis (i.e. I don't want to update text files with bot-related user agents). Is there any open service that does that, like Akismet does for spam? Or is there a PHP project that is dedicated to recognizing spiders and bots and provides frequent updates? To clarify: I'm not looking to block bots. I do not need 100% watertight results. I just want to exclude as many as I can from my stats. I know that parsing the User-Agent is an option, but maintaining the patterns to parse for is a lot of work. My question is whether there is any project or service that does that already. Bounty: I thought I'd push this as a reference question on the topic. The best / most original / most technically viable contribution will receive the bounty amount.
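
    For reference, the User-Agent heuristic the question is trying to avoid maintaining looks roughly like this minimal sketch; the pattern is illustrative and is exactly the kind of list that needs ongoing updates:

        // very rough heuristic: most well-behaved crawlers identify themselves
        $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

        $looks_like_bot =
            $agent === ''   // an empty user agent is suspicious in itself
            || preg_match('/bot|crawl|spider|slurp|archiver|curl|wget/i', $agent);

        if (!$looks_like_bot) {
            // record the hit in the stats table here
        }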

    Read the article

  • How do you stop scripters from slamming your website hundreds of times a second?

    - by davebug
    [update] I've accepted an answer, as lc deserves the bounty due to the well thought-out answer, but sadly, I believe we're stuck with our original worst case scenario: CAPTCHA everyone on purchase attempts of the crap. Short explanation: caching / web farms make it impossible for us to actually track hits, and any workaround (sending a non-cached web-beacon, writing to a unified table, etc.) slows the site down worse than the bots would. There is likely some pricey bit of hardware from Cisco or the like that can help at a high level, but it's hard to justify the cost if CAPTCHAing everyone is an alternative. I'll attempt to do a more full explanation in here later, as well as cleaning this up for future searchers (though others are welcome to try, as it's community wiki). I've added bounty to this question and attempted to explain why the current answers don't fit our needs. First, though, thanks to all of you who have thought about this, it's amazing to have this collective intelligence to help work through seemingly impossible problems. I'll be a little more clear than I was before: This is about the bag o' crap sales on woot.com. I'm the president of Woot Workshop, the subsidiary of Woot that does the design, writes the product descriptions, podcasts, blog posts, and moderates the forums. I work in the css/html world and am only barely familiar with the rest of the developer world. I work closely with the developers and have talked through all of the answers here (and many other ideas we've had). Usability of the site is a massive part of my job, and making the site exciting and fun is most of the rest of it. That's where the three goals below derive. CAPTCHA harms usability, and bots steal the fun and excitement out of our crap sales. To set up the scenario a little more, bots are slamming our front page tens of times a second screenscraping (and/or scanning our rss) for the Random Crap sale. The moment they see that, it triggers a second stage of the program that logs in, clicks I want One, fills out the form, and buys the crap. In current (2/6/2009) order of votes: lc: On stackoverflow and other sites that use this method, they're almost always dealing with authenticated (logged in) users, because the task being attempted requires that. On Woot, anonymous (non-logged) users can view our home page. In other words, the slamming bots can be non-authenticated (and essentially non-trackable except by IP address). So we're back to scanning for IPs, which a) is fairly useless in this age of cloud networking and spambot zombies and b) catches too many innocents given the number of businesses that come from one IP address (not to mention the issues with non-static IP ISPs and potential performance hits to trying to track this). Oh, and having people call us would be the worst possible scenario. Can we have them call you? BradC Ned Batchelder's methods look pretty cool, but they're pretty firmly designed to defeat bots built for a network of sites. Our problem is bots are built specifically to defeat our site. Some of these methods could likely work for a short time until the scripters evolved their bots to ignore the honeypot, screenscrape for nearby label names instead of form ids, and use a javascript-capable browser control. lc again "Unless, of course, the hype is part of you

    Read the article

  • Preventing spam bots on site?

    - by Mike
    We're having an issue on one of our fairly large websites with spam bots. It appears the bots are creating user accounts and then posting journal entries which lead to various spam links. It appears they are bypassing our CAPTCHA somehow -- either it's been cracked or they're using another method to create accounts. We're looking to add email activation for accounts, but we're about a week away from implementing such changes (due to busy schedules). However, I don't feel like this will be enough if they're using an SQL injection or cross-site scripting exploit somewhere on the site. So my question to you: if they are using some kind of exploit, how can I find it? I'm securing statements where I can but, again, it's a fairly large site and it'd take me a while to actively clean up SQL statements to prevent injection. Can you recommend anything to help our situation?
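
    On the SQL side, the usual fix is to stop concatenating user input into query strings and use bound parameters instead; a minimal sketch with PDO, assuming a $pdo connection and a journal_entries table (both hypothetical names):

        // parameterized insert: user input never becomes part of the SQL text
        $stmt = $pdo->prepare(
            'INSERT INTO journal_entries (user_id, title, body) VALUES (:user_id, :title, :body)'
        );
        $stmt->execute(array(
            ':user_id' => $userId,
            ':title'   => $title,
            ':body'    => $body,
        ));

        // and when displaying user-supplied content, escape it to blunt XSS
        echo htmlspecialchars($body, ENT_QUOTES, 'UTF-8');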

    Read the article

  • Bots see something different!?

    - by ilhan
    I've submitted my web site to different apps like Yahoo Webmasters and similar places. They see my web site's main page's title as "Index of /". However, I see it normally, as "My Title". Server: it reports "Apashi" (wtf!?), but it is Apache in reality. PHP 5.2.5, FreeBSD, cPanel version 11.24.4-RELEASE, kernel version 6.3-PRERELEASE, main page: index.html. I guess it is because of index.html, but why?
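
    One way to check what a crawler is actually served is to request the page yourself with a crawler-like User-Agent and look at the returned title; a minimal sketch using PHP's cURL extension (the URL is a placeholder):

        $ch = curl_init('http://www.example.com/');   // placeholder URL
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_USERAGENT,
            'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)');

        $html = curl_exec($ch);
        curl_close($ch);

        // pull out the <title> that the "bot" request was served
        if (preg_match('#<title>(.*?)</title>#is', $html, $m)) {
            echo trim($m[1]) . "\n";
        }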

    Read the article

  • Search engine bots and meta refresh for disabled JavaScript

    - by Jonathan
    Hi! I have a website that must have JavaScript turned on in order to work. There is a <noscript> tag that contains a meta refresh to redirect the user to a page that alerts them that JavaScript is disabled... I am wondering, is this a bad thing for search engine crawlers? I send an e-mail to myself when someone doesn't have JS so I can analyze whether it's necessary to rebuild the website for those people, but the site is 100% JS-driven and the only visitors without JS are search engine crawlers... I guess Google, Yahoo, etc. don't take the meta refresh seriously when it's inside a <noscript>? Should I do something to check whether they are bots and skip the meta redirect for them? Thanks, Joe
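
    If skipping the redirect for crawlers is acceptable, one rough approach is to emit the <noscript> meta refresh only for user agents that don't look like bots; a minimal sketch (the pattern and the /no-javascript.html URL are illustrative):

        $agent  = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $is_bot = (bool) preg_match('/googlebot|bingbot|slurp|baiduspider|yandex/i', $agent);

        if (!$is_bot) {
            // only humans browsing without JavaScript get bounced to the warning page
            echo '<noscript>'
               . '<meta http-equiv="refresh" content="0; url=/no-javascript.html">'
               . '</noscript>';
        }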

    Read the article

  • How Search Engine Bots Crawl Forums?

    - by Waleed Eissa
    If I have a forum site with a large number of threads, will the search engine bot crawl the whole site every time? Say I have over 1,000,000 threads on my site -- will they get crawled every time the bot crawls my site, or how does it work? I want my website to be indexed, but I don't want the bot to kill my website! In other words, I don't want the bot to keep crawling the old threads again and again every time it crawls my website. Also, what about the pages crawled before? Will the bot request them every time it crawls my website to make sure they are still on the site? I'm asking because I only link to the latest threads, i.e. there's a page that contains a list of all the latest threads, but I don't link to the older threads; they have to be explicitly requested by URL, e.g. http://www.mysite.com/showthread.aspx?threadid=7. Will this work to stop the bot from bringing my site down and consuming all my bandwidth? P.S. The site is still under development, but I want to know in order to design the site so that search engine bots don't bring it down. Thanks

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the amount of tools that hosting companies provide to track and quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every web site known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of ease of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for obvious patterns of abuse and ban the IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web and DB servers) and proactively deal with these sources of mayhem?
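
    As a stopgap for the manual IP-banning routine described above, a small PHP sketch that scans an access log for requests containing common SQL-injection fragments and tallies the offending IPs (the log path and the pattern list are illustrative):

        $logFile  = '/var/log/apache2/access.log';   // placeholder path
        $patterns = '/union\s+select|information_schema|sleep\(|benchmark\(/i';

        $offenders = array();
        foreach (file($logFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
            // the client IP is the first field of a combined-format log line
            if (preg_match($patterns, $line) && preg_match('/^(\S+)/', $line, $m)) {
                $ip = $m[1];
                $offenders[$ip] = isset($offenders[$ip]) ? $offenders[$ip] + 1 : 1;
            }
        }

        arsort($offenders);                            // worst offenders first
        print_r(array_slice($offenders, 0, 20, true)); // top 20 candidate IPs to ban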

    Read the article

  • How to block this URL pattern in Varnish VCL?

    - by iTech
    My website is getting badly hit by spambots and scrapers. I am using Cloudflare, but the problem still remains. The problem is spambots accessing non-existent URLs, which puts a lot of load on my Drupal backend: the request goes all the way through and bootstraps the database just to serve a 404 error document. I can't simply dish out non-Drupal 404s for all page-not-found errors, as I need Drupal to catch them. Since Varnish is in front, it can check whether the bot is behaving and asking for a valid URL; if not, it serves a 404 or 403 itself. These bots are causing errors using this pattern: http://www.megaleecher.net/http:/www.megaleecher.net/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_S/Using_iPhone_As_USB_Mass_Storage Now, please suggest a regex Varnish VCL directive which catches this URL pattern and serves a 404 error from Varnish, preventing it from reaching Apache/Drupal.

    Read the article

  • In addition to Google's First Click Free, should you whitelist search engine bots past a paywall?

    - by tobek
    Our site has subscription-only pages -- non-subscribed visitors see a snippet preview. As per Google's FCF requirements, for your first 5 hits to subscriber-only pages with .google. as the referrer, you see the full page. In addition to this, should we whitelist search engine bots so that they can index the full content? I assume this is not required for Google, which can use FCF to index our content, but what about other search engines? Is this considered cloaking? My gut says that whitelisting bots past the paywall is bad practice, but I wanted to confirm -- any evidence or references would be amazing.
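
    If bots are whitelisted past the paywall, it is worth verifying that a request claiming to be a search engine crawler really is one, since anyone can spoof a User-Agent. A minimal sketch of the usual reverse-then-forward DNS check for Googlebot (the same idea extends to other engines' documented hostnames):

        function looks_like_googlebot($ip) {
            // the reverse lookup must resolve into google.com or googlebot.com...
            $host = gethostbyaddr($ip);
            if (!preg_match('/\.(googlebot|google)\.com$/i', $host)) {
                return false;
            }
            // ...and the forward lookup of that host must map back to the same IP
            return gethostbyname($host) === $ip;
        }

        if (looks_like_googlebot($_SERVER['REMOTE_ADDR'])) {
            // safe to serve the full article to this crawler
        }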

    Read the article

  • /phpTest/zologize/axa.php? Another botnet?

    - by M132
    Starring at the log made me think, what is /phpTest/zologize/axa.php and why are bots looking for it? Previously, I had lots of /HNAP1/ requests. Requesting /HNAP1/ from IPs from log revealed, that all of them were sent by Linksys routers. 3 months later, these requests turned out to be generated by a router worm called TheMoon. But requesting /phpTest/zologize/axa.php from these servers returns a 404 error. How these servers got infected, and how can I protect mine from this? 124.11.224.69 - - [02/Feb/2014:00:37:16 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 140.113.238.121 - - [21/Feb/2014:01:24:32 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 77.121.132.79 - - [22/Feb/2014:00:03:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 142.4.201.210 - - [24/Feb/2014:21:54:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 212.83.168.39 - - [24/Feb/2014:23:16:00 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 168 "-" "-" 87.117.229.210 - - [26/Feb/2014:06:34:58 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 78.100.82.99 - - [26/Feb/2014:08:25:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 198.50.205.219 - - [26/Feb/2014:09:59:11 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 210.60.142.107 - - [27/Feb/2014:00:12:12 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 101.109.4.73 - - [27/Feb/2014:08:50:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 61.91.128.158 - - [27/Feb/2014:08:59:15 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 201.188.41.175 - - [27/Feb/2014:11:25:42 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 220.133.137.2 - - [27/Feb/2014:12:12:46 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 203.156.104.88 - - [28/Feb/2014:18:11:49 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 61.19.52.58 - - [28/Feb/2014:22:02:56 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 84.2.92.40 - - [28/Feb/2014:23:04:17 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 404 162 "-" "-" 58.64.205.11 - - [01/Mar/2014:06:08:33 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 113.61.200.151 - - [01/Mar/2014:18:25:25 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 178.33.219.12 - - [03/Mar/2014:14:41:48 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 74.63.220.132 - - [04/Mar/2014:01:16:44 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 187.141.230.106 - - [04/Mar/2014:15:39:26 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 21 "-" "-" 103.22.181.146 - - [09/May/2014:17:16:56 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 502 166 "-" "-" 176.31.200.14 - - [10/May/2014:19:52:24 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-" 124.120.92.70 - - [12/May/2014:16:19:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 68 "-" "-" 219.85.198.142 - - [15/May/2014:19:21:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 80.84.53.226 - - [23/May/2014:08:58:25 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 87.213.11.165 - - [25/May/2014:06:20:27 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 122.116.220.106 - - [25/May/2014:07:10:21 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 58.8.128.30 - - [29/May/2014:02:43:49 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 142.4.197.135 - - [29/May/2014:11:36:45 +0200] "GET 
/phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 178.32.243.65 - - [30/May/2014:01:59:53 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 58.8.164.221 - - [30/May/2014:14:04:16 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 140.127.182.15 - - [01/Jun/2014:14:45:40 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 218.166.43.21 - - [01/Jun/2014:16:07:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 178.32.188.140 - - [01/Jun/2014:19:11:46 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 94.23.211.173 - - [05/Jun/2014:00:52:52 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 120.117.105.201 - - [05/Jun/2014:04:39:39 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 187.172.27.146 - - [05/Jun/2014:10:20:22 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-" 203.195.219.91 - - [05/Jun/2014:10:53:42 +0200] "GET /phpTest/zologize/axa.php HTTP/1.1" 200 37 "-" "-"

    Read the article

  • IIS Request Filtering Rule for User Agent

    - by alexp
    I'm trying to block requests from a certain bot. I've added a request filtering rule, but I know it is still hitting the site because it shows up in Google Analytics. Here is the filtering rule I added: <security> <requestFiltering> <filteringRules> <filteringRule name="Block GomezAgent" scanUrl="false" scanQueryString="false"> <scanHeaders> <add requestHeader="User-Agent" /> </scanHeaders> <denyStrings> <add string="GomezAgent+3.0" /> </denyStrings> </filteringRule> </filteringRules> </requestFiltering> </security> This is an example of the user agent I'm trying to block. Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:13.0;+GomezAgent+3.0)+Gecko/20100101+Firefox/13.0.1 In some ways it seems to work. If I use Chrome to spoof my user agent, I get a 404, as expected. But the bot traffic is still showing up in my analytics. What am I missing?

    Read the article

  • Is there a list of known web crawlers?

    - by J. Pablo Fernández
    I'm trying to get accurate download numbers for some files on a web server. I look at the user agents; some are clearly bots or web crawlers, but for many I'm not sure -- they may or may not be web crawlers, and they account for many downloads, so it's important for me to know. Is there a list somewhere of known web crawlers with some documentation like user agent, IPs, behavior, etc.? I'm not interested in the official ones, like Google's, Yahoo's, or Microsoft's. Those are generally well behaved and self-identified.

    Read the article

  • App Engine apps vs. Googlebot web crawler

    - by sandeep koduri
    I built an App Engine web app, cricket.hover.in. The web app has about 15k URLs linked in it, but even a long time after launch, no pages are indexed on Google. Any link placed on my root site hover.in gets indexed within minutes, and I placed the same link on the home page of the root site long ago, but it's of no use. Can anyone analyse whether there is an issue with cricket.hover.in, or whether bots have any issues with Google App Engine? I actually tested the URL using the Labs feature of Google Webmaster Tools; there the response is fine and the HTML is clean. But when I test the same site (cricket.hover.in) at the following URLs, they report failures: www.dnsqueries.com/en/googlebot_simulator.php and www.smart-it-consulting.com/internet/google/googlebot-spoofer/ . If I test some of my PHP or WordPress links at the same URLs, the results are fine. Please help me with this.

    Read the article

  • What dangers await if I block non-standard, non-major-USA search engine bots from my USA-only website?

    - by Ryan
    I noticed tons of bandwidth being used by non-USA search engine bots, so I began blocking them in an effort to save bandwidth and CPU cycles for actual users and the search engines they come from (Google, Bing, Yahoo, Ask, etc.). Other than potentially losing some international traffic (which isn't really important to us since all of our content is very USA-centric), what additional dangers should I be concerned about? I'm using a modified version of Jeff Starr's User Agent Blocklist.

    Read the article

  • Rails form protection questions, hidden field

    - by user284194
    I have a live Rails website and I want to have a form with a lot of fields on it. I have set up validations and allowed formatting for every field. I've tested it quite a bit and it seems to catch anything I throw at it. I think it's almost ready to go live, but I want to quadruple-check whether there's anything else I should do to protect it. My site has a low volume of visitors, but I want it to be as safe as possible. I'd like to avoid using a CAPTCHA if I can. I've read that you can use a hidden field to protect forms against bots. Do people recommend this instead of using a CAPTCHA, or even using it together with a CAPTCHA? My form is really standard: <% form_for(@entry) do |f| %> ... <%= f.submit 'Create' %> <% end %> Any suggestions or code samples would be greatly appreciated.

    Read the article

  • What makes it hard to protect from hacks/bots in BF3 and Quake Live?

    - by Jakub P.
    After playing these games, asking other players/admins, and reading online, I am led to believe that Quake Live and Battlefield 3 are frequented by bots, and there are plenty of hacks of various kinds. I'm confused about how this is possible, or even easy, seeing how many players have access to these kinds of "tools" (sic). Isn't it possible for the game authors to digitally sign the game executables so that when they run, the server can ensure only the allowed client is sending commands, thus preventing any kind of abuse? I.e. every player command would be signed by a private key, or symmetrically encrypted (not sure which would make more sense). I understand that players can look at the running executable's behavior (memory etc.), but if games are apparently so easy to hack, shouldn't most apps be hacked as well (e.g. Skype, all DRM running on Windows, etc.)?

    Read the article

  • Unknown Apache2 + PHP5 FastCGI 500 error... caused by search engine bots?

    - by rdjurovich
    My Ubuntu server is configured with Apache 2.2.8 and PHP 5.2.4-2ubuntu5.18 in FastCGI mode. Everything works well, except I am seeing 500 errors that only seem to come from bots accessing the server.. for example (access.log): x.125.71.104 - - [16/Nov/2011:10:27:39 +1100] "GET / HTTP/1.1" 500 41377 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" x.40.103.239 - - [16/Nov/2011:11:05:56 +1100] "GET / HTTP/1.0" 500 14717 "-" "Mozilla/5.0 (compatible; mon.itor.us - free monitoring service; http://mon.itor.us)" x.249.67.114 - - [14/Nov/2011:20:57:17 +1100] "GET / HTTP/1.1" 500 101 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" x.55.39.85 - - [14/Nov/2011:19:31:06 +1100] "GET / HTTP/1.1" 500 7032 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)._" It is my understanding that a 500 error will be thrown when the PHP process fails to respond to Apache, which could be caused by a fatal PHP error or if PHP runs out of processes.. so my assumption is that either the bots are hitting the server too hard, killing the PHP processes, or something in the request header from bots is causing a fatal error in my PHP script? If anyone can offer advice on this it would be greatly appreciated! Ryan
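
    One way to narrow this down is to log any fatal PHP error together with the requesting user agent, so you can see whether the bot requests really are triggering fatals; a minimal sketch using a shutdown function, written to run on PHP 5.2 (the log path is a placeholder):

        // register this early, e.g. at the top of the front controller
        function log_fatal_with_user_agent() {
            $err = error_get_last();
            if ($err && in_array($err['type'], array(E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR))) {
                $agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-';
                $uri   = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : '-';
                error_log(sprintf("[fatal] %s in %s:%d UA=%s URI=%s\n",
                    $err['message'], $err['file'], $err['line'], $agent, $uri),
                    3, '/tmp/php-fatals.log');      // placeholder log path
            }
        }
        register_shutdown_function('log_fatal_with_user_agent');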

    Read the article

  • How does XMPP work with Perl?

    - by TheGNUGuy
    Hey everybody, I am trying to make my own Jabber bot but I have run into a little trouble. I have gotten my bot to respond to messages; however, if I try to change the bot's presence, then it seems as though all of the messages you send to the bot get delayed. What I mean is: when I run the script, I change the presence so I can see that it is online. Then, when I send it a message, it takes three messages before the callback subroutine I have set up for messages gets called. After the third message is sent and the chat subroutine is called, it still processes the first message I sent. This really doesn't pose TOO much of a problem, except that I have it set up to log out when I send the message "logout", and it has to be followed by two more messages in order to log out. I am not sure what I have to do to fix this, but I think it has something to do with IQ packets, because I have an IQ callback set as well and it gets called two times after setting the presence. Here is my source code: http://pastebin.com/MgKMhTML Thanks for your help!

    Read the article

  • How do I correctly shutdown a Bot::BasicBot bot (based on POE::Component::IRC)?

    - by rarbox
    This is a sample script. When I hit Ctrl+C, the bot quits IRC but it reconnects back after some time. How do I shut down the bot correctly? #!/usr/bin/perl package main; my $bot = Perlbot->new (server => 'irc.dal.net'); $SIG{'INT'} = 'Handler'; $SIG{'TERM'} = 'Handler'; sub Handler { print "\nShutting down bot...\n"; $bot->shutdown('Killed.'); }; $bot->run; package Perlbot; use base qw(Bot::BasicBot); sub connected { my $self = shift; $self->join('#codetestchan'); }

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10  | Next Page >