Search Results

Search found 5021 results on 201 pages for 'limit'.

Page 153/201

  • Apache, logerror and logrotate: what is the best method?

    - by OlivierDofus
    Here's a vhost example of my sites: <VirtualHost *:80> DocumentRoot /datas/web/woog ServerName woog.com ServerAlias www.woog.com ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400" CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined DirectoryIndex index.php index.htm <Location /> Allow from All </Location> <Directory /*> Options FollowSymLinks AllowOverride Limit AuthConfig </Directory> </VirtualHost> I've got 12 sites running now. This gives something like: [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400 /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/error_log 86400 [snap (as many error_log as virtual hosts)] /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400 /httpd-2.2.8/bin/rotatelogs /logs/[hidden siteweb]/access_log 86400 [snap (as many access_log as virtual hosts)] grep rotate [Shake]:/sources/software/mod_log_rotate# !!! I've been looking everywhere but I've only found mod_log_rotate. The "little" problem is that the author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them." So my question is: what would be the best way to handle multiple logs? If I just write classic log files and use the system's "logrotate" program, wouldn't that be a good solution? How would/do you deal with that? Thank you!
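
    For reference, a rough sketch of what the "classic log file plus logrotate" approach could look like for the vhost above, assuming the vhost is switched back to plain ErrorLog/CustomLog file paths instead of the rotatelogs pipes; the apachectl path mirrors the rotatelogs path from the question and is an assumption, not a tested setup:

        # Sketch of /etc/logrotate.d/woog -- rotate both per-site logs daily,
        # then tell Apache to reopen its log files (paths are assumptions)
        /logs/woog/error_log /logs/woog/access_log {
            daily
            rotate 14
            compress
            delaycompress
            missingok
            notifempty
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1 || true
            endscript
        }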

    Read the article

  • Random “Lost connection to MySQL server at 'reading initial communication packet', system error: 0”

    - by user1606545
    Sometimes I get this error from the MySQL server: Lost connection to MySQL server at 'reading initial communication packet', system error: 0 I cannot find the cause, since most of the time it works, but every week for some hours I get this error. I googled, but there seem to be only users who have this error permanently; in my case, it only occurs sometimes. I checked hosts.allow and hosts.deny, but the host is allowed and not denied. Sometimes I also get the error: File './database/table.MYD' not found (Errcode: 24) It occurs very rarely, but when it does it lasts for some hours once a week, sometimes on multiple days, and then the problem suddenly disappears again. I have checked the open files limit. It's 2048 and should be absolutely enough. I nevertheless tried to increase the number of open files, but it had no effect. I thought perhaps the process does not close some tables, but that seems impossible, because after a while everything is OK again and the process opens at most 100 tables at once. I also checked the MySQL runtime environment, and there were 930 open files. I cannot explain that. After a while it's 129. I am running a MySQL server on a SUSE Linux machine. I connect to the MySQL server from another host using the command-line tool "mysql" and the MySQL C connector. The MySQL server is version 5.0.67.
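
    Since Errcode 24 means "too many open files", one thing worth comparing (a diagnostic sketch; credentials are placeholders) is the limit the running mysqld actually got versus the 2048 that was checked:

        # Ask the running server what it thinks its limit is, and how many files are open right now
        mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit'; SHOW GLOBAL STATUS LIKE 'Open_files';"

        # On newer kernels, the limit of the mysqld process itself (may differ from your shell's ulimit -n)
        grep 'open files' /proc/$(pidof mysqld)/limits

        # If it is too low, raise it in my.cnf under [mysqld] and restart, e.g.:
        #   open_files_limit = 8192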

    Read the article

  • INSERT DELAYED on locked tables blocks PHP processes from continuing

    - by sw0x2A
    Our web servers write some tracking information into a MySQL database (using INSERT DELAYED into a MyISAM table). When a huge SELECT query is executed on this table, or when it is locked for another reason, the webserver processes (with INSERT DELAYED) wait for the database, and in some cases the MaxServer limit is reached in Apache, so it stops serving requests. We use INSERT DELAYED because "The DELAYED option for the INSERT statement is a MySQL extension to standard SQL that is very useful if you have clients that cannot or need not wait for the INSERT to complete. This is a common situation when you use MySQL for logging and you also periodically run SELECT and UPDATE statements that take a long time to complete." (quote from the MySQL documentation). I am wondering why the Apache processes are waiting for the INSERT DELAYED to finish, and what I can do to just send the data and forget about it. (Since this is logging data, I do not care if we lose some entries.) Even when the table is locked, the PHP script should just go on and should not wait for an answer from MySQL. (We do not want to set up master-slave replication for this table, but we are thinking about moving this data to some NoSQL database. For now, though, I would like to know why INSERT DELAYED is not working as expected.)
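
    One way to see whether the delayed-insert handler is actually absorbing the writes, or whether the clients end up queuing on the table lock anyway, is to look at the raw counters while a big SELECT is running (a read-only diagnostic sketch; credentials are placeholders):

        # Delayed-insert threads, queued rows, and any delayed-insert errors
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Delayed%'; SHOW GLOBAL STATUS LIKE 'Not_flushed_delayed_rows';"

        # Which connections are blocked right now, and what they are waiting for
        mysql -u root -p -e "SHOW FULL PROCESSLIST;"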

    Read the article

  • virtual host settings fail on multiple sites

    - by Ricalsin
    Wow. I'm puzzled. On my Ubuntu system I've set up an apache2 server and configured three virtual hosts in the /etc/apache2/sites-available directory, then used a2ensite to symlink them into sites-enabled. The first two work great; a simple URL of localhost.mysitenames.com works great for the first two sites, both finding their DocumentRoot and Directory paths. The third always generates a Bad Request (Invalid Hostname) response. Nothing shows up in the server error.log, as the request never hits it. I've copied/pasted the working vhost files and made the minor changes to ServerName, DocumentRoot and Directory, and the same problem persists. I always "sudo /etc/init.d/apache2 restart" whenever I make a change. I've cleared the browser cache as well. No love. There's not a limit to the number of sites you can host, right? My goal was a localhost development environment with the expectation that I can run any number of websites locally before pushing them to a live server. Any thoughts on how to debug this? Or just a simple solution I am missing?
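
    Two quick checks that usually narrow this down (a debugging sketch; the third site's hostname is a placeholder): whether Apache actually parsed the third vhost, and whether its hostname resolves to the local machine at all:

        # Which vhosts did Apache parse, and which one is the default?
        apache2ctl -S

        # Is the symlink for the third site really present in sites-enabled?
        ls -l /etc/apache2/sites-enabled/

        # Does the third hostname resolve locally? If not, add it to /etc/hosts:
        getent hosts localhost.mythirdsite.com
        echo "127.0.0.1 localhost.mythirdsite.com" | sudo tee -a /etc/hosts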

    Read the article

  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a mysql server, and so I've started with running the MySQLTuner.pl script. I am not a MySQL expert but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low hanging fruit. Total Memory on the system is: 512MB. Yes, I know it's low, but it's what we have for the time being. Here's what the script had to say: General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance MySQL started within last 24 hours - recommendations may be inaccurate Enable the slow query log to troubleshoot bad queries When making adjustments, make tmp_table_size/max_heap_table_size equal Reduce your SELECT DISTINCT queries without LIMIT clauses Increase table_cache gradually to avoid file descriptor limits Your applications are not closing MySQL connections properly Variables to adjust: query_cache_limit (> 1M, or use smaller result sets) tmp_table_size (> 16M) max_heap_table_size (> 16M) table_cache (> 64) innodb_buffer_pool_size (>= 326M) For the variables that it recommends that I adjust, I don't even see most of them in the mysql.cnf file. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] innodb_buffer_pool_size = 220M innodb_flush_log_at_trx_commit = 2 innodb_file_per_table = 1 innodb_thread_concurrency = 32 skip-locking big-tables max_connections = 50 innodb_lock_wait_timeout = 600 slave_transaction_retries = 10 innodb_table_locks = 0 innodb_additional_mem_pool_size = 20M user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking bind-address = localhost key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 4 myisam-recover = BACKUP query_cache_limit = 1M query_cache_size = 16M log_error = /var/log/mysql/error.log expire_logs_days = 10 max_binlog_size = 100M skip-locking innodb_file_per_table = 1 big-tables [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] [isamchk] key_buffer = 16M !includedir /etc/mysql/conf.d/
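
    Most of the variables the tuner lists are dynamic, so as a sketch they can be tried at runtime before being written into my.cnf; the values below are only the tuner's minimums and are not tuned for a 512 MB machine:

        # Try the suggested values at runtime first (requires SUPER privileges);
        # persist them under [mysqld] in my.cnf once they prove harmless on this small box.
        mysql -u root -p -e "SET GLOBAL tmp_table_size = 32*1024*1024;
                             SET GLOBAL max_heap_table_size = 32*1024*1024;
                             SET GLOBAL table_cache = 128;
                             SET GLOBAL query_cache_limit = 2*1024*1024;"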

    Read the article

  • How can I determine the IP addresses allocated by DHCP on a router that I'm connected to?

    - by user234831
    This "router" is not a typical situation. I'm using my phone as a hotspot and can only configure a select number of DHCP options. I can manage the limit on how many devices/clients can use my phone as a hotspot. I have to select from a radio-button list with the options: 2,3,4,5, or 8 I can specify the DHCP starting IP address. In this case, it begins at 192.168.6.106 When I'm connected via WIFI to my phone, an ipconfig /all command shows me that the default gateway is 192.168.6.1 and my IPv4 address is 192.168.1.148. I have the luxury of connecting another device to the phone and that device was assigned 192.168.1.121. I've tried connecting to 192.168.6.1, hoping for some sort of router setup page that I'm used to seeing, but there is no such thing or maybe it's just a matter of incompatable operating systems. In summary, the "router" (phone) has an IP address of 192.168.6.1 and a DHCP server that begins at 192.168.6.106 and allows up to 8 connections. Normally, I would assume a range of 192.168.6.106 - 192.168.6.113, but connected clients are showing otherwise. How can I figure out which IP addresses are set aside by DHCP for clients?

    Read the article

  • after installing more RAM, BIOS/Debian does nothing [closed]

    - by derty
    My private server has 2x1 GB RAM, running 64-bit Debian on an Intel Q6600. It runs 2 virtual machines, each of which receives 512 MB of RAM, which you can imagine is a bit tight for the whole system. Now I got 2x2 GB RAM from a friend. I'm not sure whether they are clocked at the same speed, but I'm sure my power adapter is not at its limit and can handle them. There were 2 RAM sockets left on the mainboard, so I shut down the system, installed the additional 4 GB and watched what happened. After pressing the power button, everything spins up as usual, BUT the screen shows nothing, not even the BIOS output it usually does. Why isn't the system booting? I imagine there is no direct link between Debian booting and the BIOS not showing anything. Or is it GRUB? I mounted the system disk, and I can see it, change folders and write files with "vim"; it does not seem like there is a problem with it.

    Read the article

  • Does Image Block (Firefox addon) save internet bandwidth usage?

    - by dkjain
    Does Image Block save internet bandwidth? I have a data-capped plan from my ISP (5 GB at 2 Mbps and thereafter 256 kbps, per month). I doubt that this add-on, or other similar add-ons, actually save bandwidth. Here is my point of view; please correct me if it is wrong. When a request is sent to the server, the server sends out whatever page it has been asked to serve, with all its text and images etc. So essentially my ISP has made its pipe available for the data to reach me, and thus it would count those bytes against my data plan. When the data arrives, it is all first stored in my browser cache (folder), which means all the data has actually been received by my computer using my ISP's pipe. The browser then fetches that data from the cache and displays it. By hitting the stop button or blocking images via the add-on, I am just choosing not to display the data, which would remain in the cache or eventually be discarded if still on the network pipe after a timeout. The point is that the data request has been completed by the ISP, so the data would be metered, and thus using an add-on such as Image Block or hitting the stop button while a page is loading does not in any way save internet bandwidth. Your comments please. Regards, dk.

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy mysql server MySQLTuner 1.2.0 always recommends to add query_cache_size no matter how I increase the value (I tried up to 512MB). On the other hand it warns that : Increasing the query_cache size over 128M may reduce performance Here are the last results: >> MySQLTuner 1.2.0 - Major Hayden <[email protected]> >> Bug reports, feature requests, and downloads at http://mysqltuner.com/ >> Run with '--help' for additional options and output filtering -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in InnoDB tables: 6G (Tables: 195) [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17) [!!] Total fragmented tables: 51 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B) [--] Reads / Writes: 89% / 11% [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads) [!!] Maximum possible memory usage: 132.2G (139% of installed RAM) [OK] Slow queries: 0% (2K/254M) [OK] Highest usage of available connections: 32% (391/1200) [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads) [OK] Query cache efficiency: 79.9% (181M cached / 226M selects) [!!] Query cache prunes per day: 1033203 [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts) [OK] Temporary tables created on disk: 14% (760K on disk / 5M total) [OK] Thread cache hit rate: 99% (676 created / 5M connections) [OK] Table cache hit rate: 22% (1K open / 8K opened) [OK] Open file limit used: 0% (49/13K) [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks) [OK] InnoDB data size / buffer pool: 6.1G/19.5G -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Increasing the query_cache size over 128M may reduce performance Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 192M) [see warning above] The server has 76GB ram and dual E5-2650. The load is usually below 2. I appreciate your hints to interpret the recommendation and optimize the database configs.
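
    Whether a larger cache would help at all can usually be read from the raw Qcache counters rather than the tuner's one-line summary; a read-only sketch (credentials are placeholders):

        # Prunes vs. inserts, and how much free (possibly fragmented) memory the cache has
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Qcache%'; SHOW GLOBAL VARIABLES LIKE 'query_cache%';"

        # If Qcache_lowmem_prunes keeps growing while Qcache_free_memory stays large,
        # the cache is fragmented rather than too small; defragment it with:
        mysql -u root -p -e "FLUSH QUERY CACHE;"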

    Read the article

  • Many clients on a wireless AP for UDP broadcast packets

    - by distorteddisco
    I asked this question on StackOverflow and was directed over here, so I'd appreciate any advice. I'm deploying a smartphone application as part of a live music performance that depends on receiving UDP broadcast packets from a wireless access point. I'm guessing that between 20 and 50 clients will be connected at any one time. I'm aware that a maximum of 20 clients per access point is advised, but as the UDP broadcast packets are broadcast through the LAN, how would I be able to link multiple APs together? I'm looking for recommendations on a suitable AP for this. The actual data transmission rates are very low - only a few kB/s - as I'm just sending small messages to the smartphone apps, and there will be no WAN internet connection. I tried it with a few connected peers on an ad-hoc wireless connection without any problems, but ran into dropped packet issues on an old WRT54G running DD-WRT, though it's in pretty rough shape. What's the best way to do this? I suppose I could limit concurrent wireless connections to 20 clients... but more would be nice. EDIT: I should also say that it's purely one-way communication; the smartphone application is only receiving broadcast packets, not sending anything.

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation: Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each also using another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using the singleton. Compression only happens in the two cases mentioned above. The user/loader executable can specify only the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behavior. They all have a class and declare an object of a handler which gets the instance of the singleton in libE.so and uses it for further purposes. All these libraries are linked to the two executable binaries. If only one of the two executables runs, then it's fine; but if both executables run one after the other, the file of the first-started executable gets compressed. Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The object of the singleton is a static object and is never deleted. The executable is not able to exit/return and gives: glibc detected * (exe1 or exe2): double free or corruption (!prev): some_addr * Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object. Thanks

    Read the article

  • Losing SQL connections

    - by john pavelka
    SQL Server 2005 Standard; one dedicated SQL Server machine (VM); Windows Server 2003; small databases. About once a week we lose all SQL connections. It seems to fix itself after about 5-10 minutes. System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. We don't have a fully qualified DBA; it's kind of a joint effort here. Can somebody give me some general ideas for troubleshooting the network side and the application side? We already ran a few tuning profiles and ran through the Database Tuning Advisor to apply indexing recommendations. It would sure be nice if there were a way to take a snapshot of what was running on SQL Server when these 100% CPU spikes occurred, but sometimes we're not around. Is it common to throttle CPU for certain processes? Can this be done with Windows Server 2003? For example, if security apps were making the CPU spike to 100%, is there a way to limit their CPU usage? Any advice is appreciated. Thanks.

    Read the article

  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that the free memory is going down and down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters: -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m We use Nagios to monitor stuff like updates to apply, free disk space and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approx half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594 MB of memory free out of the 1.7 GB of memory. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a possibility that there is a memory leak within our application? I read online and folks say that a small Amazon instance is perfect for running Apache and Tomcat. I found a few posts online that showed how to set up Apache and Tomcat to limit the memory usage, and I have already performed those steps. The memory is not being used up as quickly, but it is still decreasing over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly standard on those nodes, but they would not be receiving as much traffic. Just to add, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m when run on my server is as follows: total used free shared buffers cached Mem: 1657 1380 277 0 158 773 -/+ buffers/cache: 447 1209 Swap: 895 0 895 In your opinion, does this look OK? At what stage would the OS give back memory: would it wait until the memory reaches 0%, or is this OS dependent?
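
    For what it's worth, the figure that matters in that free -m output is the "-/+ buffers/cache" line, because the kernel returns page-cache memory to applications as soon as they ask for it; a tiny sketch for pulling that number out:

        # Memory genuinely available to applications (free + buffers + cached),
        # using the two-line free(1) layout shown above
        free -m | awk '/buffers\/cache/ {print $4 " MB available to applications"}'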

    Read the article

  • Gentoo server time issue, can't manually set time, NTPD won't correct, too big of a time difference

    - by kevingreen
    So, my server is living in the future; unfortunately I can't get lottery numbers or stock picks out of it. It thinks this is the time: Thu Nov 7 04:07:18 EST 2013 Not correct. I tried to set the time manually via date in a few ways: # date -s "06 NOV 2013 14:48:00" # date 110614482013 -- same output, same problem. Both output Wed Nov 6 14:48:00 EST 2013, but when I check the date again, it's still set to Nov 7th 0400 or whatever. I checked my system messages, and I see this pop up often: Nov 7 03:54:00 www ntpd[4482]: time correction of -47927 seconds exceeds sanity limit (1000); set clock manually to the correct UTC time. Which makes sense; we're way off the correct time. But I can't seem to fix it manually. So now what? Also, I'm wondering if my hardware clock is set up correctly; hwclock doesn't return any values. Would that be causing issues? This is a virtualized server. I don't have direct access to the hypervisor, but I can talk to the person who does, assuming I can explain myself well enough. Thanks
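
    ntpd deliberately refuses to step a clock that is this far out, so the usual sequence (a sketch; the init-script name and the NTP server are placeholders for whatever /etc/ntp.conf on this box actually uses) is to stop the daemon, step the clock once by hand, write it to the hardware clock, then let ntpd take over again:

        # Stop the daemon, step the clock once, sync the hardware clock, restart
        /etc/init.d/ntpd stop
        ntpdate pool.ntp.org        # or: ntpd -g -q
        hwclock --systohc           # write the corrected time to the hardware clock
        /etc/init.d/ntpd start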

    Read the article

  • What is the best hosting option for Flash web-widget?

    - by par
    Our Flash web-widget has got highly popular. It is downloaded around 100,000 times per day. And that is the problem. Our server bandwidth is too narrow to deliver the widget to the clients fast. The widget is loaded very slow. Probably 20 times slower than before (at peak times). Probably I have choosen not the right hoster for my task - delivering 1 MB Flash widget to 100,000 users per day. What is the best hosting solution in my case? I'm not good at server administration so forgive me if I sound naive. The details are the following. Our hoster options: -Dedicated server, Ubuntu -10 Mbit Connection -monthly bandwidth limit: 2000 GB Widget size is 1 MB. The widget consists of the main SWF and a number of loaded SWF and data files. This is a part of Apache Status report taken right now ---- Server uptime: 1 hour 2 minutes 38 seconds Total accesses: 74865 - Total Traffic: 5.8 GB CPU Usage: u28 s7.78 cu0 cs0 - .952% CPU load 19.9 requests/sec - 1.6 MB/second - 81.1 kB/request 200 requests currently being processed, 0 idle workers WWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWCWWWWWWWCWWWW WWWWWCWWWWWWWWWWWWWWCWWWWWWWWWWWWCWWWWWWWWCWWCWWWWWWWWWWWWWWWWWW WWWWWWWWWWWWWWWWCWWWWWWWWWWWWWWWWWWWWWWWWWWWCWCWWWWWWWWWWWWWWWCW WWWWWWWW........................................................ ----
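
    A back-of-the-envelope check of the figures quoted above already suggests the 10 Mbit port and the monthly cap are the bottleneck rather than the server itself; a quick sketch of the arithmetic:

        # 100,000 downloads/day x 1 MB ~= 100 GB/day ~= 3,000 GB/month (well over the 2,000 GB cap)
        echo $(( 100000 * 1 / 1000 )) "GB per day"
        echo $(( 100000 * 30 / 1000 )) "GB per month"
        # Average rate needed: 100,000 MB x 8 bits / 86,400 s, just over 9 Mbit/s sustained on a 10 Mbit port
        echo "scale=2; 100000 * 8 / 86400" | bc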

    Read the article

  • php-cgi.exe Taking out server, multiple running

    - by Alex
    I have been using Zend Server CE for over a year and have never had a problem. Recently, about a week or two ago, I found my server acting up, even making RDP unreachable. After some looking around I found 20, 25, 30+ php-cgi.exe processes running. Because my IIS7 service starts with Windows, as soon as the server started all these php-cgi.exe processes would start running (even though the limit is 10) and I could not even connect to the machine. After disabling the web server at startup, which stops php-cgi.exe from running, the server runs flawlessly, like it always has. As soon as I run the web server, all these odd issues start. I have a post over at Zend http://forums.zend.com/viewtopic.php?f=44&t=41043&p=95133 where I was told to update my Zend install. After doing so, the issue has not gone away. Even running 1 php-cgi.exe (somehow 2 start anyway), the server begins to misbehave. The first issue I see when running php-cgi.exe is that Windows services, whether stock or managed with FireDaemon, begin to lag, start slowly, crash, etc. If anyone can help me with this I would GREATLY appreciate it. At this time I am forced to look for an alternative to running PHP via CGI, as it simply takes out the whole box. On another note, I run this same version of Zend on a similar server with no issues. Starting to think it's an IIS issue. (UPDATE) Installed the newest version of PHP, separate from Zend; same issue. Server specs: Intel Xeon quad-core with HT, Nehalem-based; 24 GB DDR3 1333; 2x1TB RAID mirror (OS); 2x1TB RAID mirror (other); 4x2TB RAID 5 (storage); Server 2008 R2

    Read the article

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any (or specific) root activity above everything else? For instance, several times per year something goes wrong (usually human error, overstressing apache/mysql) and the system becomes unresponsive under a heavy load like 200 (8-core CPU). I know there are execution-time limits after which PHP scripts are killed, but that's not the answer, because this limit has to be at least 45 minutes long. The problem is that, until I'm able to log in via SSH and restart apache/mysql under this server stress, it nearly takes those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been a long time since the box was restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One option comes to mind: use nice for apache/mysql, but no way, I can't risk limiting those two vital apps 24/7. Or could I? I'm a little bit scared that some other system process would then slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hardware/software resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so that I can maintain it.
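
    A rough sketch of the nice/ionice approach applied only during an emergency rather than 24/7 (process names are the usual Debian ones and may need adjusting):

        # Keep the login path responsive: give sshd a better priority than the workers
        renice -n -5 -p $(pgrep -x sshd)

        # Demote the usual suspects only while the box is melting down
        renice -n 10 -p $(pgrep -x apache2) $(pgrep -x mysqld)
        ionice -c 3 -p $(pgrep -x apache2)    # idle I/O class for apache during the emergency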

    Read the article

  • running a laptop continuously

    - by intuited
    I have an experienced laptop — a Dell Latitude D400, with a Pentium M CPU — that I'd like to run as an always-on server. This model was launched in 2004; I got mine second-hand in about 2007. I've heard that continuous operation is generally not a good idea with consumer hardware, but am lacking in specific knowledge about related problems, and have little idea of how much such usage patterns would reduce the lifespan of the machine. I'm mostly concerned with the unit's core components; parts such as the hard drive which are readily replaceable are, well, readily replaceable. What sorts of things can I do to increase the lifespan of this machine under such circumstances? For example, I'm guessing that it would be wise to limit the CPU frequency or take other steps to keep the internal temperature low. However, I'm not sure where the point of diminishing returns would lie with such an approach — 50°C? 40°C? Would it be useful to suspend the machine periodically, for perhaps an hour each day, or a few hours each week?
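
    On a Pentium M the easiest knob for this is the cpufreq governor; a small sketch using the cpufrequtils tools (package names are the Debian/Ubuntu ones, and the 1200MHz cap is only an illustration):

        # Install the tools, then either pin the CPU to its lowest speed...
        sudo apt-get install cpufrequtils lm-sensors
        sudo cpufreq-set -g powersave

        # ...or keep on-demand scaling but cap the maximum frequency
        sudo cpufreq-set -g ondemand -u 1200MHz

        # Keep an eye on temperatures while it runs
        sensors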

    Read the article

  • Is 40+ Logons on Exchange 2003 per user normal?

    - by cbsch
    Hello! We've had a problem at work where users sometimes randomly can't connect to Exchange. I've found out that it's because they reached the limit of 32 concurrent logons. I increased the maximum allowed connections by adding the key "Maximum Allowed Sessions Per User" under HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem. But I'm not sure whether this is a really good fix. Looking at the logons, some users have as many as 15 logons with the exact same logon time. I know for sure that Outlook 2007 does this, as I was watching the logons while a user connected with Outlook after a restart of the Exchange service. Every user also has an iPhone connected to Exchange; I don't know whether these cause the same thing. Is this normal? Could there be a bug in the software? (Outlook 2007 has nothing special configured, apart from adding the user: pure vanilla installs.) The users are mobile, and since Outlook generates up to 15 connections every time it connects, and I've read (no sources, sorry) that Outlook doesn't time out connections before 2 hours, I might have to set this number really high to prevent it from being a problem.
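
    For the record, that registry change can also be scripted, which makes it easier to roll out or roll back; a sketch (the value 64 is only an example, and the Information Store normally needs a restart to pick the new limit up):

        rem Raise the per-user session limit on the Exchange store (example value: 64)
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" ^
            /v "Maximum Allowed Sessions Per User" /t REG_DWORD /d 64 /f

        rem Restart the Information Store so the new limit takes effect
        net stop MSExchangeIS && net start MSExchangeIS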

    Read the article

  • How best to handle end user notification in the event of system failure incl. email?

    - by BrianLy
    I've been asked to research ways of handling end user notifications when systems such as email are experiencing problems. Perhaps an example will make this a little clearer. We have a number of sites in different countries. Recently email was impacted at one of the sites, but it could have been a complete network outage. Information was provided by phone to local IT managers at the site, but onward communication was slower than some would have liked. It seems like almost everyone at the site has a personal mobile phone which could receive text messages, and perhaps access a remote website with postings on the situation. However, managing and supporting a system to text people on these relatively infrequent occasions would be very costly to do internally. What are other people doing to handle situations like this? Some things I've thought of include: Database of phone numbers to text. Seems costly and not very easy to maintain for an already stretched IT group. Is there an external service that would let you do this? Send a voicemail message to all phones on site. Maintain an external website. This would not work in all situations (network failure), and there is a limit on the amount of info that can be posted externally. A site outage could be sensitive information in some situations. How could the site be password protected? Maybe OpenID/Facebook Connect would work. Use a site like Yammer.com which is publicly accessible but only by people with a company email address. Anyone using this for IT outage notifications? To me it looks like there is no clear answer, and that there are solutions for some subsets of users. To be comprehensive, a number of solutions would need to be combined. Any additional thoughts or recommendations? What worked or didn't work for your organization?

    Read the article

  • Connections to IIS sometimes get stuck in CLOSE_WAIT state

    - by randomhuman
    Our application includes an ASP.Net web service that only needs to deal with a handful of clients. As such, the 10 incoming connection limit of Windows XP Pro is generally not a problem. However, on one particular server, connections are occasionally becoming stuck in the CLOSE_WAIT state. These connections build up over time and eventually new client connections are refused because the maximum number of connections are used up. From my googling it sounds like a failure of the webservice to properly close the connection can cause this problem, but as it works just fine on hundreds of other Windows XP pro machines I can't see it being a bug in our code. It also ran fine on the affected machine until some shenanigans on the part of the end user (I think they set about deleting duplicate files in order to reduce their disk usage, but they did not exactly come clean about it). What could the user have changed to introduce this problem? Is there any way I can force connections that are in CLOSE_WAIT to time out rather than letting them hang around? I have seen suggestions to reduce TcpTimedWaitDelay, but that only relates to the TIME_WAIT state, and changing it did not have any effect.
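
    To confirm which local process actually owns the half-closed sockets (and that it is the web service rather than something else the user installed), a quick look from a cmd prompt helps; 1234 below stands for whichever PID netstat reports:

        rem List connections stuck in CLOSE_WAIT together with the owning PID (last column)
        netstat -ano | findstr CLOSE_WAIT

        rem Map that PID back to a process name
        tasklist /FI "PID eq 1234"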

    Read the article

  • Random HTTP 413 error on apach2/php/joomla site

    - by jfab
    I have a Joomla site, and every once in a while when I submit something via a form, I get an HTTP 413 error: Request Entity Too Large The requested resource /index.php does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit. In the error.log file I get: Invalid Content-Length, referer: [site]/index.php It doesn't seem this has anything to do with the actual size of the request, for the following reasons: a) I tinkered with the configuration of both Apache and PHP. In Apache I tried increasing LimitRequestBody, and in PHP post_max_size, max_input_vars, memory_limit, and even upload_max_filesize. Every value is far beyond what is sent in a typical request that generates an error. b) The error pops up quite randomly, and often just hitting refresh allows me to get through. c) I checked the request in Fiddler to make sure everything is right with the content-length stated in the header, and the content of the request itself. Everything appears to be in order. A curious thing is that when I resent the exact same request via Fiddler, I never got the error. It seems I can only recreate it through a browser. So I'm at my wit's end here. I don't even know where to look for the problem anymore. I don't know if it's Apache or PHP (though I can't find anything in the PHP error logs, so maybe that means Apache is the more likely culprit?), or PHP in general, or my Joomla site in particular (my bets were on Joomla until I recreated the error on a test script with a very basic POST form, though it does pop up much more often on the Joomla site). If anyone can give any advice on where to even begin with this, I'll be very grateful!

    Read the article

  • Killing a process which has run for a long time or is using a lot of memory

    - by Vedant Terkar
    I am not sure whether this question belongs on Stack Overflow or here, but here we go. I am designing an online 'C' compiler, which will compile and run the program if compilation succeeds. Here is the code I am using for that: $str=shell_exec("gcc path/to/file.c -o path/to/file.exe 2>&1"); if(file_exists("path/to/file.exe")){ $res=shell_exec("path/to/file.exe <inputfile 2>&1"); echo $res; } This seems to work fine with simple programs. But when file.c (the source code entered) contains an infinite loop, this script takes down the server and uses a lot of memory and time. So here are my questions: Is there any way to detect how long the process file.exe has been running? How much memory is used by that process (file.exe)? Is there any way to kill the process file.exe if its memory or time consumption grows beyond a certain limit? That is, if we allow at most 2.5 s of time and 40 MB of memory for the process file.exe, and either of those two constraints is violated, we should display an appropriate error message to the client. Is this possible? I am using WAMP (Windows 7).
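
    The generic shell-level way to enforce both caps is to run the compiled program under ulimit plus a watchdog; the sketch below is for a Unix-like host with GNU coreutils (so not a drop-in fix for WAMP on Windows) and is meant only to illustrate the technique:

        #!/bin/bash
        # Cap CPU time (~2.5 s, rounded up to whole seconds) and address space (~40 MB),
        # then run the program; timeout(1) exits with status 124 when the time limit is hit.
        ulimit -t 3
        ulimit -v $((40 * 1024))
        timeout 2.5s ./file.exe < inputfile
        status=$?
        if [ $status -eq 124 ]; then
            echo "Time limit exceeded"
        elif [ $status -ne 0 ]; then
            echo "Program failed or exceeded the memory limit"
        fi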

    Read the article

  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130 GB of photos. The key parameters are: a non-negligible probability that the company selling the storage will still exist, and the data still be accessible, for at least five years; data should be considered safe once uploaded; reasonable terms of service: Google Drive reserving the right to literally do anything they want with their users' data is not acceptable, while the possibility that the CIA might look at those pictures is not considered a threat; easy to use from Windows, preferably as a drive; no nerve-wracking limitations ("cannot upload 10GB/day" or "files 500MB" etc.) that serve no purpose other than pushing the user to the next-higher price plan; some upgrade plan: there's currently 10-30 GB of new photos per year, with a tendency to increase, which might bust a 150 GB limit next January; the ability to somehow sort the pictures: currently they are sorted into folders, but something like tags would be just as good, if easy enough to apply; and of course, the pricing is important (although there's a reason this is the last item; reasonable data safety is considered more important). Nice-to-have, but not necessary, features would be: additional photo-related features (thumbnail generation, album sharing etc.) and access from the web and from platforms other than Windows (smartphones). Let me stress this again: the person who needs this is able to copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there's almost no understanding of what happens under the hood.

    Read the article

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment, and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon. Any tips on directory permissions and such to limit access to Tomcat's files? I know I will need to remove the default webapps - docs, examples, etc... are there any best practices I should be using here? What about all the config XML files? Any tips there? Is it worth enabling the Security Manager so that webapps run in a sandbox? Has anyone had experience setting this up? I have seen examples of people running two instances of Tomcat behind Apache. It seems this can be done using mod_jk or with mod_proxy... any pros/cons of either? Is it worth the trouble? In case it matters, the OS is Debian lenny. I am not using apt-get because lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
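
    As a starting point for the file-system side and the default-app cleanup, something along these lines may help (a sketch; the /opt/tomcat6 prefix is an assumption for a hand-installed Tomcat, since lenny's package is 5.5):

        # Keep the tree owned by root and read-only to the 'jakarta' group,
        # and let the daemon write only where it has to.
        chown -R root:jakarta /opt/tomcat6
        chmod -R g-w,o-rwx    /opt/tomcat6
        chown -R jakarta      /opt/tomcat6/logs /opt/tomcat6/temp /opt/tomcat6/work

        # Remove the default applications before going live
        rm -rf /opt/tomcat6/webapps/docs \
               /opt/tomcat6/webapps/examples \
               /opt/tomcat6/webapps/host-manager \
               /opt/tomcat6/webapps/manager \
               /opt/tomcat6/webapps/ROOT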

    Read the article
