Search Results

Search found 2618 results on 105 pages for 'registration scheduling'.

Page 61/105 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • 613 threads limit on debian

    - by Joel
    When running this program thread-limit.c on my dedicated debian server, the output says that my system can't create more than around 600 threads. I need to create more threads than that, so I need to fix whatever is misconfigured on my system. Here is some information about my dedicated server: de801:/# uname -a Linux de801.ispfr.net 2.6.18-028stab085.5 #1 SMP Thu Apr 14 15:06:33 MSD 2011 x86_64 GNU/Linux de801:/# java -version java version "1.6.0_26" Java(TM) SE Runtime Environment (build 1.6.0_26-b03) Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode) de801:/# ldd $(which java) linux-vdso.so.1 => (0x00007fffbc3fd000) libpthread.so.0 => /lib/libpthread.so.0 (0x00002af013225000) libjli.so => /usr/lib/jvm/java-6-sun-1.6.0.26/jre/bin/../lib/amd64/jli/libjli.so (0x00002af013441000) libdl.so.2 => /lib/libdl.so.2 (0x00002af01354b000) libc.so.6 => /lib/libc.so.6 (0x00002af013750000) /lib64/ld-linux-x86-64.so.2 (0x00002af013008000) de801:/# cat /proc/sys/kernel/threads-max 1589248 de801:/# ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 794624 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 10240 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 128 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Here is the output of the C program: de801:/test# ./thread-limit Creating threads ... Address of c = 1061520 KB Address of c = 1081300 KB Address of c = 1080904 KB Address of c = 1081168 KB Address of c = 1080508 KB Address of c = 1080640 KB Address of c = 1081432 KB Address of c = 1081036 KB Address of c = 1080772 KB 100 threads so far ... 200 threads so far ... 300 threads so far ... 400 threads so far ... 500 threads so far ... 600 threads so far ... Failed with return code 12 creating thread 637. Any ideas how to fix this, please?
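
    The 2.6.18-028stab kernel string suggests this "dedicated" server is actually an OpenVZ/Virtuozzo container, and containers enforce per-container beancounter limits (numproc, kmemsize, privvmpages) that can cap thread creation no matter what ulimit or threads-max say. A hedged first check, assuming the beancounters file is exposed inside the container:

        # non-zero values in the failcnt column show which container limit is being hit
        cat /proc/user_beancounters
        # a smaller per-thread stack sometimes raises the ceiling, though with the stack
        # already at 128 KB the container limits are the more likely culprit
        ulimit -s 64 && ./thread-limit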

    Read the article

  • Aruba Wireless Controller 200 and AP70 manual

    - by techie
    I have an Aruba wireless system that is currently in use but there is no documentation from the previous person in charge. I have no manuals or login information for the wireless controllers and APs. I checked the Aruba website and you need to register to access the support information but registration isn't instant and takes several days. I've waited for quite a while now and have tried googling and checking the Aruba forums but have found no indication of a manual. What I really need is the ability to reset the controller and APs so I can access the device with the default username and password. There is no reset button on this device so I have no idea how you go about resetting the controller and APs. Hm it seems I can't create a new tag as a new user. If possible can someone add an "Aruba" tag?

    Read the article

  • Is MS Forefront Add-in for Exchange server detecting HTML/Redirector.C incorrectly?

    - by rhart
    Users of a website hosted by our organization occasionally send complaints that our registration confirmation emails are infected with HTML/Redirector.C. They are always using an MS Exchange Server with the MS Forefront for Exchange AV add-in. The thing is, I don't think the detection is legitimate. I think the issue is that the link in the email we send causes a redirect. I should point out that this is done for a legitimate purpose. :) Has anybody run into this before? Naturally, Microsoft provides absolutely no good information on this one: http://www.microsoft.com/security/portal/Threat/Encyclopedia/Entry.aspx?Name=Trojan%3aHTML%2fRedirector.C&ThreatID=-2147358338 I can't find any other explanation of HTML/Redirector.C on the Internet either. If anyone knows of a real description for this virus that would be greatly appreciated as well.

    Read the article

  • Wildcard 'A' record overriding CNAME record

    - by user116890
    I have a wildcard * A record for self-registration of subdomains by users on our web app. All works fine. I now need to set up an alias for support.mydomain.com to point to mydomain.freshdesk.com. I created a CNAME record as per the instructions; however, it appears my wildcard A record is overriding the CNAME entry. Any thoughts on how to resolve this? I need the wildcard, so creating an A record for each user subdomain is not possible. Thanks.
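
    For what it's worth, a wildcard is only supposed to match names that do not otherwise exist (RFC 4592), so an explicit CNAME for support.mydomain.com should normally win over *.mydomain.com. A quick hedged check that the CNAME is actually being served (ns1.yourprovider.example is a placeholder for your real nameserver):

        # what resolvers see (possibly cached)
        dig +noall +answer support.mydomain.com
        # what the authoritative server is actually handing out
        dig +noall +answer @ns1.yourprovider.example support.mydomain.com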

    Read the article

  • Administer postgres from PGAdmin on remote mac using ssh tunnel

    - by Aidan Ewen
    I've got PostgreSQL installed on an Ubuntu server and I'm trying to connect to that server using PGAdmin on a remote macbook. I've created an ssh tunnel - macbook:~postgres$ ssh -L 5423:localhost:5432 [email protected] And I can connect using psql on the macbook as expected - macbook:~ me$ psql -U postgres -p 5423 -h localhost ... postgres=# In the 'New Server Registration' window on PGAdminIII I'm entering the following credentials - Name - MyServer Host - localhost Port - 5423 Maintenance DB - postgres Username - postgres Password - <remote_postgres_password> However, the connection fails - Error connecting to the server: FATAL: password authentication failed for user "postgres" Not sure what's going on here; these seem to be the same credentials I've used with psql.
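
    One hedged thing to rule out is that something other than the tunnel is answering on the mac's port 5423 (a stale second tunnel, or a local Postgres instance), so psql and pgAdmin may not be talking to the same listener. A quick check on the macbook:

        # show which process owns the local listener pgAdmin connects to
        lsof -nP -iTCP:5423 -sTCP:LISTEN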

    Read the article

  • Backup strategy for developer-focused Apple environments?

    - by ewwhite
    It's interesting to see the technological split between structured corporate environments and more developer-driven/startup environments. Some of the Microsoft technologies I take for granted (VSS, Folder Redirection, etc.) simply are not available when managing the increasing number of Apple laptops I see in DevOps shops. I'm interested in centralized and automated backup strategies for a group of 30-40 Apple laptops... How is this typically done safely and securely, assuming these are company-owned machines (versus BYOD)? While Apple has Time Machine, it's geared toward individual computer backups and doesn't seem to work reliably in a group setting. Another issue with these workstations is the presence of Vagrant/Virtual Box VMs on the developers' systems. Time Machine and virtual machines typically don't work well unless the VMs are excluded from the backup set. I'd like a push-based backup process with some flexible scheduling options. I know how to handle the backend storage, but I'm not sure on what needs to be presented to the client systems. Due to the nature of the data here, cloud-based backup may not be a viable option. Any suggestions about how you handle this in your environment would be appreciated. Edit: The virtual machine backups are no longer important. They can be excluded from the process and planning.
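
    Since the VM images can now be excluded, a hedged per-machine sketch (the path is an assumption about where VirtualBox keeps its disks) that keeps the large, constantly changing VM files out of Time Machine or any other client-side backup set:

        # exclude VirtualBox disk images from Time Machine on each workstation
        sudo tmutil addexclusion -p "$HOME/VirtualBox VMs"
        # confirm the exclusion took effect
        tmutil isexcluded "$HOME/VirtualBox VMs"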

    Read the article

  • Unable to call through asterisk

    - by sk
    I want to create a VoIP service. I have installed asterisk-1.4 on a dedicated, remotely hosted Debian lenny box. I made a sip.conf and extensions.conf so as to place a call between two SIP phones (I am using X-Lite 3.0) installed on other Windows PCs. Whenever I switch these phones on, the Asterisk console shows: Registration from '"1000"<sip:[email protected]>' failed for '122.168.10.254' - Peer is not supposed to register where xx.xx.xx.xx is the server's IP, i.e. my SIP phones are unable to register with the Asterisk server. Please help me place a call between two SIP phones. #sip show peers Name/username Host Dyn Nat ACL Port Status 2000 (Unspecified) D 0 Unmonitored 1000 (Unspecified) D 0 Unmonitored 2 sip peers [Monitored: 0 online, 0 offline Unmonitored: 0 online, 2 offline] # sip show registry Host Username Refresh State Reg.Time # sip show channels Peer User/ANR Call ID Seq (Tx/Rx) Format Hold Last Message 0 active SIP channels
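
    "Peer is not supposed to register" is what Asterisk logs when the matching sip.conf entry is not set up for dynamic registration. A minimal hedged sketch of what the two peer sections might look like (secret and context values are placeholders for whatever your extensions.conf uses):

        [1000]
        type=friend        ; accept both registrations and calls from this peer
        host=dynamic       ; let the phone register from wherever it is
        secret=changeme
        context=internal

        [2000]
        type=friend
        host=dynamic
        secret=changeme
        context=internal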

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac: java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:658) at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657) at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197) at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184) at java.security.AccessController.doPrivileged(Native Method) at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172) at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138) The same set of unit tests pass perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 266 virtual memory (kbytes, -v) unlimited $ java -version java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326) Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode) The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m and setting the stack to 64k and 128k (in each of these combinations) with no luck. My open files limit was originally 256 and I have bumped this to 1024. I have been googling around for a bit and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Does anyone have any more ideas? EDIT: Here is some environment information on my Ubuntu box: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited $ java -version java version "1.6.0_24" Java(TM) SE Runtime Environment (build 1.6.0_24-b07) Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
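
    One visible difference between the two ulimit dumps is max user processes: 266 on the Mac versus unlimited on Ubuntu, and if the JVM's native threads are being counted against that per-user ceiling, the error would appear long before the heap is exhausted. A hedged sketch of raising the limits on OS X (values are illustrative):

        # current kernel caps
        sysctl kern.maxproc kern.maxprocperuid
        # raise them, then raise the shell limit and re-run the tests from that shell
        sudo sysctl -w kern.maxproc=2048
        sudo sysctl -w kern.maxprocperuid=1024
        ulimit -u 1024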

    Read the article

  • Dual-boot two instances of the same copy of Vista using same serial number

    - by fred.bear
    I have a single Vista OEM registration number which has been registered to the current hardware (single partition only), and I want to use it for two instances of dual-booted Vista... Because both Vistas will be running on the same hardware (and obviously only one at any one time), will they both be recognized as genuine by Windows Update etc., i.e. will they both pass the Windows Genuine Advantage requirements? The only difference between the hardware involved will be the partition. Also, are there any special issues with dual- (triple-?) booting two instances of Vista and also one Ubuntu OS?

    Read the article

  • What does it mean to setup Postfix as "SMTP only"? [closed]

    - by BryanWheelock
    Possible Duplicate: What does it mean to setup Postfix as “SMTP only”? I am trying to set up Postfix for a few different domains on a virtual host. I need to have email set up just to send out registration confirmations and new password requests. No one will have a mailbox on this server. It seems this means that I want to set up Postfix as SMTP only. I've also read about configuring Postfix null clients for similar needs. What is the difference between a Postfix null client and SMTP only?
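
    In most write-ups, "SMTP only" just means send-only: Postfix accepts mail from local applications and relays it out, with no mailboxes and no POP/IMAP. A null client is the stricter variant that does no local delivery at all and hands everything to a central mail hub. A hedged main.cf sketch for the send-only case, adapted from the Postfix standard configuration examples (hostnames are placeholders):

        # /etc/postfix/main.cf - send-only host, no local mailboxes
        myhostname = web1.example.com
        myorigin = $mydomain
        # accept mail only from applications on this machine
        inet_interfaces = loopback-only
        # deliver nothing locally; registration and password mail goes straight out
        mydestination =
        local_transport = error:local delivery is disabled
        # for a true null client, relay everything through a central hub instead:
        # relayhost = [mailhub.example.com]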

    Read the article

  • Where to store short strings (with my key) on the internet?

    - by Vi
    Is there a simple service to store strings under my key that can be used by bots? Requirements: simple command-line access with automatic posting allowed; no need to keep a session with the service alive; I choose the key (so pastebins fail); no registration/authentication required (for simplicity); the string should be kept for about a month. I want something like: Store: $ echo some_data_0x1299C0FF | store_my_string testtest2011 Retrieve: $ retrieve_my_string testtest2011 some_data_0x1299C0FF Do you have ideas about what I should use for it? I can only think of using IRC somehow (channel topics, /whowas, ...), but this is too complex for this simple task. No security is needed: anyone can update my string. The task looks very simple, so I expect the solution to be similarly simple. Expecting something like a single simple curl call.

    Read the article

  • How to add assemblies in a 64-bit machine?

    - by marko
    My old cmd-script: C:\Windows\Microsoft.NET\Framework\v2.0.50727\RegAsm blabla.dll C:\Windows\Microsoft.NET\Framework\v2.0.50727\GacUtil -i blabla.dll (which works fine on my old machine). But now I have a script for a 64-bit machine (Windows Server 2008 R2): C:\Windows\Microsoft.NET\Framework64\v2.0.50727\RegAsm blabla.dll C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\GacUtil -i blabla.dll Then I get this message: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\RegAsm blabla.dll Microsoft (R) .NET Framework Assembly Registration Utility 2.0.50727.5420 Copyright (C) Microsoft Corporation 1998-2004. All rights reserved. Types registered successfully C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\GacUtil -i blabla.dll 'C:\Program' is not recognized as an internal or external command, operable program or batch file. The second command is not successful.
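
    The failing part is a quoting problem rather than anything 64-bit specific: cmd splits the unquoted path at the space in "Program Files" and tries to run 'C:\Program'. Wrapping the paths in quotes should be enough, e.g.:

        "C:\Windows\Microsoft.NET\Framework64\v2.0.50727\RegAsm" blabla.dll
        "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\NETFX 4.0 Tools\GacUtil" -i blabla.dll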

    Read the article

  • /etc/security/limits.conf for setting program limits in Linux

    - by Flavius Akerele
    I have the following inside /etc/security/limits.conf (I have specified root separately because * will not include it.) user2 - core unlimited * - core 0 root - core 0 * - rss 512000 root - rss 512000 * - nproc 100 root - nproc 100 * - maxlogins 1 root - maxlogins 1 I run a program as user2 (./programname) but /proc/3498/limits says cores are disabled: Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 0 bytes Max resident set 524288000 524288000 bytes Max processes 100 100 processes Max open files 1024 1024 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 14001 14001 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us Both ulimit -Sa and ulimit -Ha output that cores are disabled: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 14001 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) 512000 open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) unlimited cpu time (seconds, -t) unlimited max user processes (-u) 100 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Why are cores disabled ?
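
    One hedged way to narrow it down: limits.conf is only applied where pam_limits.so is loaded, and a soft limit can always be raised at run time up to the hard limit, so a quick test from a fresh login shell as user2 shows whether the "user2 - core unlimited" line is being honoured at all:

        grep -r pam_limits /etc/pam.d/     # limits.conf only applies via pam_limits.so
        ulimit -Hc                         # hard limit as seen by this session
        ulimit -c unlimited                # fails if the hard limit is still 0
        ./programname &
        grep -i core /proc/$!/limits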

    Read the article

  • Firewall issue with multiple SIP PROXY / REGISTRAR servers

    - by MikeBrom
    Hi, we have a pair of Internet-facing SIP PROXY/REGISTRAR servers (for resilience and load-balancing). When a SIP phone registers, it will be handled by one of the REGISTRAR servers (round-robin DNS) - and since this registration is renewed, the firewall port/address translation is maintained. Therefore, when a call is to be sent back to the phone the INVITE message passes successfully through the firewall. However, it is likely that the phone may register with one of the two servers, but the INVITE may come from the other. In this situation, the call fails since there is no translation in place on the firewall. Is there a feature in the SIP protocol to facilitate this? Any other ideas? As our traffic grows, we will no doubt end up with more than two servers - so the problem will escalate. Thanks, Mike

    Read the article

  • Why won't my Windows 7 KMS key work on my Server 2008 KMS server?

    - by Ryan Bolger
    Our Microsoft volume licensing site was recently updated to include our Windows 7 and Server 2008 R2 KMS keys. We have an existing KMS server running on Server 2008 (not R2). In an attempt to be proactive about supporting the new OSes in our environment, I unregistered the old KMS key with slmgr.vbs and tried registering the new key. The registration failed with Error 0xC004F050. The description for that error was "The Software Licensing Service reported that the product key is invalid." What's wrong? I've checked and double-checked it for typos against what is listed on the Volume Licensing website.
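
    For what it's worth, a Server 2008 (non-R2) KMS host needs Microsoft's KMS update that adds Windows 7 / Server 2008 R2 client support before it will accept the newer CSVLK; without it the key install fails in much this way. The commands themselves are the usual slmgr ones (the key below is a placeholder):

        rem install the new KMS host key, activate, then verify the licensing channel
        cscript %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
        cscript %windir%\system32\slmgr.vbs /ato
        cscript %windir%\system32\slmgr.vbs /dlv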

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get: me@corellia:~/Configs/$ git push origin master Counting objects: 18, done. Delta compression using up to 8 threads. Compressing objects: 100% (14/14), done. fatal: Out of memory, malloc failed MiB | 685 KiB/s error: pack-objects died of signal 13 error: failed to push some refs to 'git@dagobah:Configs' I've been searching the web, and notably found: http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get: 24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje Also, if I look at /proc/meminfo during the push, I get: MemTotal: 524288 kB MemFree: 289408 kB Buffers: 0 kB Cached: 0 kB SwapCached: 0 kB Active: 0 kB Inactive: 0 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 524288 kB So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command: scottj@dagobah:~$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 204800 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 204800 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
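
    The git-unpack-obje process in the top output is running on the server, so the malloc failure is most likely on the gitosis side, with the client's pack-objects then dying of SIGPIPE (signal 13). A hedged set of tweaks that often helps low-memory remotes is capping the pack and delta memory in the server-side repository (numbers are illustrative):

        # run inside the bare repository on the server (e.g. the gitosis Configs.git)
        git config pack.threads 1
        git config pack.windowMemory 32m
        git config pack.deltaCacheSize 16m
        git config core.packedGitLimit 64m
        git config core.packedGitWindowSize 16m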

    Read the article

  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error: Cannot connect to servername\mssqlserver2. Additional information: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server)") (This is the error message when trying to connect with Microsoft SQL Server Management Studio; other tools experience the same error, but of course say it in different ways.) Immediately re-attempting to log in works just fine, so whatever the cause it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication methods are supported on this instance). What's even weirder, though, is that the first instance on this server has never once demonstrated this problem. Server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (host is likewise Server 2008 R2). The server has 2GB of RAM, and seems to regularly be using 90% of that -- could low memory be the cause of this issue? I could see this second instance -- which is not used very often yet -- being swapped out to disk, and then taking too long to load back into memory to respond in time to the connection request, but I'd rather have more than just my own hunch before I go scheduling a downtime for this server (the first instance is used regularly) and then just throwing extra resources at it in the blind hope that the problem goes away.
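
    If the paged-out-instance hunch is right, one hedged mitigation is to reserve a memory floor for the second instance so its first response doesn't have to wait for pages to come back in; weigh this against the 2 GB total on the VM, since it takes memory away from the busier first instance. Raising the connection timeout in Management Studio's connection options is the low-risk workaround in the meantime.

        sqlcmd -S servername\mssqlserver2 -Q "EXEC sys.sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sys.sp_configure 'min server memory (MB)', 256; RECONFIGURE;"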

    Read the article

  • The Server Fault Wiki of recommended practices [migrated]

    - by Avery Payne
    So I've noticed that there are several recommendations on basic practices on Server Fault, but there doesn't seem to be a cohesive view as to how those recommendations would all fit together. So I thought I would lump these together as a kind of mental exercise to see what the "ServerFault Community IT Department" would look like if it were implemented. This would give a few things: it would make a reasonable wiki (in the true wiki spirit of many contributions), it would provide several links to well-vetted practices, and it would be kind of fun to see what the amalgamation would look like. And who knows, it may even point out some interesting issues between different forms of "best practices", although I would be stunned if there was a conflict hidden in there someplace... Add your favorites from Server Fault as answers, and I'll re-edit this section with the results. Here are a few categories to collect different ideas together. Hardware Configuration(s) Server room configuration. Server room temperature Firmware Updates and Scheduling Storage Configuration(s) Selecting a NAS box Linux: Dealing with /tmp Linux: Install apps in /var or /opt? Network Configuration(s) checking DNS health and compliance Security Practice(s) Password (General) Best Practices Password sharing methods Windows Update Updating Windows Servers that are hosts for VMs Network Service(s) User Service(s) User Naming & Deletion Upgrade Process(es) Disaster Recovery Checking Backups Documenting an outage for a post-mortem review Last Edit: 2010-02-17

    Read the article

  • MGE UPS Cut power - What happened?

    - by JT.WK
    I have 3 x MGE Pulsar M 3000 2700W UPS units within my server room which have run perfectly up until now. On Saturday morning I noticed that one of these UPS units was no longer outputting power; the LCD displayed a message saying "load not powered" and told me to press the power button to start output. Needless to say, the servers, switches and routers it was supporting were all turned off. I tried pressing and even holding the power button, but the unit refused to start back up again. Only power cycling the unit got it back up again. I have checked the logs on the UPS, although they were useless. Nothing out of the ordinary, and no email notifications had been sent. The output level sits on about 51% and all battery checks are OK. It is now three days on and the UPS is still up and running (although I am scheduling an outage to get it out of there ASAP). Does anyone have any idea what could have gone wrong here? Is there anything else that I can check that could help?

    Read the article

  • Reducing the time between checks on a Nagios server

    - by Farooq Hussain
    Can anyone let me know how I would reduce the time between Last Check Time and Next Scheduled Check on a particular service? I have a very critical task to monitor, and the time between checks is currently 5 minutes, which is too long for this service. Can I reduce that time? I need this to be 1 minute or even 30 seconds. I want Nagios to check this service every 30 seconds. I currently have the service defined as follows: define service{ use local-service host_name OpenSIP,test-RTSIP service_description SIP Registration check_command check_nrpe!check_sipreg check_freshness 0 freshness_threshold 900 active_checks_enabled 1 passive_checks_enabled 1 }
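
    Nagios schedules in units of interval_length (60 seconds by default), and the local-service template usually brings its own normal_check_interval, so both ends may need touching. A hedged sketch for a 30-second cycle; note that changing interval_length rescales every other host and service interval too, so their existing values would need multiplying by 60:

        # nagios.cfg: make the scheduling unit 1 second instead of 60
        interval_length=1

        # service definition: override whatever the local-service template sets
        define service{
                use                     local-service
                host_name               OpenSIP,test-RTSIP
                service_description     SIP Registration
                check_command           check_nrpe!check_sipreg
                normal_check_interval   30
                retry_check_interval    10
                max_check_attempts      3
                active_checks_enabled   1
                passive_checks_enabled  1
                }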

    Read the article

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks). I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side? Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been awhile since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware. The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues. http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html For example, these are suggested changes for an HP Smart Array RAID controller: echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler blockdev --setra 65536 /dev/cciss/c0d0 echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
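
    Beyond the block-queue settings above, the virtual-memory writeback knobs are a common companion tweak on hosts with battery- or flash-backed write cache, since the defaults let a large amount of dirty data accumulate before flushing. A hedged /etc/sysctl.conf sketch (the percentages are starting points, not gospel):

        # start background writeback earlier and cap the dirty-page backlog
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 15
        # prefer dropping page cache over swapping application memory
        vm.swappiness = 10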

    Read the article

  • Maximum limit of filepointer in php reached and not changeable

    - by mlaug
    I have a server with the current PHP 5.3.x version installed. We are running a really simple and small socket server in PHP that connects to a lot of clients, so we need to raise the open file limit; that has already been done on the server for the user that runs it: #ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 29879 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 8192 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 29879 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited We also compiled PHP with --enable-fd-setsize=8192, but we still get [19-Nov-2012 09:24:23 Europe/Berlin] PHP Warning: socket_select(): You MUST recompile PHP with a larger value of FD_SETSIZE. It is set to 1024, but you have descriptors numbered at least as high as 1024. --enable-fd-setsize=2048 is recommended, but you may want to set it to equal the maximum number of open files supported by your system, in order to avoid seeing this error again at a later date. once in a while in our logs. Does anyone know how to configure the Unix server and PHP correctly to get this working? I found a bug report, but it is from 2006 and marked as "not a bug": https://bugs.php.net/bug.php?id=37025&edit=1
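
    FD_SETSIZE is a C preprocessor macro, so it has to reach the compiler; if ./configure --help does not list --enable-fd-setsize, the switch was silently ignored. A hedged rebuild sketch (whether the C library honours the override for select() should be verified on your system, and the longer-term fix is moving from socket_select() to an event-based extension):

        # define the macro for the whole build, then re-run your existing configure line
        export CPPFLAGS="-DFD_SETSIZE=8192"
        export CFLAGS="-DFD_SETSIZE=8192"
        ./configure <your existing options>
        make && make install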

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take Snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following: mount the latest __RECENT snapshot LUN on the host using a specific drive letter, perform a TSM-based incremental backup, then dismount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
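
    For the return-code part, a hedged skeleton of the batch file: cmd's %ERRORLEVEL% captures the TSM client's result, and exiting with it lets the TSM scheduler record a failed backup instead of reporting success no matter what (paths and the drive letter are placeholders; the SnapDrive mount and dismount steps stay whatever sdcli calls are already in use):

        @echo off
        rem 1) mount the latest __RECENT snapshot LUN as X: (existing SnapDrive step)

        rem 2) back the mounted copy up with the TSM client
        "C:\Program Files\Tivoli\TSM\baclient\dsmc.exe" incremental X:\ -subdir=yes
        set RC=%ERRORLEVEL%

        rem 3) dismount the LUN (existing SnapDrive step)

        rem dsmc uses 0 for success and 4 for "completed with skipped files";
        rem anything higher is handed back so the TSM scheduler flags the run as failed
        if %RC% LEQ 4 ( exit 0 ) else ( exit %RC% )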

    Read the article
