Search Results

Search found 99 results on 4 pages for 'ulimit'.

Page 1/4

  • ulimit not reflected for jenkins slave

    - by techastute
    Problem: I get java.io.IOException: Too many open files during Solr
    indexing run through Jenkins. Some googling suggested setting the
    ulimit on the box where the job runs, so on a Linux x86_64 GNU/Linux
    box I set it in both of the following ways:

        ulimit -n 1000000

    and in /etc/security/limits.conf:

        userx soft nofile 1000000
        userx hard nofile 1000000

    where userx is the user the Jenkins job runs as. When I ssh to the
    box as userx manually and check ulimit -n in the terminal, I get
    1000000.

    Question: But when the same ulimit -n is executed from a Jenkins job,
    it reports only 1024, which is the default. Any advice would be much
    appreciated.
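
    One hedged possibility, not from the original post: the Jenkins agent
    inherits its limits from whatever daemon launched it (sshd or an init
    script), not from a login shell, so limits applied by PAM at login
    never reach the job. A minimal sketch for checking and working around
    this inside the job itself, assuming the hard limit is already high:

        #!/bin/sh
        # A process may always raise its own soft limit up to its hard limit.
        echo "hard limit inside the job: $(ulimit -Hn)"
        ulimit -n 65536 || echo "raise failed: fix limits, then restart the agent"
        ulimit -n

    If the hard limit printed inside the job is still 1024, the agent
    process itself must be restarted from an environment where the new
    limits are already in force.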

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself,
    and that the remaining ulimit governs how many concurrent processes I
    am allowed to have. The documentation is not very clear on this. Am I
    missing something?

    More importantly, for a real-world deployment, how can I know what
    sort of ulimit is realistic? It's a long-running daemon which spawns
    worker threads on demand, and reaps them when the load decreases.
    I've had it spin the server to its death a few times. The most
    important limit is probably memory, which I have now limited to 200M
    per process, but I'd like to figure out how I can enforce a limit on
    the number of children (the program does allow me to configure a
    maximum, but how do I know there are no bugs in that part of the
    code?)
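
    A hedged note, not from the post: on Linux, ulimit -u maps to
    RLIMIT_NPROC, which fork() checks against the total number of
    processes owned by the real UID, not just children of the current
    shell, so the workable threshold tracks everything else the user is
    running. A quick sketch for comparing the two numbers:

        # count every process owned by the current user -- this is what
        # RLIMIT_NPROC is checked against at fork() time
        ps -u "$USER" --no-headers | wc -l
        # current soft limit, for comparison
        ulimit -Su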

    Read the article

  • setting ulimit and ubuntu 8.04

    - by Wizzard
    Hi there. We have two Ubuntu 8.04 servers. On the database server I
    set table_cache to 1000, but when I restart MySQL the status shows
    only 257, and the open files limit says 1024. I adjusted the limit
    with ulimit -n 8192 and restarted MySQL; this seemed to do the trick,
    but after a few hours I ran ulimit -n and saw it had returned to
    1024. Bit of a worry.

    I edited /etc/security/limits.conf and added

        mysql soft nofile 8192
        mysql hard nofile 8192

    then rebooted: no change. I then changed mysql to * and rebooted: no
    change. I then changed it to the single line

        * - nofile 8192

    and rebooted: no change.

        cat /proc/sys/fs/file-max

    gives me 768730, and sysctl fs.file-max gives me fs.file-max =
    768730. I am at a bit of a loss as to how I can set and keep the
    ulimit value so I can increase the table cache properly in MySQL.
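
    A hedged aside, not part of the question: services started from init
    scripts never pass through PAM, so /etc/security/limits.conf only
    affects login sessions, which would explain why no variant of it
    helped here. One common approach is to let mysqld_safe raise the
    limit itself from the MySQL configuration, a sketch:

        # /etc/mysql/my.cnf -- assumes mysqld is launched via mysqld_safe
        [mysqld_safe]
        open-files-limit = 8192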

    Read the article

  • daemontools and ulimit

    - by oberstet
    I have a service run under daemontools, defined like this in
    /service/myservice/run:

        #!/bin/sh
        exec setuidgid someuser somecommand

    Now, if I run this script directly from a root shell, somecommand
    gets the correct ulimit (unlimited). However, when I start the
    service using svc -u /service/myservice, somecommand gets an
    effective ulimit of slightly above 11000. How can I have somecommand
    get the correct ulimit even when started via svc (not from a shell)?
    This is on FreeBSD 9 release.
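
    A hedged sketch, not the poster's confirmed fix: under daemontools
    the service inherits whatever limits svscan was started with, so one
    option is to raise them in the run script itself before dropping
    privileges (the run script executes as root, which may raise hard
    limits):

        #!/bin/sh
        # raise the limit explicitly before exec'ing the service;
        # -n assumes the limit in question is open descriptors
        ulimit -n 65536
        exec setuidgid someuser somecommand

    The daemontools companion tool softlimit expresses the same idea,
    e.g. exec softlimit -o 65536 setuidgid someuser somecommand.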

    Read the article

  • Ulimit settings in Oracle 11g on Linux 5

    - by Stuart
    Is there an issue with ulimit -Hn being set too low (at 1024) when
    Oracle recommends 65536? This is for 64-bit Oracle 11g on Linux 5. It
    is one of the settings that appears to be woefully short of its
    recommendation. But I am also aware that the database server in
    question is an Oracle Data Guard local standby, and should only
    really have a connection or two from its primary database server (to
    ship the redo logs across). The local standby database server has
    'hung' about 3 times in as many months and then requires a reboot. I
    do not have access to this server, so I rely on others to look at
    logs etc. A sanity check on kernel parameters uncovered the low value
    for ulimit -Hn. Has anyone ever seen that 'low' value cause a hang or
    crash?
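
    A hedged diagnostic, not from the post: rather than checking a fresh
    login shell, inspect the limits actually in force for the running
    Oracle processes, since the two can differ. On kernels that expose
    /proc/<pid>/limits (2.6.24+, though some enterprise kernels backport
    it), a sketch:

        # descriptor limits in force for each process owned by oracle
        for pid in $(pgrep -u oracle); do
            echo "== PID $pid =="
            grep 'Max open files' /proc/$pid/limits
        done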

    Read the article

  • Best way to override 1024 process ulimit

    - by CamelBlues
    On CentOS distros, there is an /etc/security/limits.d/90-nproc.conf
    that sets a process limit for all users:

        # Default limit for number of user's processes to prevent
        # accidental fork bombs.
        # See rhbz #432903 for reasoning.
        * soft nproc 1024

    I'd like to keep this limit in there, but allow one user to have more
    than 1024 processes. Because of how the server is puppetized, I'm
    unable to use the built-in bash ulimit command.
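
    A hedged approach using standard limits.conf syntax, not confirmed as
    the poster's solution: drop a second file into
    /etc/security/limits.d/ that names the one user explicitly; an
    explicit user entry takes precedence over the * wildcard, and a
    standalone file is easy for Puppet to own. The username below is a
    placeholder:

        # /etc/security/limits.d/91-deployuser.conf
        deployuser soft nproc 4096
        deployuser hard nproc 4096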

    Read the article

  • Why does limiting my virtual memory to 512MB with ulimit -v crash the JVM?

    - by Narinder Kumar
    I am trying to enforce the maximum memory a program can consume on a
    Unix system. I thought ulimit -v should do the trick. Here is a
    sample Java program I have written for testing:

        import java.util.*;
        import java.io.*;

        public class EatMem {
            public static void main(String[] args) throws IOException, InterruptedException {
                System.out.println("Starting up...");
                System.out.println("Allocating 128 MB of Memory");
                List<byte[]> list = new LinkedList<byte[]>();
                list.add(new byte[134217728]); //128 MB
                System.out.println("Done....");
            }
        }

    By default, my ulimit settings are (output of ulimit -a):

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 31398
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 31398
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    When I execute my Java program (java EatMem), it runs without any
    problems. Now I try to limit the maximum memory available to any
    program launched in the current shell to 512MB with:

        ulimit -v 524288

    ulimit -a shows the limit is set correctly (all other lines are
    unchanged):

        virtual memory          (kbytes, -v) 524288

    If I now try to execute my Java program, it gives me the following
    error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    Ideally this should not happen, as my Java program only takes around
    128MB of memory, which is well within the specified ulimit
    parameters. If I change the arguments to my Java program as below:

        java -Xmx256m EatMem

    the program again works fine, while asking for more memory than the
    ulimit allows, e.g.

        java -Xmx800m EatMem

    results in the expected error. Why does the program fail to execute
    in the first case after setting the ulimit? I have tried the above
    test on Ubuntu 11.10 and 12.04 with Java 1.6 and Java 7.
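
    A hedged explanation, not part of the question: ulimit -v caps
    virtual address space, and a 64-bit HotSpot JVM reserves its whole
    maximum heap (by default roughly a quarter of physical RAM) plus
    thread stacks and code cache up front, so the reservation alone can
    blow through 512MB even while resident usage stays near 128MB.
    Passing -Xmx256m shrinks that up-front reservation, which is why it
    works. On reasonably recent HotSpot builds, the default it would
    reserve can be inspected, a sketch:

        # print the heap ceiling the JVM would pick by default here
        java -XX:+PrintFlagsFinal -version | grep -i maxheapsize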

    Read the article

  • where are the default ulimit values set? (linux, centos)

    - by nomercysir
    I have two CentOS (5) servers with nearly identical specs. When I log
    in and run ulimit -u, on one machine I get unlimited, and on the
    other, 77824. When I run a cron job like

        * * * * * ulimit -u > ulimit.txt

    I get the same results (unlimited, 77824). I am trying to determine
    where these are set so that I can alter them. They are not set in any
    of my profiles (.bashrc, /etc/profile, etc. -- these wouldn't affect
    cron anyway) nor in /etc/security/limits.conf (which is empty). I
    have scoured Google and even gone so far as to run

        grep -Ir 77824 /

    but nothing has turned up so far. I don't understand how these
    machines could have come preset with different limits.

    I am actually asking not for these machines, but for a different
    (CentOS 6) machine which has a limit of 1024, which is far too small.
    I need to run cron jobs with a higher limit, and the only way I know
    how to set that is in the cron job itself. That's ok, but I'd rather
    set it system-wide so it's not as hacky. Thanks for any help. This
    seems like it should be easy (NOT).

    EDIT -- SOLVED

    Ok, I figured this out. It seems to be an issue either with CentOS 6
    or perhaps my machine configuration. In the CentOS 5 configuration, I
    can set in /etc/security/limits.conf:

        * - nproc unlimited

    and that effectively updates the account and cron limits. However,
    this does not work on my CentOS 6 box. Instead, I must list users
    explicitly:

        myname1 - nproc unlimited
        myname2 - nproc unlimited
        ...

    and then things work as expected. Maybe the UID specification works
    too, but the wildcard (*) definitely DOES NOT here. Oddly, wildcards
    DO work for the 'nofile' limit. I still would love to know where the
    default values are actually coming from, because by default this file
    is empty and I couldn't see why I had different defaults for the two
    CentOS boxes, which had identical hardware and were from the same
    provider.
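
    A hedged note on the remaining open question, not part of the post:
    when no PAM rule applies, the kernel's own defaults show through, and
    the boot-time default for RLIMIT_NPROC is derived from available RAM
    (roughly threads-max/2), which would explain two otherwise identical
    boxes reporting different numbers. On CentOS 6 specifically, the
    packaged file /etc/security/limits.d/90-nproc.conf caps nproc for
    ordinary users and sits alongside limits.conf. A quick check of both,
    as a sketch:

        # packaged nproc cap that applies in addition to limits.conf
        cat /etc/security/limits.d/90-nproc.conf
        # kernel ceiling the nproc default is derived from
        cat /proc/sys/kernel/threads-max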

    Read the article

  • How do ulimit -n and /proc/sys/fs/file-max differ?

    - by bantic
    I notice that on a new CentOS image I just booted up on EC2, the
    default ulimit is 1024 open files, but /proc/sys/fs/file-max is set
    at 761408, and I'm wondering how these two limits work together. I'm
    guessing that ulimit -n is a per-user limit on the number of file
    descriptors, while /proc/sys/fs/file-max is system-wide? If that's
    the case, say I've logged in twice as the same user -- does each
    logged-in user have a 1024 limit on the number of open files, or is
    it a limit of 1024 combined open files between the logged-in
    sessions? And is there much performance impact to setting your max
    file descriptors to a very high number, if your system isn't ever
    opening very many files?
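
    A hedged clarification, not from the post: ulimit -n is actually
    per-process, inherited across fork and exec, so every process in
    every session gets its own 1024-descriptor allowance, while
    fs.file-max caps the total open file handles across the entire
    system. Current system-wide usage can be compared against that
    ceiling, a sketch:

        # three fields: allocated handles, free handles, system maximum
        cat /proc/sys/fs/file-nr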

    Read the article

  • After setting ulimit to unlimited, I am unable to log in to the machine

    - by user419534
    For one requirement, I had to set the ulimit on one of my machines to
    unlimited. For this I changed the following in
    /etc/security/limits.conf, as below:

        # End of file
        oracle soft nofile unlimited
        oracle hard nofile unlimited
        oracle soft nproc 131072
        oracle hard nproc 131072
        oracle soft core unlimited
        oracle hard core unlimited
        oracle soft memlock 50000000
        oracle hard memlock 50000000
        * soft nofile unlimited
        * hard nofile unlimited

    and changed /etc/profile:

        if [ $USER = "oracle" ]; then
            if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p unlimited
                ulimit -n unlimited
            else
                ulimit -u unlimited -n unlimited
            fi
        fi

    I logged out, and now I am not able to connect to the machine at all.
    Could someone please help with this?
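
    A hedged diagnosis, not from the post: unlimited is not a valid value
    for nofile (the kernel caps it at fs.nr_open), and when pam_limits
    fails to apply a configured limit it can reject the whole session,
    which matches a lockout immediately after these edits. Recovery
    usually means editing the file from a console or single-user session
    and substituting finite values, a sketch:

        # /etc/security/limits.conf -- finite caps instead of "unlimited"
        oracle soft nofile 65536
        oracle hard nofile 65536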

    Read the article

  • How to limit the memory of an OS X program? Neither ulimit -v nor -m is working

    - by hectorpal
    My programs run out of memory about half of the time I run them.
    Under Linux I can set a hard limit on the available memory using
    ulimit -v mem-in-kbytes. Actually, I use ulimit -S -v mem-in-kbytes,
    so I get a proper memory allocation failure in the program and I can
    abort. But ulimit is not working in OS X 10.6. I've tried the -s and
    -m options, and they are not working. In 2008 there was some
    discussion about the same issue on MacRumors, but nobody proposed a
    good alternative. There should be a way for a program to learn that
    it's spending too much memory, or to set a limit through the OS.
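
    A hedged note, beyond the post: the XNU kernel has historically not
    enforced RLIMIT_AS (ulimit -v), and RLIMIT_RSS (-m) is advisory on
    most modern systems, so the shell accepts these settings without
    effect. One blunt workaround is an external watchdog that polls the
    process's resident size and kills it past a bound; names below are
    placeholders:

        #!/bin/sh
        # usage: ./memwatch.sh ./myprogram args...
        "$@" & pid=$!
        limit_kb=512000
        while kill -0 "$pid" 2>/dev/null; do
            # resident set size in KB (ps -o rss= works on OS X and Linux)
            rss=$(ps -o rss= -p "$pid" | tr -d ' ')
            [ "${rss:-0}" -gt "$limit_kb" ] && kill -9 "$pid"
            sleep 1
        done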

    Read the article

  • Why isn't "(ulimit -d 1000; firefox) &" working?

    - by MaxB
    I'm trying to limit the memory usage of firefox to prevent it from
    stalling the whole system with problematic web sites. I tried, in
    bash:

        (ulimit -d 1000; firefox) &

    This should limit the memory usage to 1000kB. Then I opened YouTube,
    and noticed, in top, that firefox is using 2.6% of the memory, or
    about 200MB, and not crashing. Clearly the limit is being ignored.
    Why is that, and how can I enforce it correctly?
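
    A hedged explanation, not part of the question: ulimit -d caps only
    the data segment grown via brk(), and modern allocators satisfy large
    requests with mmap(), which that limit does not cover, so it is
    quietly ineffective here. Address space (-v) is the limit that
    actually binds, a sketch:

        # cap total address space instead; mmap allocations count here
        (ulimit -v 2000000; firefox) &   # ~2 GB; too small and firefox aborts at startup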

    Read the article

  • What is a safe ulimit ceiling?

    - by Kaustubh P
    This is the output of ulimit -a:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    This is a 64-bit install, and I would like to increase the max open
    files from 1024 to a more heady limit such as 5000. Will that be any
    problem? Will it cause instability? Thanks.

    Read the article

  • How to write a Java program to increase the file limit using ulimit

    - by Sunil Kumar Sahoo
    I am using Fedora Linux, where ulimit -n 10000 raises the file limit
    to 10000. I want to achieve the same from a Java program. How do I
    write a Java program to increase the file limit using ulimit? I have
    tried the program below, but it did not work well. The program did
    not give any error, but it did not increase the file limit either.

        import java.io.IOException;

        public class IncreaseFIle {
            public static void main(String[] args) {
                // Note: exec() splits on whitespace, so this passes "ulimit"
                // to bash as a script path rather than running the builtin.
                String command = "/bin/bash ulimit -n 10000";
                // String command = "pwd";
                try {
                    Runtime.getRuntime().exec(command);
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }
        }

    Thanks, Sunil Kumar Sahoo
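
    A hedged explanation, not from the post: resource limits are
    inherited downward only, so no command a JVM spawns can raise the
    JVM's own limit, which is why the program appears to succeed while
    changing nothing. The limit has to be raised before the JVM starts,
    for example from a wrapper script, a sketch:

        #!/bin/sh
        # raise the descriptor limit first; the JVM then inherits it
        ulimit -n 10000
        exec java IncreaseFIle "$@"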

    Read the article

  • How to set ulimits for a service starting at boot?

    - by jayofdoom
    For MySQL to use large pages, I need to set a ulimit; I've done this
    in limits.conf. However, limits.conf (pam_limits.so) doesn't get read
    for init, only for "real" shells. I solved this before by adding a
    ulimit -l to the initscript's start function, but I need some sort of
    repeatable way to do this, now that the boxes are managed with Chef
    and we don't want to take over a file that's actually owned by the
    RPM.
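
    A hedged suggestion, not from the post: Red Hat-style initscripts
    commonly source /etc/sysconfig/<service> when it exists, and that
    file is usually not owned by the RPM, so Chef can manage it instead
    of the initscript. Assuming this particular initscript follows that
    convention, a sketch:

        # /etc/sysconfig/mysqld -- managed by Chef; sourced by the
        # initscript in the same shell, so ulimit applies to the daemon
        ulimit -l unlimited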

    Read the article

  • Why does redis report limit of 1024 files even after update to limits.conf?

    - by esilver
    I see this error at the top of my redis.log file:

        Current maximum open files is 1024. maxclients has been reduced
        to 4064 to compensate for low ulimit.

    I have followed these steps to the letter (and rebooted). Moreover, I
    see this when I run ulimit:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ulimit -n
        65535

    Is this error spurious? If not, what other steps do I need to
    perform? I am running redis 2.8.13 (tip of the tree) on Ubuntu LTS
    14.04.1 (again, tip of the tree). Here is the user info:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ ps aux | grep redis
        root    1027  0.0  0.0   66328    2112 ?  Ss  20:30  0:00 sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf
        ubuntu  1107 19.2 48.8 7629152 7531552 ?  Sl  20:30  2:21 /usr/local/bin/redis-server *:6379

    The server is therefore running as ubuntu. Here is my limits.conf
    file without comments:

        ubuntu@ip-XX-XXX-XXX-XXX:~$ cat /etc/security/limits.conf | sed '/^#/d;/^$/d'
        ubuntu soft nofile 65535
        ubuntu hard nofile 65535
        root soft nofile 65535
        root hard nofile 65535

    And here is the output of sysctl fs.file-max (run as sudo, since the
    unprivileged run only adds permission-denied noise):

        ubuntu@ip-10-102-154-226:~$ sudo sysctl -a | grep fs.file-max
        fs.file-max = 1528687

    Also, I see this error at the top of the redis.log file; not sure if
    it's related. It makes sense that the ubuntu user isn't allowed to
    change max open files, but given the high ulimits I have tried to
    set, it shouldn't need to:

        [1050] 23 Aug 21:00:43.572 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
        [1050] 23 Aug 21:00:43.572 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
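
    A hedged reading, not from the post: the init script starts redis
    with sudo from a boot context, and limits configured via PAM for
    interactive logins don't automatically carry over there, so the
    daemon still starts with 1024 and, running as ubuntu rather than
    root, cannot raise its own limit past the hard cap (hence "Operation
    not permitted"). One sketch is to raise the limit in the init script
    while still root, before dropping to the service user:

        # in the init script, as root, before launching redis
        ulimit -n 65536
        exec sudo -u ubuntu /usr/local/bin/redis-server /etc/redis/redis.conf

    Note that sudo runs its own PAM session, which can reapply
    limits.conf, so the values configured there need to agree.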

    Read the article

  • How do I increase the open files limit for a non-root user?

    - by iCode
    This is happening on Ubuntu Release 12.04 (precise) 64-bit, kernel
    Linux 3.2.0-25-virtual. I'm trying to increase the number of open
    files allowed for a user. This is for my Eclipse Java application,
    where the current limit of 1024 is not enough. According to the posts
    I've found so far, I should be able to put lines into
    /etc/security/limits.conf like this

        * soft nofile 4096
        * hard nofile 4096

    to increase the number of open files allowed for all users. But
    that's not working for me, and I think the problem is not related to
    that file. For all users, the default limit is 1024, regardless of
    what is in /etc/security/limits.conf (I have been rebooting after
    changing that file):

        $ ulimit -n
        1024

    Now, despite the entries in /etc/security/limits.conf, I can't
    increase that:

        $ ulimit -n 2048
        -bash: ulimit: open files: cannot modify limit: Operation not permitted

    The weird part is that I can change the limit downwards, but can't
    change it upwards -- not even back to a number which is below the
    original limit:

        $ ulimit -n 800
        $ ulimit -n
        800
        $ ulimit -n 900
        -bash: ulimit: open files: cannot modify limit: Operation not permitted

    As root, I can change the limit to whatever I want, up or down. It
    doesn't even seem to care about the supposedly system-wide limit in
    /proc/sys/fs/file-max:

        # cat /proc/sys/fs/file-max
        188897
        # ulimit -n 188898
        # ulimit -n
        188898

    So far I haven't found any way to increase the open files limit for a
    non-root user, and I really don't want to be running my application
    as root. How should I properly do this? I have looked at all the
    posts and tried the given options, but no luck!
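
    A hedged pointer, not from the post: a non-root process can only
    raise its soft limit up to its hard limit, and the hard limit from
    limits.conf is only applied if the session actually passes through
    pam_limits. Two quick checks, as a sketch:

        # the hard limit is the ceiling a non-root user may raise to
        ulimit -Hn
        # pam_limits must be enabled for the relevant session types
        grep pam_limits /etc/pam.d/common-session /etc/pam.d/sshd 2>/dev/null

    If ulimit -Hn still reports 1024, the limits.conf entries (including
    the leading * domain field) are not being picked up for this session
    type.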

    Read the article

  • How to set ulimits in Solaris 10

    - by James Bradley
    I normally use pam_limits.so and /etc/security/limits.conf to set
    ulimits on file size, CPU time, etc. for the regular users logging in
    to my server running Ubuntu. Can anyone suggest the best way of doing
    something similar on Solaris 10? I think it is done using
    /etc/system, but I have no idea what to add to the file, or indeed
    whether it is the correct file. I'm particularly interested in
    setting up ulimit -f without going down the .profile route.
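
    A hedged sketch, not verified against the poster's setup: Solaris 10
    generally handles this with per-project resource controls rather than
    /etc/system; file size and CPU time map to the process.max-file-size
    and process.max-cpu-time controls, attached to a project with projadd
    and projmod. Names and values below are placeholders:

        # create a project for a user and cap file size (bytes)
        projadd -U someuser user.someuser
        projmod -s -K "process.max-file-size=(priv,1073741824,deny)" user.someuser
        # cap CPU time (seconds) with a signal action
        projmod -s -K "process.max-cpu-time=(priv,3600,signal=SIGXCPU)" user.someuser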

    Read the article

  • JBoss deployment throws 'java.util.zip.ZipException: error in opening zip file' on Linux?

    - by Kaushalya
    I thought I'd post both the question and the answer for others'
    benefit. I deployed a large EAR (containing more than ~1024
    jars/wars) on JBoss running with Java 6 on Linux, and the deployment
    process died throwing the following exception:

        java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.deployment.DeploymentException.rethrowAsDeploymentException(DeploymentException.java:53)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:901)
            at org.jboss.deployment.MainDeployer.init(MainDeployer.java:895)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:809)
            at org.jboss.deployment.MainDeployer.deploy(MainDeployer.java:782)
            ....
        Caused by: java.lang.RuntimeException: java.util.zip.ZipException: error in opening zip file
            at org.jboss.util.file.JarArchiveBrowser.<init>(JarArchiveBrowser.java:74)
            at org.jboss.util.file.FileProtocolArchiveBrowserFactory.create(FileProtocolArchiveBrowserFactory.java:48)
            at org.jboss.util.file.ArchiveBrowser.getBrowser(ArchiveBrowser.java:57)
            at org.jboss.ejb3.EJB3Deployer.hasEjbAnnotation(EJB3Deployer.java:213)
            ....

    This was caused by the operating system's limit on the number of open
    file descriptors, which defaults to 1024 on Linux/Unix. You can check
    the current value with:

        ulimit -n

    To increase the number of open file descriptors (say, to 2048):

        ulimit -n 2048

    Check the man page of ulimit for more details.
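
    A hedged way to watch this happen, not from the original answer: each
    jar in the EAR is opened as a ZipFile and holds a descriptor, so the
    JVM's descriptor count climbs with the archive count during
    deployment. Assuming the server runs the stock org.jboss.Main entry
    point, a sketch:

        # count descriptors held by the JBoss JVM while it deploys
        lsof -p $(pgrep -f org.jboss.Main | head -n1) | wc -l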

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no
    good answer. I need to increase my table_open_cache, but first I need
    to increase the open_files_limit. I set the option in
    /etc/mysql/my.cnf:

        open-files-limit = 8192

    This worked fine in my previous install (Ubuntu 8.04), but now in
    Ubuntu 10.04, when I start the server up, open_files_limit is
    reported to be 1710. That seems like a pretty random number for the
    limit to be clipped to. Anyway, I tried getting around it by adding a
    line like this in /etc/security/limits.conf:

        mysql hard nofile 8192

    I also tried adding this to the pre-start script in mysql's upstart
    config (/etc/init/mysql.conf):

        ulimit -n 8192

    Obviously neither of those things worked. So where is the hoop that
    has been added between Ubuntu 8.04 and 10.04 through which I must
    jump in order to actually increase the open files limit?
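
    A hedged pointer, not from the question: under upstart, the pre-start
    stanza runs in its own shell, so a ulimit there never reaches mysqld,
    and limits.conf does not apply to daemons that never log in. Upstart
    has a dedicated stanza for exactly this, which may be the missing
    hoop, a sketch:

        # /etc/init/mysql.conf -- upstart applies this to the spawned daemon
        limit nofile 8192 8192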

    Read the article

  • Will the open files limit in CentOS affect HTTP connections? Does the limit apply to a single session or all sessions?

    - by forestclown
    When I run ulimit -n I get 256, which I assume means I can open 256
    files at the same time. Does that mean I can open 256 files in one
    single session, or across all sessions? For example, say I log in to
    my server as username "abc" (via putty/ssh) and open 200 files. With
    that session still running, I log in to the same server again as
    "abc" (via putty/ssh). Can I now open only another 56 files, or
    another 256? Lastly, does this limit also restrict the number of HTTP
    connections? E.g. with the above 200 files open, I then use wget or
    curl to make HTTP connections. Thanks
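
    A hedged answer, not from the post: the limit is per process, not per
    user or per session, so each login shell and each process it spawns
    starts with its own 256-descriptor allowance; and since sockets are
    descriptors too, a single process's HTTP connections count against
    the same 256. Descriptor usage of the current shell can be observed
    directly, a sketch:

        # descriptors currently open in this shell (Linux)
        ls /proc/$$/fd | wc -l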

    Read the article

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that
    limits the relative CPU use of a single process? That is, on a
    quad-core machine, I never want a process to use more than 2 CPUs at
    once, even if the process creates more threads. I do not want an
    absolute time limit, just a relative limit so that one task cannot
    dominate the machine. This is also different from renice, which
    allows a process to use all the resources but politely step aside if
    others need them too. ulimit is the usual resource-limiting tool, but
    it does not allow such CPU restrictions: it can limit the number of
    processes per user, or absolute CPU time, but not the maximum number
    of active threads of a single process. I've found a couple of
    user-level tools, like cpulimit, but not a system-level tool or
    setting. Does such a standard resource controller exist in Linux (Red
    Hat Enterprise, if it matters)? If there is such a limit imposed, how
    would a user identify it?
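
    A hedged suggestion, not from the post: pinning a process to a subset
    of cores with CPU affinity gives exactly this relative cap, since
    every thread the process creates inherits the same core mask. taskset
    ships in util-linux on Red Hat systems; the PID and command below are
    placeholders:

        # restrict a new process (and all of its threads) to cores 0 and 1
        taskset -c 0,1 ./some_daemon
        # or retarget an already-running process by PID
        taskset -cp 0,1 12345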

    Read the article
