Search Results

Search found 3641 results on 146 pages for 'threads'.


  • How to choose python version to install in gentoo

    - by Shamanu4
    Hello, I'm using Gentoo Linux and I want to install python2.5, but there's a problem. emerge -av python shows These are the packages that would be merged, in order: Calculating dependencies... done! [ebuild U ] dev-lang/python-3.1.2-r3 [3.1.1-r1] USE="gdbm ipv6 ncurses readline ssl threads (wide-unicode%*) xml -build -doc -examples -sqlite* -tk -wininst (-ucs2%)" 9,558 kB [ebuild U ] app-admin/python-updater-0.8 [0.7] 8 kB and there are ebuilds for more versions: # ls /usr/portage/dev-lang/python ChangeLog files Manifest metadata.xml python-2.4.6.ebuild python-2.5.4-r4.ebuild python-2.6.4-r1.ebuild python-2.6.5-r2.ebuild python-3.1.2-r3.ebuild How do I choose the ebuild that I want? (python-2.5.4-r4)
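
    A sketch of one way to pin that version (my suggestion, not from the post): Portage accepts an exact-version atom, so something like

        emerge -av =dev-lang/python-2.5.4-r4

    should pull in just that ebuild, assuming it isn't masked for your profile; the active interpreter can then be switched with eselect python set python2.5 and python-updater run afterwards.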

    Read the article

  • Running multiple image manipulations in parallel causing OutOfMemory exception

    - by Tom
    I am working on a site where I need to be able to split an image of around 4000x6000 into 4 parts (amongst many other tasks) and I need this to be as quick as possible for multiple users. My current code for doing this is var bitmaps = new RenderTargetBitmap[elements.Length]; using (var stream = blobService.Stream(key)) { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.StreamSource = stream; bi.EndInit(); for (var i = 0; i < elements.Length; i++) { var element = elements[i]; TransformGroup transformGroup = new TransformGroup(); TranslateTransform translateTransform = new TranslateTransform(); translateTransform.X = -element.Left; translateTransform.Y = -element.Top; transformGroup.Children.Add(translateTransform); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.PushTransform(transformGroup); cont.DrawImage(bi, new Rect(new Size(bi.PixelWidth, bi.PixelHeight))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); bitmaps[i] = rtb; } } for (var i = 0; i < bitmaps.Length; i++) { using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(bitmaps[i])); encoder.Save(ms); var regionKey = WebPath.Variant(key, elements[i].Id); saveBlobService.Save("image/png", regionKey, ms); } } I am running multiple threads which take jobs off a queue. I am finding that if this part of the code is hit by 4 threads at once I get an OutOfMemory exception. I can stop this happening by wrapping all the code above in a lock(obj) but this isn't ideal. I have tried wrapping just the first using block (where the file is read from disk and split) but I still get the out of memory exceptions (this part of the code executes quite quickly). Is this normal considering the amount of memory this should be taking up? Are there any optimisations I could make? Can I increase the memory available? UPDATE: My new code as per Moozhe's help public static void GenerateRegions(this IBlobService blobService, string key, Element[] elements) { using (var stream = blobService.Stream(key)) { foreach (var element in elements) { stream.Position = 0; BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.SourceRect = new Int32Rect(element.Left, element.Top, element.Width, element.Height); bi.StreamSource = stream; bi.EndInit(); DrawingVisual vis = new DrawingVisual(); DrawingContext cont = vis.RenderOpen(); cont.DrawImage(bi, new Rect(new Size(element.Width, element.Height))); cont.Close(); RenderTargetBitmap rtb = new RenderTargetBitmap(element.Width, element.Height, 96d, 96d, PixelFormats.Default); rtb.Render(vis); using (MemoryStream ms = new MemoryStream()) { PngBitmapEncoder encoder = new PngBitmapEncoder(); encoder.Frames.Add(BitmapFrame.Create(rtb)); encoder.Save(ms); var regionKey = WebPath.Variant(key, element.Id); blobService.Save("image/png", regionKey, ms); } } } }

    Read the article

  • How can I poll different aws sqs in the same process?

    - by Luccas
    What is the right way to poll from different AWS SQS queues in the same process? Suppose I have a ruby script, listen_queues.rb, and run it. Do I need to create threads to wrap each SQS poll, or start sub-processes? t1 = Thread.new do queue1.poll do |msg| .... end t2 = Thread.new do queue2.poll do |msg| .... end t2.join I tried this code, but the poll is not receiving any of the messages available. When I run only one of them (t1 or t2), it works. But I need the two running. What is going on? Thanks!!

    Read the article

  • java.lang.OutOfMemoryError on ec2 machine

    - by vinchan
    I have a Java app on a large instance that will spawn up to 800 threads. I can run the application fine as user "root" but not as another user which I created. I get the deadly java.lang.OutOfMemoryError: unable to create new native thread at java.lang.Thread.start0(Native Method) at java.lang.Thread.start(Thread.java:657) at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:943) at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1325) nightmare. I have already tried increasing the stack size in limits.conf, to no avail. Please help me out. What is different here between root and the other user?
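
    A sketch of what I would compare first (the file is standard, but the username and values are examples, not from the post): when root can spawn the threads and a normal user cannot, the difference is usually the per-user process/thread limit (nproc), not the stack size. Check ulimit -u as each user and, if needed, raise it in /etc/security/limits.conf:

        # run as the non-root user
        ulimit -u

        # /etc/security/limits.conf (example user and values)
        appuser soft nproc 4096
        appuser hard nproc 8192

    A fresh login is needed before the new limits take effect.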

    Read the article

  • Schedule Task run Without Being Logged in

    - by Webs
    I have seen similar threads here and on the net, but I think my question is slightly different from what I can find... I have a script that runs perfectly when logged in with a service account I created specifically to run this script. But when I schedule it to run, it hangs when trying to launch IE (the first part of my script). Without being logged in with that account, I can watch the processes in Task Manager and see them running, but the script never finishes. I want to be able to run this script without needing to be logged in at all, or even with the account locked at all times. Is this possible? Or do I have to have the user account logged in? Any help would be greatly appreciated!
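
    A sketch of the scheduling side (task name, path and account are made up for illustration): creating the task with stored credentials lets it run whether the account is logged on or not, e.g.

        schtasks /Create /TN "RunMyScript" /TR "C:\Scripts\run.cmd" /SC DAILY /ST 02:00 /RU MYDOMAIN\svc_script /RP *

    where the * prompts for the account's password. The caveat is that a script which automates a visible Internet Explorer window generally needs an interactive desktop, so an IE/COM-based script may still hang in a non-interactive session even when the task itself starts correctly.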

    Read the article

  • I can't run uwsgi as normal user

    - by atomAltera
    I want to run uwsgi server as www user, but if I write: uwsgi --socket $SOCKET --chmod-socket 666 --pidfile $PIDFILE --daemonize $LOGFILE --chdir $CHDIR --pp $PYTHONPATH --module main --post-buffering 8192 --workers 1 --threads 10 --uid www --gid www A socket creation error occurs: Log: 1 *** Starting uWSGI 1.4.1 (64bit) on [Mon Dec 10 22:15:23 2012] *** 2 compiled with version: 4.4.5 on 17 November 2012 23:31:14 3 os: Linux-2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 4 nodename: autoblog 5 machine: x86_64 6 clock source: unix 7 pcre jit disabled 8 detected number of CPU cores: 2 9 current working directory: / 10 writing pidfile to /tmp/uwsgi_mysite.pid 11 detected binary path: /usr/local/bin/uwsgi 12 setgid() to 1002 13 set additional group 1004 (files) 14 setuid() to 1002 15 *** WARNING: you are running uWSGI without its master process manager *** 16 your memory page size is 4096 bytes 17 detected max file descriptor number: 1024 18 lock engine: pthread robust mutexes 19 unlink(): Operation not permitted [core/socket.c line 109] 20 bind(): Address already in use [core/socket.c line 141]
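
    One reading of that log (the socket path below is my assumption, since the post only shows $SOCKET): unlink() and bind() fail right after setgid()/setuid(), which usually means either a stale socket file left behind by an earlier run as root that the www user cannot remove, or another uwsgi instance still bound to it. Clearing or re-owning the file before starting, e.g.

        sudo rm -f /tmp/uwsgi_mysite.sock          # hypothetical path
        # or keep the file but hand it to www:
        sudo chown www:www /tmp/uwsgi_mysite.sock

    and then launching uwsgi as www (or adding --chown-socket www:www) should get past the 'Address already in use' error.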

    Read the article

  • dansguardian error: filterports must match number of filterips (pfsense)

    - by Bulki
    Hi I'm setting up pfsense with squid3 and dansguardian packages. When I try to start the dansguardian service however, I get the following errors: May 27 22:17:37 php: /pkg_edit.php: The command '/usr/local/etc/rc.d/dansguardian.sh start' returned exit code '1', the output was 'kern.ipc.somaxconn: 16384 -> 16384 kern.maxfiles: 131072 -> 131072 kern.maxfilesperproc: 104856 -> 104856 kern.threads.max_threads_per_proc: 4096 -> 4096 Starting dansguardian. filterports (2) must match number of filterips (1) Error parsing the dansguardian.conf file or other DansGuardian configuration files /usr/local/etc/rc.d/dansguardian.sh: WARNING: failed to start dansguardian' May 27 22:17:37 root: /usr/local/etc/rc.d/dansguardian.sh: WARNING: failed to start dansguardian May 27 22:17:37 dansguardian[52944]: Error parsing the dansguardian.conf file or other DansGuardian configuration files May 27 22:17:37 dansguardian[52944]: filterports must match number of filterips What does "filterports must match number of filterips" mean? Any thoughts on the matter?
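
    For what it's worth, the message reads like a straight count mismatch in the generated dansguardian.conf: two filterports values but only one filterip. A sketch of the matching form (addresses and ports are placeholders, not taken from this setup):

        # dansguardian.conf - one filterip line per filterports line
        filterip = 127.0.0.1
        filterip = 127.0.0.1
        filterports = 8080
        filterports = 8081

    or, going the other way, trim filterports down to a single value so both counts are one.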

    Read the article

  • Macports irssi & perl5 installation issues

    - by Dmitri DB
    Long time reader, first time poster. Big, appreciative thanks for everyone's collective questioning and answering here and at stackoverflow, it's helped me quite a lot over the time I've been learning answers through these sites! Apologies in advance if I didn't search hard enough on posts already up on this site to find out what I could do about this issue, but I thought I'd just reach out for the sake of trying at least once. I've experienced this issue while starting up my macports-installed version of irssi: 13:25 -!- Irssi: Error in script dispatch: 13:25 Can't locate lib.pm in @INC (@INC contains: /opt/local/lib/perl5/site_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.4 /opt/local/lib/perl5/vendor_perl/5.12.4/darwin-multi-2level /opt/local/lib/perl5/vendor_perl/5.12.4 /opt/local/lib/perl5/5.12.4/darwin-multi-2level /opt/local/lib/perl5/5.12.4 /opt/local/lib/perl5/site_perl/5.12.3/darwin-multi-2level /opt/local/lib/perl5/site_perl/5.12.3 /opt/local/lib/perl5/site_perl /opt/local/lib/perl5/vendor_perl .) at (eval 18) line 1. 13:25 BEGIN failed--compilation aborted at (eval 18) line 1. 13:25 Huh, strange. I looked into it a bit: [email protected] /opt/local/lib/perl5 ?- find . -name "lib.pm" -ls 14673887 16 -r--r--r-- 1 root admin 6853 25 Jun 23:39 ./5.12.4/darwin-thread-multi- 2level/lib.pm [email protected] /opt/local/lib/perl5 ?- l 5.12.4/darwin-thread-multi-2level total 1864 drwxr-xr-x 55 root admin 1870 28 Jun 19:28 . drwxr-xr-x 158 root admin 5372 28 Jun 19:28 .. -rw-r--r-- 1 root admin 177814 25 Jun 23:39 .packlist drwxr-xr-x 6 root admin 204 28 Jun 19:28 B -r--r--r-- 1 root admin 25714 25 Jun 23:39 B.pm drwxr-xr-x 64 root admin 2176 28 Jun 19:28 CORE drwxr-xr-x 3 root admin 102 28 Jun 19:28 Compress -r--r--r-- 1 root admin 3000 25 Jun 23:39 Config.pm -r--r--r-- 1 root admin 228094 25 Jun 23:39 Config.pod -r--r--r-- 1 root admin 409 25 Jun 23:39 Config_git.pl -r--r--r-- 1 root admin 38759 25 Jun 23:39 Config_heavy.pl -r--r--r-- 1 root admin 21174 25 Jun 23:39 Cwd.pm -r--r--r-- 1 root admin 63535 25 Jun 23:39 DB_File.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 Data drwxr-xr-x 5 root admin 170 28 Jun 19:28 Devel drwxr-xr-x 4 root admin 136 28 Jun 19:28 Digest -r--r--r-- 1 root admin 25185 25 Jun 23:39 DynaLoader.pm drwxr-xr-x 22 root admin 748 28 Jun 19:28 Encode -r--r--r-- 1 root admin 29731 25 Jun 23:39 Encode.pm -r--r--r-- 1 root admin 6736 25 Jun 23:39 Errno.pm -r--r--r-- 1 root admin 5445 25 Jun 23:39 Fcntl.pm drwxr-xr-x 5 root admin 170 28 Jun 19:28 File drwxr-xr-x 3 root admin 102 28 Jun 19:28 Filter -r--r--r-- 1 root admin 1819 25 Jun 23:39 GDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Hash drwxr-xr-x 3 root admin 102 28 Jun 19:28 I18N drwxr-xr-x 11 root admin 374 28 Jun 19:28 IO -r--r--r-- 1 root admin 1404 25 Jun 23:39 IO.pm drwxr-xr-x 6 root admin 204 28 Jun 19:28 IPC drwxr-xr-x 4 root admin 136 28 Jun 19:28 List drwxr-xr-x 4 root admin 136 28 Jun 19:28 MIME drwxr-xr-x 3 root admin 102 28 Jun 19:28 Math -r--r--r-- 1 root admin 2519 25 Jun 23:39 NDBM_File.pm -r--r--r-- 1 root admin 4208 25 Jun 23:39 O.pm -r--r--r-- 1 root admin 15563 25 Jun 23:39 Opcode.pm -r--r--r-- 1 root admin 21011 25 Jun 23:39 POSIX.pm -r--r--r-- 1 root admin 58962 25 Jun 23:39 POSIX.pod drwxr-xr-x 5 root admin 170 28 Jun 19:28 PerlIO -r--r--r-- 1 root admin 2515 25 Jun 23:39 SDBM_File.pm drwxr-xr-x 4 root admin 136 28 Jun 19:28 Scalar -r--r--r-- 1 root admin 10837 25 Jun 23:39 Socket.pm -r--r--r-- 1 root admin 41003 25 Jun 23:39 Storable.pm drwxr-xr-x 4 root admin 
136 28 Jun 19:28 Sys drwxr-xr-x 3 root admin 102 28 Jun 19:28 Text drwxr-xr-x 5 root admin 170 28 Jun 19:28 Time drwxr-xr-x 3 root admin 102 28 Jun 19:28 Unicode -r--r--r-- 1 root admin 14462 25 Jun 23:39 attributes.pm drwxr-xr-x 38 root admin 1292 28 Jun 19:28 auto -r--r--r-- 1 root admin 19892 25 Jun 23:39 encoding.pm -r--r--r-- 1 root admin 6853 25 Jun 23:39 lib.pm -r--r--r-- 1 root admin 11044 25 Jun 23:39 mro.pm -r--r--r-- 1 root admin 997 25 Jun 23:39 ops.pm -r--r--r-- 1 root admin 13945 25 Jun 23:39 re.pm drwxr-xr-x 3 root admin 102 28 Jun 19:28 threads -r--r--r-- 1 root admin 33283 25 Jun 23:39 threads.pm So, it sort of seems to me that the permissions which perl5 got installed with for these modules has gotten mixed up somehow? I'm not really a perl user beyond enjoying it for massive directory-recursive find/replace operations within text files, so I haven't much of an idea what the permissions here are supposed to look like, and I'm not really sure how to go about determining how macports has gone and installed perl this way when it's otherwise worked without failure for years now. Does anyone have any recommendations for the sanest path towards rectifying this issue? Also, is there any interesting reason as to why the macports default for the perl5 port installs 5.12.4, and not 5.16.0, which has to be explicitly installed via the perl5.16 port? Thanks again!
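
    A couple of non-destructive checks I would sketch before blaming permissions: the listing itself looks normal for MacPorts (root-owned, world-readable), so the likelier mismatch is between the @INC paths irssi's embedded perl searches and the directory the modules actually live in. Comparing them, e.g.

        /opt/local/bin/perl -V:archname
        /opt/local/bin/perl -le 'print for @INC'
        port installed perl5 perl5.12 irssi

    and, if they disagree, rebuilding irssi against the current MacPorts perl (sudo port upgrade --force irssi) is the usual MacPorts remedy for this class of "Can't locate ... in @INC" error. On the version question, perl5 is a wrapper port that tracks a default version (5.12 at the time); the newer interpreters live in their own perl5.x ports.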

    Read the article

  • Slower than expected 802.11n wireless network speeds

    - by Ian
    I have two ASUS laptops running Windows 7 connected wirelessly via 802.11n at 150 Mbit, as reported by Task Manager. The router is Netgear WNDR3700. When testing the wireless connection speed using iperf, I'm not getting nearly 150 Mbit: C:\>iperf -c 10.0.0.123 -t 30 ------------------------------------------------------------ Client connecting to 10.0.0.123, TCP port 5001 TCP window size: 8.00 KByte (default) ------------------------------------------------------------ [148] local 10.0.0.116 port 53819 connected with 10.0.0.123 port 5001 [ ID] Interval Transfer Bandwidth [148] 0.0-30.0 sec 41.2 MBytes 11.5 Mbits/sec That's a typical result. Running parallel client threads does not increase the overall total speed. Why would I only be getting 11.5 Mbit on a 150 Mbit connection?
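
    One variable worth isolating (my sketch; the address follows the question): iperf's default 8 KB TCP window can itself cap throughput, so it is worth retesting with a larger window before blaming the radio, e.g.

        iperf -c 10.0.0.123 -t 30 -w 256k

    Beyond that, 150 Mbit is the 802.11n PHY rate rather than goodput; a single-stream 150 Mbit link typically tops out around 70-90 Mbit of real TCP throughput, so 11.5 Mbit still points at something else (2.4 GHz interference, the router dropping back to 20 MHz channels, or a client stuck at a lower MCS rate).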

    Read the article

  • MySQL-5.5.10 - Lost connection to MySQL server during query (Both Web Clients and MySQL Slaves)

    - by kwiksand
    We've just upgraded our existing MySQL 5.1 DB servers to newer (much better) hardware with MySQL 5.5, and things have been going mostly smoothly for almost 6 weeks. Just in the last few days, I've noticed a few errors, such as: From a MySQL slave: [ERROR] Error reading packet from server: Lost connection to MySQL server during query ( server_errno=2013) Or from Apache/other: Lost connection to MySQL server at 'reading initial communication packet', system error: 110 At one point this evening, many webnodes reported this error for a three-minute period (many such reports, as this was in a busy period). However, the issues don't appear to correspond with any times of extreme load. For all intents and purposes, the connection/thread load on MySQL is at a normal rate (between about 10 and 40 connected threads), and web load has been a LOT higher at times over the last few weeks. Could there be other reasons for these connection errors that I'm not seeing?
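
    A few server-side knobs that commonly show up with these two messages (a checklist sketch; the values are examples, not recommendations for this setup):

        # my.cnf, [mysqld] section
        connect_timeout    = 20     # 'reading initial communication packet ... error: 110' often means a too-short connect window or slow reverse DNS
        net_read_timeout   = 120
        max_allowed_packet = 64M    # a too-small value can break large replication events or result sets (error 2013)
        skip-name-resolve

    Comparing Aborted_connects and Aborted_clients in SHOW GLOBAL STATUS around the incident window, and checking the network path between the 5.5 masters and the slaves, helps separate a timeout from a genuine network blip.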

    Read the article

  • Is that why the website is called serverfault <g>

    - by bmullan
    Couldn't help it... just trying to be funny... when trying to save my profile I got the message from your website see below. Or is that an "initiation" ha! Anyway... only read a couple threads but good so far and I hope to read more. Brian = = = = = = = = = = = = = = = = = = = = = = = = = = = We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site. It's not you, it's us. This is our fault. Detailed information about this error has automatically been recorded and we have been notified. Yes, we do look at every error. We even try to fix some of them. It's not strictly necessary, but if you'd like to give us additional information about this error, do so at our feedback site, meta.stackoverflow.com.

    Read the article

  • Single django instance with subdomains for each app in the django project

    - by jwesonga
    I have a django project (django+apache+mod_wsgi+nginx) with multiple apps, I'd like to map each app as a subdomain: project/ app1 (domain.com) app2 (sub1.domain.com) app3 (sub3.domain.com) I have a single .wsgi script serving the project, which is stored in a folder /apache. Below is my vhost file. I'm using a single vhost file instead of separate ones for each sub-domain: <VirtualHost *:8080> ServerAdmin [email protected] ServerName www.domain.com ServerAlias domain.com DocumentRoot /home/path/to/app/ Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media <Directory /home/path/to/wsgi/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/path/to/logs/apache_error.log CustomLog /home/path/to/logs/apache_access.log combined WSGIDaemonProcess domain.com user=www-data group=www-data threads=25 WSGIProcessGroup domain.com WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi </VirtualHost> <VirtualHost *:8081> ServerAdmin [email protected] ServerName sub1.domain.com ServerAlias sub1.domain.com DocumentRoot /home/path/to/app Alias /admin_media/ /usr/local/lib/python2.6/dist-packages/django/contrib/admin/media <Directory /home/path/to/wsgi/apache/> Order deny,allow Allow from all </Directory> LogLevel warn ErrorLog /home/path/to/logs/apache_error.log CustomLog /home/path/to/logs/apache_access.log combined WSGIDaemonProcess sub1.domain.com user=www-data group=www-data threads=25 WSGIProcessGroup sub1.domain.com WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi </VirtualHost> My Nginx configuration for the domain.com: server { listen 80; server_name domain.com; access_log off; error_log off; # proxy to Apache 2 and mod_wsgi location / { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } Configuration for the sub.domain.com: server { listen 80; server_name sub.domain.com; access_log off; error_log off; # proxy to Apache 2 and mod_wsgi location / { proxy_pass http://127.0.0.1:8081/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } } This set up doesn't seem to work, everything seems to point to the main domain. I've tried http://effbot.org/zone/django-multihost.htm which kind of worked but seems to have issues with loading my css,images,js files.
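
    One structural point that may explain why everything lands on the main domain (my reading; sub1.wsgi and its settings are hypothetical): both VirtualHosts use WSGIScriptAlias / /home/path/to/apache/kcdf.wsgi, so ports 8080 and 8081 serve exactly the same Django project, settings and ROOT_URLCONF, and nginx faithfully proxies that. Splitting by subdomain usually means one WSGI entry point (or at least one settings module / urlconf) per vhost, roughly:

        # in the sub1.domain.com VirtualHost
        WSGIDaemonProcess sub1.domain.com user=www-data group=www-data threads=25
        WSGIProcessGroup sub1.domain.com
        WSGIScriptAlias / /home/path/to/apache/sub1.wsgi

    where sub1.wsgi is a copy of kcdf.wsgi that points DJANGO_SETTINGS_MODULE (or the urlconf) at app2 only. The broken css/js seen with the django-multihost middleware approach is usually just a media Alias or MEDIA_URL that isn't defined in the extra vhosts.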

    Read the article

  • Linux: find thin server running on port 80 and kill it

    - by Andrew
    On my Linux server I ran: sudo thin start -p 80 -d Now I'd like to restart the server. The trouble is, I can't seem to find the old process in order to kill it. I tried: netstat -anp But what I see on port 80 is this: Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN - So, it didn't give me a PID to kill... I tried pgrep -l thin but that gave me nothing. Meanwhile pgrep -l ruby gives me about 6 processes running. I don't really understand why multiple ruby threads would be running, or which one I need to kill... How do I kill / restart the thin daemon?
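
    A hunch plus a sketch (standard tools, nothing specific to this box): netstat only fills in the PID/Program name column for processes you own, so run without root it prints '-' for a daemon started via sudo. Looking the socket up as root should identify it:

        sudo netstat -tlnp | grep :80
        sudo lsof -i :80
        sudo fuser -k 80/tcp        # or skip the lookup and kill whatever owns port 80

    The ruby processes pgrep shows are separate processes (likely thin workers or other ruby daemons), not threads. thin also has a stop command (sudo thin stop, run from the directory it was started in) that works off its pid file, though I am not certain of the exact flags it needs for this setup.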

    Read the article

  • Windows 7 File Associations .mov

    - by Paul Mung
    I created a new Windows 7 SP1 base image. Everything is all fine and dandy with that. So I am now installing standard applications. I would like QuickTime to manage .mov files. The only problem is WMP (Windows Media Player) won't give up the association to .mov files. It's driving me crazy... I've been reading threads on how to fix file associations. I would like to do it via the registry, PowerShell or the command line. I cannot use GPO. I've tried the following: assoc .mov=QuickTime.mov ftype QuickTime.mov="%ProgramFiles(x86)\QuickTime\QuickTimePlayer.exe" Reg add HKCU\Software\Microsoft\windows\CurrentVersion\Explorere\FileExts\.mov\UserChoice" /v Progid /d QuickTime.mov /f Reg add HKCU\Software\Microsoft\windows\CurrentVersion\Explorere\FileExts\.mov\OpenWithList" /v a /d QuickTimePlayer.exe /f Reg add "HKCU\Software\Microsoft\windows\CurrentVersion\Explorere\FileExts\.mov\OpenWithList" /v b /d wmplayer.exe /f Reg add HKCU\Software\Microsoft\windows\CurrentVersion\Explorere\FileExts\.mov\OpenWithList" /v MRUList /d ab /f Reg add HKCU\Software\Microsoft\windows\CurrentVersion\Explorere\FileExts\.mov\OpenWithProgids" /v Quicktime.mov /t REG_NONE /d 0000 /f Reg add HKCU\Software\Microsoft\windows\CurrentVersion\Explorere\FileExts\.mov\OpenWithProgids" /v WMP11.AssocFile.MOV /t REG_NONE /d 0000 /f
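
    Two observations, offered as guesses rather than a verified fix: the pasted commands spell the key as ...CurrentVersion\Explorere\FileExts..., while the real key is ...CurrentVersion\Explorer\FileExts..., so if that is not just a transcription slip the writes are landing in the wrong place; and on Windows 7 the per-user UserChoice value overrides assoc/ftype, so deleting it lets the QuickTime ProgID take effect:

        reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.mov\UserChoice" /f
        assoc .mov=QuickTime.mov
        ftype QuickTime.mov="%ProgramFiles(x86)%\QuickTime\QuickTimePlayer.exe" "%1"

    Run from an elevated prompt; Explorer may need a restart or a log-off before it picks the change up.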

    Read the article

  • Changing memory allocator to Jemalloc Centos 6

    - by Brian Lovett
    After reading this blog post about the impact of memory allocators like jemalloc on highly threaded applications, I wanted to test things on a larger scale on some of our cluster of servers. We run sphinx and apache using threads, on 24-core machines. Installing jemalloc was simple enough. We are running CentOS 6, so yum install jemalloc jemalloc-devel did the trick. My question is, how do we change everything on the system over to using jemalloc instead of the default malloc built into CentOS? Research pointed me at this as a potential option: LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1 Would this be sufficient to get everything using jemalloc?
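
    A sketch of the usual options (the library path follows the question; the service file is standard CentOS but still worth verifying): LD_PRELOAD set in a shell only affects processes started from that shell, so for daemons it is normally exported in the service's environment file, e.g.

        # /etc/sysconfig/httpd (sourced by the init script on CentOS 6)
        export LD_PRELOAD=/usr/lib64/libjemalloc.so.1

    or, to switch every process on the box, the library can be listed in /etc/ld.so.preload - a much heavier hammer that deserves testing on one machine first. Whether it took effect is easy to confirm with grep jemalloc /proc/<pid>/maps against a running worker.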

    Read the article

  • Cloning Win 7 installation from MBR to GPT drive and making it bootable

    - by Nelluk
    I've seen threads on similar topics - such as this one - but the answers never seem to explain how to make it bootable. I have Win 7 64-bit on a PC, installed on a 2TB MBR volume. The motherboard is UEFI compatible. I just installed a secondary internal 3TB drive which will be partitioned as GPT. Is there a relatively easy way to clone my installation over to the new drive and have that drive be bootable? I have used EaseUS Partition Master to clone the C volume to the D volume, but that would not boot, and I assume the issue is that one is MBR and one is GPT. Is there a process to do this?
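
    A constraint worth keeping in mind (general Windows behaviour, not advice specific to this machine): Windows 7 x64 only boots from a GPT disk in UEFI mode, which needs an EFI System Partition and UEFI boot files - things a straight volume-to-volume clone of a BIOS/MBR install does not create. A sketch of preparing the new disk (clean wipes it, so only on the empty 3TB drive):

        diskpart
        list disk
        select disk 1
        clean
        convert gpt

    where disk 1 is assumed to be the new 3TB drive. After cloning the Windows volume across, the firmware has to boot in UEFI mode and the EFI boot files have to be rebuilt (bcdboot from the recovery environment is the usual tool), otherwise the GPT copy will not be bootable.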

    Read the article

  • Getting "-bash: fork: Resource temporarily unavailable" in OSX

    - by Joseph Tura
    I seem to run into problems with the max. number of processes every so often. Anyone know what is best practice for fixing this? Running OSX 10.6 on a MacBook Pro i7. ulimit -a returns these values: core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited file size (blocks, -f) unlimited max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 256 pipe size (512 bytes, -p) 1 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) 266 virtual memory (kbytes, -v) unlimited When the error occurred I checked, and there were 102 running tasks and 523 threads.
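
    A sketch of the usual adjustment (the numbers are arbitrary examples): the ulimit output shows max user processes capped at 266, and with 102 tasks already running it is plausible that a burst of short-lived forks hits that ceiling. On OS X 10.6 the limit can be raised per shell with ulimit, or for the whole system with launchctl:

        ulimit -u 1000                            # this shell only
        sudo launchctl limit maxproc 1000 2000    # soft and hard limits, until reboot

    If it keeps recurring, it is worth watching what is spawning the processes (ps aux | wc -l over time) rather than only raising the cap.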

    Read the article

  • Errors related to python version added to error log when I start apache2

    - by Jean-Nicolas Boulay Desjardins
    When I start Apache I am getting these errors: [Tue Jun 14 02:28:58 2011] [error] python_init: Python version mismatch, expected '2.6.5', found '2.6.6'. [Tue Jun 14 02:28:58 2011] [error] python_init: Python executable found '/usr/bin/python'. [Tue Jun 14 02:28:58 2011] [error] python_init: Python path being used '/usr/lib/python2.6/:/usr/lib/python2.6/plat-linux2:/usr/lib/python2.6/lib-tk:/usr/lib/python2.6/lib-old:/usr/lib/python2.6/lib-dynload'. [Tue Jun 14 02:28:58 2011] [notice] mod_python: Creating 8 session mutexes based on 150 max processes and 0 max threads. [Tue Jun 14 02:28:58 2011] [notice] mod_python: using mutex_directory /tmp [Tue Jun 14 02:28:58 2011] [notice] Apache/2.2.16 (Ubuntu) PHP/5.3.3-1ubuntu9.5 with Suhosin-Patch mod_python/3.3.1 Python/2.6.6 configured -- resuming normal operations I am using Ubuntu Server... Thanks in advance for any help.
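
    A bit of framing, plus a guess at the stock package name: only the first line is really an error, and it says mod_python was compiled against Python 2.6.5 while the interpreter now on disk is 2.6.6 - the usual aftermath of an Ubuntu point release. mod_python generally still loads and runs in that state, so the message is often ignorable; if it needs to go away, reinstalling or rebuilding the module against the current Python is the fix:

        sudo apt-get install --reinstall libapache2-mod-python
        sudo service apache2 restart

    which only helps if the archive's package has itself been rebuilt against 2.6.6.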

    Read the article

  • Quartz Thread Execution Parallel or Sequential?

    - by vikas
    We have a Quartz-based scheduler application which runs about 1000 jobs per minute, evenly distributed across the seconds of each minute, i.e. about 16-17 jobs per second. Ideally, these 16-17 jobs should fire at the same time; however, the first statement of the job's execute method, which simply logs the time of execution, is being called very late. e.g. let us assume we have 1000 jobs scheduled per minute from 05:00 to 05:04. Ideally the job which is scheduled at 05:03:50 should have logged the first statement of the execute method at 05:03:50; however, it is doing so at about 05:06:38. I have tracked down the time taken by the scheduled job, which comes to around 15-20 milliseconds. This scheduled job is fast enough because we just send a message on an ActiveMQ queue. We have specified the number of Quartz threads to be 100 and even tried increasing it to 200 and more, but with no gain. One more thing we noticed is that the logs from the scheduler become sequential after the first minute, i.e. [Quartz_Worker_28] <Some log statement> .. .. [Quartz_Worker_29] <Some log statement> .. .. [Quartz_Worker_30] <Some log statement> .. .. suggesting that after some time Quartz is running threads almost sequentially. Maybe this is happening due to the time taken in notifying job completion to the persistence store (which is a separate Postgres database in this case) and/or context switching. What can be the reason behind this strange behavior? EDIT: More detailed log [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] fired job [<job_name>] scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000 [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute begin--------- ScheduledLocateJob with key: <job_name> started at Fri Jul 06 10:08:37 EDT 2012 [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement> [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement> [06/07/12 10:08:37:192][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob <some log statement> [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] <my_package>.scheduler.quartz.ScheduledLocateJob - execute end--------- ScheduledLocateJob with key: <job_name> ended at Fri Jul 06 10:08:37 EDT 2012 [06/07/12 10:08:37:220][QuartzScheduler_Worker-34][INFO] org.quartz.plugins.history.LoggingTriggerHistoryPlugin - Trigger [<trigger_name>] completed firing job [<job_name>] with resulting trigger instruction code: DO NOTHING. Next scheduled at: 06-07-2012 10:34:53.000 I am suspicious of this section of the above log: scheduled at: 06-07-2012 10:08:33.458, next scheduled at: 06-07-2012 10:34:53.000 because this job was scheduled for 10:04:53, but it fired at 10:08:33 and Quartz still didn't consider it a misfire. Shouldn't it be a misfire?
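
    A configuration angle that may be relevant (the property names are standard quartz.properties keys; the values are guesses, not taken from this setup):

        # quartz.properties
        org.quartz.threadPool.threadCount = 100
        # a trigger firing later than this many ms past its time is treated as a misfire
        org.quartz.jobStore.misfireThreshold = 60000

    With the default 60-second threshold, a fire that is 3-4 minutes late would normally be routed through the trigger's misfire policy, so both this value and the misfire instruction set on the triggers are worth checking. A JDBC job store also adds database round-trips to every trigger acquisition and completion, which can serialise the workers in exactly the way the sequential worker logs suggest.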

    Read the article

  • Windows 7 Stopped Using hosts file for DNS Resolution

    - by AJ
    I am running Windows 7 Home Premium 64-bit. Starting today, I noticed that DNS resolution is not reading my %SYSTEMROOT%\System32\drivers\etc\hosts file. I say this because I added two new entries to the file and when I run 'nslookup' on the command line, they don't resolve. Further, just trying to resolve 'localhost' results in my primary DNS server being queried. I've read several threads that suggest that the file might have been corrupted and to move it aside and create a new one. I've done that, and no improvement. Is there some sort of registry key that controls the sequence of resources used for DNS resolution (similar to nsswitch.conf on UNIX)? What else could be causing this? Thanks in advance.
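
    A few checks I would sketch (none of them from the thread): nslookup always queries the configured DNS server directly and never consults the hosts file, so it is not a valid test here - ping or a browser is. Beyond that, the usual culprits are a hosts file saved as hosts.txt, saved in a Unicode encoding rather than ANSI, or a stale resolver cache:

        ipconfig /flushdns
        ping somehostfromhostsfile
        dir %SystemRoot%\System32\drivers\etc

    where somehostfromhostsfile stands in for one of the new entries, and the dir listing is there to catch a hidden .txt extension or an unexpected file name.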

    Read the article

  • httpd memory could not be written on winxp

    - by Shawn
    I have an Apache server on a WinXP box; occasionally I get an "httpd error, memory could not be written" error. Here is what I found in the Apache error log: `[Sat Sep 12 10:58:34 2009] [error] [client 113.68.84.79] Invalid URI in request ;\xece\r\xd5m\xed{\xbcf\xbf\xffq\bZNB\xf0a\xf9\x13\xf3[\x06Y\x02G\xca\xc5\xf3\x9ft\x89b\xed\xb5m\x9f\x1c\xa6\x03\x10\xee\xe9G\xb5\xe0glLf\xd4eFT\x8f.{Ysl\x89\x05\x18\x0f\x0fp\xdd\xaf\x11G\xbe\xbf\x96/Pr\x9e\xf4\x89\xf2\xd4^mA\x13y2\xe3\x95\xaeD\x02\xa7*G\xe4\x1d\x07r^\xaf_J\xf7\xbc\x90\x17\xda\x90\x17\xec\xd4\xe8\xe4\xfcU\x04\xbc2V\xe1\x170\xeb Error in my_thread_global_end(): 66 threads didn't exit [Sat Sep 12 11:08:43 2009] [notice] Parent: child process exited with status 3221225477 -- Restarting. [Sat Sep 12 11:08:51 2009] [notice] Apache/2.2.4 (Win32) PHP/5.2.3 configured -- resuming normal operations` Can anybody tell me what this means and where the problem is? Thanks.

    Read the article

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs. However, what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That logically doesn't make sense, since I/O is primarily a function of controllers and disks, not CPUs. But then no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto 1 logical core (2 threads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?

    Read the article

  • Java Synchronized List Deadlock

    - by portoalet
    From Effective Java 2nd edition item 67 page 266-268: The background thread calls s.removeObserver, which attempts to lock observers, but it can’t acquire the lock, because the main thread already has the lock. All the while, the main thread is waiting for the background thread to finish removing the observer, which explains the deadlock. I am trying to find out which threads deadlock in the main method by using ThreadMXBean (http://stackoverflow.com/questions/1102359/programmatic-deadlock-detection-in-java) , but why does it not return the deadlocked threads? I used a new Thread to run the ThreadMXBean detection. public class ObservableSet<E> extends ForwardingSet<E> { public ObservableSet(Set<E> set) { super(set); } private final List<SetObserver<E>> observers = new ArrayList<SetObserver<E>>(); public void addObserver(SetObserver<E> observer) { synchronized(observers) { observers.add(observer); } } public boolean removeObserver(SetObserver<E> observer) { synchronized(observers) { return observers.remove(observer); } } private void notifyElementAdded(E element) { synchronized(observers) { for (SetObserver<E> observer : observers) observer.added(this, element); } } @Override public boolean add(E element) { boolean added = super.add(element); if (added) notifyElementAdded(element); return added; } @Override public boolean addAll(Collection<? extends E> c) { boolean result = false; for (E element : c) result|=add(element); //callsnotifyElementAdded return result; } public static void main(String[] args) { ObservableSet<Integer> set = new ObservableSet<Integer>(new HashSet<Integer>()); final ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean(); Thread t = new Thread(new Runnable() { @Override public void run() { while( true ) { long [] threadIds = threadMxBean.findDeadlockedThreads(); if( threadIds != null) { ThreadInfo[] infos = threadMxBean.getThreadInfo(threadIds); for( ThreadInfo threadInfo : infos) { StackTraceElement[] stacks = threadInfo.getStackTrace(); for( StackTraceElement stack : stacks ) { System.out.println(stack.toString()); } } } try { System.out.println("Sleeping.."); TimeUnit.MILLISECONDS.sleep(1000); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } }); t.start(); set.addObserver(new SetObserver<Integer>() { public void added(ObservableSet<Integer> s, Integer e) { ExecutorService executor = Executors.newSingleThreadExecutor(); final SetObserver<Integer> observer = this; try { executor.submit(new Runnable() { public void run() { s.removeObserver(observer); } }).get(); } catch (ExecutionException ex) { throw new AssertionError(ex.getCause()); } catch (InterruptedException ex) { throw new AssertionError(ex.getCause()); } finally { executor.shutdown(); } } }); for (int i = 0; i < 100; i++) set.add(i); } } public interface SetObserver<E> { // Invoked when an element is added to the observable set void added(ObservableSet<E> set, E element); } // ForwardingSet<E> simply wraps another Set and forwards all operations to it.

    Read the article

  • Apache keeps resetting while testing on localhost...

    - by Scott
    Hello everyone. I'm getting errors while testing web pages on localhost. I'm running Windows 7 64-bit. I'm not using Wamp or Xampp. This is what the error.log tells me (I've highlighted the errors in question): [Sat Mar 06 05:10:55 2010] [notice] Apache/2.2.14 (Win32) PHP/5.2.13 configured -- resuming normal operations [Sat Mar 06 05:10:55 2010] [notice] Server built: Sep 28 2009 22:41:08 [Sat Mar 06 05:10:55 2010] [notice] Parent: Created child process 6588 httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Child process is running [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Acquired the start mutex. [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting 1000 worker threads. [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting thread to listen on port 80. Any input would be greatly appreciated. Thanks.
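
    For context (a standard remedy, though the config path is assumed to be the default httpd.conf): the two repeated lines are Apache's warning that it cannot work out a fully qualified ServerName and is falling back to the machine's IP; it is cosmetic, not a crash. Setting it explicitly silences it:

        ServerName localhost:80

    The rest of the log is a normal startup (parent creates the child, 1000 worker threads, listener on port 80), so whatever "keeps resetting" means in practice is probably not captured in this snippet.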

    Read the article

  • Windows shuts down unexpectedly when waking up from sleep

    - by Kush
    I've been facing this issue since yesterday: whenever I put the computer into sleep mode and then wake it up, it takes me to the boot menu with choices like start Windows normally, safe mode, etc. In short, it shuts down unexpectedly while waking up. My laptop is dual-booted with up-to-date Windows 7 SP1 (32-bit) and Ubuntu 10.10. This problem does not happen with Ubuntu. I googled the issue and went through this, this and this page, but none of the threads helped solve it. I've found hints that it has something to do with device drivers. What can be done to resolve the issue? Can the SFC utility solve it if it is due to corrupt system files?
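
    A triage sketch (generic Windows tooling, nothing specific to this machine): if Windows is actually crashing on resume rather than shutting down cleanly, there will be a bugcheck entry in the System event log and usually a minidump naming a driver. The built-in power diagnostics are a reasonable next step:

        powercfg -energy
        powercfg -lastwake
        dir C:\Windows\Minidump

    powercfg -energy writes an HTML report of driver and device power problems, -lastwake shows what woke the machine, and the Minidump folder will hold crash dumps if a bugcheck is occurring. SFC can repair corrupt system files, but it does not touch third-party drivers, which are the more common cause of resume crashes.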

    Read the article
