Search Results

Search found 11380 results on 456 pages for 'cpu speed'.

  • count on LINQ union

    - by brechtvhb
    I have this LINQ statement: List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID); query = (from doc in _db.Repository<Document>() join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID where domains.Contains(uug.UserGroup) select doc) .Union(from doc in _db.Repository<Document>() join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID where domains.Contains(uug.UserGroup) select doc); Running this statement doesn't cause any problems. But when I want to count the result set, the query suddenly runs quite slowly. totalRecords = query.Count(); The result of this query is: SELECT COUNT([t5].[DocumentID]) FROM ( SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo] FROM ( SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo] FROM [dbo].[Document] AS [t0] INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID] WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6) UNION SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo] FROM [dbo].[Document] AS [t2] INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID] WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6) ) AS [t4] ) AS [t5] Can anyone help me improve the speed of the count query? Thanks in advance!
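
    A hedged sketch of one common workaround, not taken from the original post: projecting each branch down to DocumentID before the Union lets LINQ to SQL count over a single indexed column instead of whole rows, which often simplifies the generated COUNT query. The property and repository names are copied from the snippet above; everything else is an assumption.

        // Sketch: count distinct DocumentIDs instead of whole Document rows.
        var fromIds = from doc in _db.Repository<Document>()
                      join uug in _db.Repository<User_UserGroup>()
                          on doc.DocumentFrom equals uug.User_ID
                      where domains.Contains(uug.UserGroup)
                      select doc.DocumentID;

        var toIds = from doc in _db.Repository<Document>()
                    join uug in _db.Repository<User_UserGroup>()
                        on doc.DocumentTo equals uug.User_ID
                    where domains.Contains(uug.UserGroup)
                    select doc.DocumentID;

        // UNION already removes duplicates, so this matches query.Count() above.
        totalRecords = fromIds.Union(toIds).Count();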

    Read the article

  • Banning by IP with php/mysql

    - by incrediman
    I want to be able to ban users by IP. My idea is to keep a list of IP's as rows in an BannedIPs table (the IP column would be an index). To check users' IP's against the table, I will keep a session variable called $_SESSION['IP'] for each session. If on any request, $_SESSION['IP'] doesn't match $_SERVER['REMOTE_ADDR'], I will update $_SESSION['IP'] and check the BannedIPs table to see if the IP is banned. (A flag will also be saved as a session variable specifying whether or not the user is banned) Here are the things I'm wondering: Does that sound like a good strategy with regards to speed and security (would someone be able to get around the IP ban somehow, other than changing IP's)? What's the best way to structure a mysql query that checks to see if a row exists? That is, what's the best way to query the db to see if a row with a certain IP exists (to check if it's banned)? Should I save the IP's as integers or strings? Note that... I estimate there will be between 1,000-10,000 banned IP's stored in the database. $_SERVER['REMOTE_ADDR'] is the IP from which the current request was sent.
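
    A hedged sketch of the existence check and the integer storage asked about above; the table and column names are assumptions. MySQL's INET_ATON converts a dotted-quad string to an unsigned integer, and LIMIT 1 stops the lookup at the first match.

        -- Store IPv4 addresses as unsigned integers with a primary key index.
        CREATE TABLE BannedIPs (
            ip INT UNSIGNED NOT NULL PRIMARY KEY
        );

        -- Existence check: returns one row if banned, zero rows otherwise.
        SELECT 1
        FROM BannedIPs
        WHERE ip = INET_ATON('203.0.113.7')
        LIMIT 1;

    In PHP the ban check then reduces to whether that query returns any row at all.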

    Read the article

  • Can a GeneralPath be modified?

    - by Dov
    java2d is fairly expressive, but requires constructing lots of objects. In contrast, the older API would let you call methods to draw various shapes, but lacks all the new features like transparency, stroke, etc. Java has fairly high costs associated with object creation. For speed, I would like to create a GeneralPath whose structure does not change, but go in and change the x,y points inside. path = new GeneralPath(GeneralPath.WIND_EVEN_ODD, 10); path.moveTo(x,y); path.lineTo(x2, y2); double len = Math.sqrt((x2-x)*(x2-x) + (y2-y)*(y2-y)); double dx = (x-x2) * headLen / len; double dy = (y-y2) * headLen / len; double dx2 = -dy * (headWidth/headLen); double dy2 = dx * (headWidth/headLen); path.lineTo(x2 + dx + dx2, y2 + dy + dy2); path.moveTo(x2 + dx - dx2, y2 + dy - dy2); path.lineTo(x2,y2); This one isn't even that long. Imagine a much longer sequence of commands, and only the ones on the end are changing. I just want to be able to overwrite commands, to have an iterator effectively. Does that exist?
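
    A hedged sketch of the usual workaround, assuming the goal is simply to avoid reallocating objects each frame: GeneralPath has no API for overwriting individual segments, but reset() clears the segment list while reusing the same instance, so the path can be cheaply refilled with updated coordinates.

        import java.awt.geom.GeneralPath;

        public class ArrowPath {
            // One long-lived path, refilled on every update instead of re-created.
            private final GeneralPath path =
                    new GeneralPath(GeneralPath.WIND_EVEN_ODD, 10);

            public GeneralPath rebuild(double x, double y, double x2, double y2) {
                path.reset();            // drops old segments, keeps the object
                path.moveTo(x, y);
                path.lineTo(x2, y2);
                // ... append the remaining arrow-head segments exactly as above
                return path;
            }
        }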

    Read the article

  • How test a Delphi app with Application Verifier 4.0?

    - by mamcx
    I download the Application Verifier 4.0 to test my App for check if could have problems on Vista/7. I run from Delphi 2010 debugger, and stop in CPU view. Obviously, I don't understand anything about assembler!. So, I try running directly from the windows explorer, and the App die. (In fact, I don't understand well what exactly will do App Verifier: I expect some kind of friendly message). This is what i get: 7C81A3E2 C3 ret 7C81A3E3 90 nop 7C81A3E4 8BFF mov edi,edi ntdll.DbgUserBreakPoint: 7C81A3E6 CC int 3 7C81A3E7 C3 ret 7C81A3E8 8BFF mov edi,edi 7C81A3EA 8B442404 mov eax,[esp+$04] 7C81A3EE CC int 3 7C81A3EF C20400 ret $0004 ntdll.NtCurrentTeb: 7C81A3F2 64A118000000 mov eax, fs:[$00000018] 7C81A3F8 C3 ret ntdll.RtlInitString: 7C81A3F9 57 push edi Loading: :7c81a3e2 ntdll.DbgBreakPoint + 0x1 :10003b68 ; C:\WINDOWS\system32\vrfcore.dll :00396a9d ; C:\WINDOWS\system32\vfbasics.dll :00397316 ; C:\WINDOWS\system32\vfbasics.dll :7c84bcdb ; ntdll.dll :7c8316f8 ; ntdll.dll :7c83154f ; ntdll.dll :7c82855e ntdll.KiUserExceptionDispatcher + 0xe :0040aa00 GetUILanguages + $80 :0040b298 GetResourceModuleName + $124 :0040afde LoadResourceModule + $7A :0040a134 DelayLoadResourceModule + $2C :00406c40 @StartExe + $44 :77e6f23b ; C:\WINDOWS\system32\KERNEL32.dll

    Read the article

  • Programmatically talking to a Serial Port in OS X or Linux

    - by deadprogrammer
    I have a Prolite LED sign that I'd like to set up to show scrolling search queries from Apache logs and other fun statistics. The problem is, my G5 does not have a serial port, so I have to use a USB-to-serial dongle. It shows up as /dev/cu.usbserial and /dev/tty.usbserial. When I do this everything seems to be hunky-dory: stty -f /dev/cu.usbserial speed 9600 baud; lflags: -icanon -isig -iexten -echo iflags: -icrnl -ixon -ixany -imaxbel -brkint oflags: -opost -onlcr -oxtabs cflags: cs8 -parenb Everything also works when I use the serial port tool to talk to it. If I run this piece of code while the above-mentioned serial port tool is open, everything also works. But as soon as I disconnect the tool the connection gets lost. #!/usr/bin/python import serial ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10) ser.write("<ID01><PA> \r\n") read_chars = ser.read(20) print read_chars ser.close() So the question is, what magicks do I need to perform to start talking to the serial port without the serial port tool? Is that a permissions problem? Also, what's the difference between /dev/cu.usbserial and /dev/tty.usbserial?
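
    A hedged guess at what the serial port tool may be doing for you: many USB-serial adapters only drive the modem-control lines while some program holds them open, so asserting DTR/RTS explicitly from pyserial sometimes reproduces the working state. The device path and payload are taken from the post; the rest is an assumption, not a known fix for the Prolite.

        #!/usr/bin/python
        import time
        import serial

        ser = serial.Serial('/dev/cu.usbserial', 9600, timeout=10)
        ser.setDTR(True)   # assert DTR, as a terminal program would
        ser.setRTS(True)   # assert RTS as well
        time.sleep(2)      # give the sign a moment to see the line state

        ser.write("<ID01><PA> \r\n")
        read_chars = ser.read(20)
        print read_chars
        ser.close()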

    Read the article

  • jQuery image preload/cache halting browser

    - by Nathan Loding
    In short, I have a very large photo gallery and I'm trying to cache as many of the thumbnail images as I can when the first page loads. There could be 1000+ thumbnails. First question -- is it stupid to try to preload/cache that many? Second question -- when the preload() function fires, the entire browser stops responding for a minute or two. At which point the callback fires, so the preload is complete. Is there a way to accomplish "smart preloading" that doesn't impede the user experience/speed when attempting to load this many objects? The $.preLoadImages function is taken from here: http://binarykitten.me.uk/dev/jq-plugins/107-jquery-image-preloader-plus-callbacks.html Here's how I'm implementing it: $(document).ready(function() { setTimeout("preload()", 5000); }); function preload() { var images = ['image1.jpg', ... 'image1000.jpg']; $.preLoadImages(images, function() { alert('done'); }); } 1000 images is a lot. Am I asking too much?
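
    One hedged way to keep the browser responsive is to preload in small batches and yield between them with setTimeout, instead of creating 1000+ Image objects in one synchronous pass. This is a sketch, not the plugin's API; the batch size and delay are assumptions.

        function preloadInBatches(urls, batchSize, delayMs) {
            var i = 0;
            function nextBatch() {
                var end = Math.min(i + batchSize, urls.length);
                for (; i < end; i++) {
                    new Image().src = urls[i];       // browser caches the response
                }
                if (i < urls.length) {
                    setTimeout(nextBatch, delayMs);  // yield back to the UI
                }
            }
            nextBatch();
        }

        // usage: preloadInBatches(images, 25, 100);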

    Read the article

  • Which persistent & lightweight queue messaging for cross domain (> 2) data exchange with rails integ

    - by Erwan
    Hi all, I'm looking for the right messaging system for my needs. Can you help me? For now, there won't be a huge amount of data to process, but I don't want to be limited later... The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast. When some data changes on a server, all servers should have the information and process it locally. (Should I create one channel per server on each of them?) The frontend is written in Rails, so it is important, in order to simplify development, that there is a gem / plugin to manage communications and the data sent. At this time RabbitMQ + workling seems to fit my needs. Could this be the right choice? ActiveMQ makes me nervous because of Java (I really don't know Java very well, but it seems to me to be a big CPU consumer). Others don't seem to be as mature as these two. There might be a lot of development using this kind of technology, so I can't go down the wrong path! Thank you for your help.

    Read the article

  • Serialization Performance and Google Android

    - by Jomanscool2
    I'm looking for advice to speed up serialization performance, specifically when using the Google Android. For a project I am working on, I am trying to relay a couple hundred objects from a server to the Android app, and am going through various stages to get the performance I need. First I tried a terrible XML parser that I hacked together using Scanner specifically for this project, and that caused unbelievably slow performance when loading the objects (~5 minutes for a 300KB file). I then moved away from that and made my classes implement Serializable and wrote the ArrayList of objects I had to a file. Reading that file into the objects the Android, with the file already downloaded mind you, was taking ~15-30 seconds for the ~100KB serialized file. I still find this completely unacceptable for an Android app, as my app requires loading the data when starting the application. I have read briefly about Externalizable and how it can increase performance, but I am not sure as to how one implements it with nested classes. Right now, I am trying to store an ArrayList of the following class, with the nested classes below it. public class MealMenu implements Serializable{ private String commonsName; private long startMillis, endMillis, modMillis; private ArrayList<Venue> venues; private String mealName; } And the Venue class: public class Venue implements Serializable{ private String name; private ArrayList<FoodItem> foodItems; } And the FoodItem class: public class FoodItem implements Serializable{ private String name; private boolean vegan; private boolean vegetarian; } IF Externalizable is the way to go to increase performance, is there any information as to how java calls the methods in the objects when you try to write it out? I am not sure if I need to implement it in the parent class, nor how I would go about serializing the nested objects within each object.
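
    A hedged sketch of Externalizable for the innermost class above: each class writes and reads its own fields in a fixed order, and a public no-argument constructor is required. This is an illustration of the mechanics, not a claim about how much faster it will be on Android.

        import java.io.Externalizable;
        import java.io.IOException;
        import java.io.ObjectInput;
        import java.io.ObjectOutput;

        public class FoodItem implements Externalizable {
            private String name;
            private boolean vegan;
            private boolean vegetarian;

            public FoodItem() { }   // required by Externalizable

            public void writeExternal(ObjectOutput out) throws IOException {
                out.writeUTF(name);
                out.writeBoolean(vegan);
                out.writeBoolean(vegetarian);
            }

            public void readExternal(ObjectInput in) throws IOException {
                name = in.readUTF();
                vegan = in.readBoolean();
                vegetarian = in.readBoolean();
            }
        }

    Venue and MealMenu would then write the size of their lists with writeInt and write each element the same way, mirroring the order exactly in readExternal.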

    Read the article

  • What is the correct way to create dynamic javascript in ASP.net MVC2?

    - by sabbour
    I'm creating a Google Maps partial view/user control in my project that is passed a strongly typed list of objects containing latitude and longitude values. Currently, this is the code I have for the partial: <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<IEnumerable<Project.Models.Entities.Location>>" %> <!-- Place for google to put the map --> <div id="report_map_canvas" style="width: 100%; height: 728px; margin-bottom: 2px;"> </div> <script type='text/javascript'> google.load("maps", "2"); $(document).ready(initializeMap); function initializeMap() { if (GBrowserIsCompatible()) { var map = new GMap2(document.getElementById('report_map_canvas')); map.setCenter(new GLatLng(51.5, -0.1167), 2); <% foreach (var item in Model) { %> map.addOverlay(new GMarker(new GLatLng('<%= Html.Encode(item.latitude)%>','<%= Html.Encode(item.longitude)%>'),{ title: '<%= Html.Encode(String.Format("{0:F}",item.speed)) %> km/h '})); <% } %> map.setUIToDefault(); } } </script> Is it right to dynamically create the javascript file this way by looping over the list and emitting javascript? Is there a better way to do it?
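
    A hedged alternative sketch: serialize the coordinates once into a JSON array and build the markers in a plain client-side loop, so the view emits data rather than one addOverlay statement per item. It assumes JavaScriptSerializer (System.Web.Script.Serialization) and System.Linq are available to the view; this is not presented as the canonical MVC2 pattern.

        // Inside initializeMap(), replacing the <% foreach %> loop above:
        var locations = <%= new System.Web.Script.Serialization.JavaScriptSerializer()
                .Serialize(Model.Select(l => new { l.latitude, l.longitude, l.speed })) %>;

        for (var i = 0; i < locations.length; i++) {
            map.addOverlay(new GMarker(
                new GLatLng(locations[i].latitude, locations[i].longitude),
                { title: locations[i].speed + ' km/h' }));
        }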

    Read the article

  • JQuery Cycle fails on Page Refresh

    - by Darknight
    This is a similar issue to this one: http://stackoverflow.com/questions/1719475/jquery-cycle-firefox-squishing-images I've managed to overcome the initial problem using Jeff's answer in the above link. However, I have now noticed a new bug: upon page refresh it simply does not work. I have tried a hard refresh (ctrl+F5) but this does not work. However, when you come back to the page it loads fine. Here is my modified version (taken from Jeff's): <script type="text/javascript"> $(document).ready(function() { var imagesRemaining = $('#slideshow img').length; $('#slideshow img').bind('load', function(e) { imagesRemaining = imagesRemaining - 1; if (imagesRemaining == 0) { $('#slideshow').show(); $('#slideshow').cycle({ fx: 'shuffle', speed: 1200 }); } }); }); </script> Any ideas? I've also tried jQuery Live but could not implement it correctly. I've also tried meta tags to force images to load. But it only works the first time round.
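
    A hedged guess at the refresh problem: after a refresh the images often come straight from the browser cache, so their load events can fire before the handler is bound and the counter never reaches zero. Checking each image's complete flag covers the cached case; the rest mirrors the code above.

        $(document).ready(function() {
            var $images = $('#slideshow img');
            var imagesRemaining = $images.length;

            function imageDone() {
                imagesRemaining = imagesRemaining - 1;
                if (imagesRemaining == 0) {
                    $('#slideshow').show();
                    $('#slideshow').cycle({ fx: 'shuffle', speed: 1200 });
                }
            }

            $images.each(function() {
                if (this.complete) {
                    imageDone();                       // already loaded from cache
                } else {
                    $(this).one('load error', imageDone);
                }
            });
        });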

    Read the article

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the clojure implementations were the slowest in each case. For instance, Python - hello.py def hello_world(name): print "Hello, %s" % name hello_world("world") and result, $ time python hello.py Hello, world real 0m0.027s user 0m0.013s sys 0m0.014s Java - hello.java import java.io.*; public class hello { public static void hello_world(String name) { System.out.println("Hello, " + name); } public static void main(String[] args) { hello_world("world"); } } and result, $ time java hello Hello, world real 0m0.324s user 0m0.296s sys 0m0.065s and finally, Clojure - hellofun.clj (defn hello-world [username] (println (format "Hello, %s" username))) (hello-world "world") and results, $ time clj hellofun.clj Hello, world real 0m1.418s user 0m1.649s sys 0m0.154s Thats a whole, garangutan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al that need to be used in order to speed up execution? More importantly - isn't this huge difference in performance going to be an issue at some point? (I mean, lets say I was using Clojure for a production system - the gain I get in using lisp seems completely offset by the performance issues I can see here). The machine used here is a 2007 Macbook Pro running Snow Leopard, a 2.16Ghz Intel C2D and 2G DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like, #!/bin/bash JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java CLJ_DIR=/opt/jars CLOJURE=$CLJ_DIR/clojure.jar CONTRIB=$CLJ_DIR/clojure-contrib.jar JLINE=$CLJ_DIR/jline-0.9.94.jar CP=$PWD:$CLOJURE:$JLINE:$CONTRIB # Add extra jars as specified by `.clojure` file if [ -f .clojure ] then CP=$CP:`cat .clojure` fi if [ -z "$1" ]; then $JAVA -server -cp $CP \ jline.ConsoleRunner clojure.lang.Repl else scriptname=$1 $JAVA -server -cp $CP clojure.main $scriptname -- $* fi
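
    For what it's worth, a hedged way to separate JVM/Clojure start-up cost from the program itself is to time the call inside an already-running process; the bulk of the 1.4 s is typically loading clojure.jar, not hello-world. The printed timing below is illustrative, not a measured result.

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        ;; From a running REPL the call itself is sub-millisecond:
        (time (hello-world "world"))
        ;; Hello, world
        ;; "Elapsed time: 0.3 msecs"   (illustrative)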

    Read the article

  • Using prepared statements with JDBCTemplate

    - by Bernhard V
    Hi. I'm using the JdbcTemplate and want to read from the database using prepared statements. I iterate over many lines in a CSV file, and on every line I execute some SQL select queries with its values. Now I want to speed up my reading from the database, but I just can't get the JdbcTemplate to work with prepared statements. Actually, I don't even know how to do it. There are the PreparedStatementCreator and the PreparedStatementSetter. As in this example, both of them are created with anonymous inner classes. But inside the PreparedStatementCreator class I don't have access to the values I want to set in the prepared statement. Since I'm iterating through a CSV file, I can't hard-code them as a String because I don't know them. I also can't pass them to the PreparedStatementCreator because there are no arguments for the constructor. I was used to the creation of prepared statements being fairly simple. Something like PreparedStatement updateSales = con.prepareStatement( "UPDATE COFFEES SET SALES = ? WHERE COF_NAME LIKE ? "); updateSales.setInt(1, 75); updateSales.setString(2, "Colombian"); updateSales.executeUpdate(); as in the Java tutorial. Your help would be very much appreciated.
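
    A hedged sketch of the var-args style JdbcTemplate calls, which build the PreparedStatement internally so nothing needs to be hard-coded inside a PreparedStatementCreator; the DAO wrapper and SQL are illustrative only.

        import org.springframework.jdbc.core.JdbcTemplate;

        public class SalesDao {
            private final JdbcTemplate jdbcTemplate;

            public SalesDao(JdbcTemplate jdbcTemplate) {
                this.jdbcTemplate = jdbcTemplate;
            }

            // '?' placeholders plus trailing arguments, one per placeholder.
            public void updateSales(int sales, String cofName) {
                jdbcTemplate.update(
                    "UPDATE COFFEES SET SALES = ? WHERE COF_NAME LIKE ?",
                    sales, cofName);
            }

            public Integer readSales(String cofName) {
                return jdbcTemplate.queryForObject(
                    "SELECT SALES FROM COFFEES WHERE COF_NAME LIKE ?",
                    new Object[] { cofName }, Integer.class);
            }
        }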

    Read the article

  • Does anyone know of a good Commercial WPF Web Browser Control?

    - by VoidDweller
    I have an MDI WPF app that I need to add web content too. At first, great it looks like I have 2 options built into the framework the Frame control and the WebBrowser control. Given that this is an MDI app it doesn't take long to discover that neither of these will work. The WebBrowser control wraps up the IE WebBrowser ActiveX Control which uses the Win32 graphics pipeline. The "Airspace" issue pretty much sums this up as "Sorry, the layouts will not play nice together". Yes, I have thought about taking snapshots of the web content rendering these and mapping the mouse and keyboard events back to the browser control, but I can't afford the performance penalty and I really don't have time to write and thoroughly test it. I have looked for third party controls, but so far I have only found Chris Cavanagh's WPF Chromium Web Browser control. Which wraps up Awesomium 1.5. Together these are very cool, they play nice with the WPF layouts. But they do not meet my performance requirements. They are VERY HEAVY on memory consumption and not to friendly with CPU usage either. Not to mention still quite buggy. I'll elaborate if you are interested. So, do any of you know of a stable performant WPF web browser control? Thanks.

    Read the article

  • Direct invocation vs indirect invocation in C

    - by Mohit Deshpande
    I am new to C and I was reading about how pointers "point" to the address of another variable. So I have tried indirect invocation and direct invocation and received the same results (as any C/C++ developer could have predicted). This is what I did: int cost; int *cost_ptr; int main() { cost_ptr = &cost; //assign pointer to cost cost = 100; //intialize cost with a value printf("\nDirect Access: %d", cost); cost = 0; //reset the value *cost_ptr = 100; printf("\nIndirect Access: %d", *cost_ptr); //some code here return 0; //1 } So I am wondering if indirect invocation with pointers has any advantages over direct invocation or vice-versa. Some advantages/disadvantages could include speed, amount of memory consumed performing the operation (most likely the same but I just wanted to put that out there), safeness (like dangling pointers) , good programming practice, etc. 1Funny thing, I am using the GNU C Compiler (gcc) and it still compiles without the return statement and everything is as expected. Maybe because the C++ compiler will automatically insert the return statement if you forget.
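
    The two accesses above compile to essentially the same thing for a file-scope variable, so any speed difference is negligible; the practical advantage of indirection is reaching memory you can't name directly. A small hedged example, using a hypothetical add_tax helper:

        #include <stdio.h>

        /* Indirection lets a function modify the caller's variable. */
        static void add_tax(int *cost_ptr)
        {
            *cost_ptr += *cost_ptr / 10;   /* works on whatever it points at */
        }

        int main(void)
        {
            int cost = 100;
            add_tax(&cost);                /* pass the address, not a copy */
            printf("cost with tax: %d\n", cost);
            return 0;
        }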

    Read the article

  • getting SIGSEGV in std::_List_const_iterator<Exiv2::Exifdatum>::operator++ whilst using jni

    - by HJED
    Hi I'm using jni to access the exiv2 API in my Java project and I'm getting a SIGSEGV error in std::_List_const_iterator::operator++. I'm uncertain how to fix this error. I've tried using high -Xmx values as well as running on both jdk1.6.0 (server and cacao JVMs) and 1.7.0 (server JVM). gdb traceback: #0 0x00007fffa36f2363 in std::_List_const_iterator<Exiv2::Exifdatum>::operator++ (this=0x7ffff7fd3500) at /usr/include/c++/4.4/bits/stl_list.h:223 #1 0x00007fffa36f2310 in std::__distance<std::_List_const_iterator<Exiv2::Exifdatum> > (__first=..., __last=...) at /usr/include/c++/4.4/bits/stl_iterator_base_funcs.h:79 #2 0x00007fffa36f224d in std::distance<std::_List_const_iterator<Exiv2::Exifdatum> > (__first=..., __last=...) at /usr/include/c++/4.4/bits/stl_iterator_base_funcs.h:114 #3 0x00007fffa36f1f27 in std::list<Exiv2::Exifdatum, std::allocator<Exiv2::Exifdatum> >::size (this=0x7fffa4030910) at /usr/include/c++/4.4/bits/stl_list.h:805 #4 0x00007fffa36f1d50 in Exiv2::ExifData::count (this=0x7fffa4030910) at /usr/local/include/exiv2/exif.hpp:518 #5 0x00007fffa36f1d30 in Exiv2::ExifData::empty (this=0x7fffa4030910) at /usr/local/include/exiv2/exif.hpp:516 #6 0x00007fffa36f1763 in getVars (path=0x7fffa401d2f0 "/home/hjed/PC100001.JPG", env=0x6131c8, obj=0x7ffff7fd37a8) at src/main.cpp:146 #7 0x00007fffa36f19d8 in Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv (env=0x6131c8, obj=0x7ffff7fd37a8, path=0x7ffff7fd37a0, obj2=0x7ffff7fd3798) at src/main.cpp:160 #8 0x00007ffff21d9cc8 in ?? () #9 0x00000000fffffffe in ?? () #10 0x00007ffff7fd3740 in ?? () #11 0x0000000000613000 in ?? () #12 0x00007ffff7fd3738 in ?? () #13 0x00007fffaa1076e0 in ?? () #14 0x00007ffff7fd37a8 in ?? () #15 0x00007fffaa108d10 in ?? () #16 0x0000000000000000 in ?? () Java error: # A fatal error has been detected by the Java Runtime Environment: # # SIGSEGV (0xb) at pc=0x00007fac11223363, pid=11905, tid=140378349111040 # # JRE version: 6.0_20-b20 # Java VM: OpenJDK 64-Bit Server VM (19.0-b09 mixed mode linux-amd64 ) # Derivative: IcedTea6 1.9.2 # Distribution: Ubuntu 10.10, package 6b20-1.9.2-0ubuntu2 # Problematic frame: # C [libExiff2-binding.so+0x4363] _ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0xf # # If you would like to submit a bug report, please include # instructions how to reproduce the bug and visit: # https://bugs.launchpad.net/ubuntu/+source/openjdk-6/ # The crash happened outside the Java Virtual Machine in native code. # See problematic frame for where to report the bug. # --------------- T H R E A D --------------- Current thread (0x0000000000dbf000): JavaThread "main" [_thread_in_native, id=11909, stack(0x00007fac61920000,0x00007fac61a21000)] siginfo:si_signo=SIGSEGV: si_errno=0, si_code=128 (), si_addr=0x0000000000000000 Registers: ... 
Register to memory mapping: RAX=0x6c8948f0245c8948 0x6c8948f0245c8948 is pointing to unknown location RBX=0x00007fac0c042c00 0x00007fac0c042c00 is pointing to unknown location RCX=0x0000000000000000 0x0000000000000000 is pointing to unknown location RDX=0x6c8948f0245c8948 0x6c8948f0245c8948 is pointing to unknown location RSP=0x00007fac61a1f4e0 0x00007fac61a1f4e0 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE RBP=0x00007fac61a1f4e0 0x00007fac61a1f4e0 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE RSI=0x00007fac61a1f4f0 0x00007fac61a1f4f0 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE RDI=0x00007fac61a1f500 0x00007fac61a1f500 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R8 =0x00007fac0c054630 0x00007fac0c054630 is pointing to unknown location R9 =0x00007fac61a1f358 0x00007fac61a1f358 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R10=0x00007fac61a1f270 0x00007fac61a1f270 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R11=0x00007fac11223354 0x00007fac11223354: _ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0 in /home/hjed/libExiff2-binding.so at 0x00007fac1121f000 R12=0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R13=0x00007fac13ad1be8 {method} - klass: {other class} R14=0x00007fac61a1f7a8 0x00007fac61a1f7a8 is pointing into the stack for thread: 0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE R15=0x0000000000dbf000 "main" prio=10 tid=0x0000000000dbf000 nid=0x2e85 runnable [0x00007fac61a1f000] java.lang.Thread.State: RUNNABLE Top of Stack: (sp=0x00007fac61a1f4e0) ... Instructions: (pc=0x00007fac11223363) ... 
Stack: [0x00007fac61920000,0x00007fac61a21000], sp=0x00007fac61a1f4e0, free space=1021k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) C [libExiff2-binding.so+0x4363] _ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0xf C [libExiff2-binding.so+0x4310] _ZSt10__distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_St18input_iterator_tag+0x26 C [libExiff2-binding.so+0x424d] _ZSt8distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_+0x36 C [libExiff2-binding.so+0x3f27] _ZNKSt4listIN5Exiv29ExifdatumESaIS1_EE4sizeEv+0x33 C [libExiff2-binding.so+0x3d50] _ZNK5Exiv28ExifData5countEv+0x18 C [libExiff2-binding.so+0x3d30] _ZNK5Exiv28ExifData5emptyEv+0x18 C [libExiff2-binding.so+0x3763] _Z7getVarsPKcP7JNIEnv_P8_jobject+0x3e3 C [libExiff2-binding.so+0x39d8] Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv+0x4b j photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V+0 j photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V+9 j photo.exiv2.Exiv2MetaDataStore.loadData()V+1 j photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V+10 j photo.ImageFile.<init>(Ljava/lang/String;)V+11 j test.Main.main([Ljava/lang/String;)V+67 v ~StubRoutines::call_stub V [libjvm.so+0x428698] V [libjvm.so+0x4275c8] V [libjvm.so+0x432943] V [libjvm.so+0x447f91] C [java+0x3495] JavaMain+0xd75 Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V+0 j photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V+9 j photo.exiv2.Exiv2MetaDataStore.loadData()V+1 j photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V+10 j photo.ImageFile.<init>(Ljava/lang/String;)V+11 j test.Main.main([Ljava/lang/String;)V+67 v ~StubRoutines::call_stub --------------- P R O C E S S --------------- Java Threads: ( => current thread ) 0x00007fac0c028000 JavaThread "Low Memory Detector" daemon [_thread_blocked, id=11924, stack(0x00007fac11532000,0x00007fac11633000)] 0x00007fac0c025800 JavaThread "CompilerThread1" daemon [_thread_blocked, id=11923, stack(0x00007fac11633000,0x00007fac11734000)] 0x00007fac0c022000 JavaThread "CompilerThread0" daemon [_thread_blocked, id=11922, stack(0x00007fac11734000,0x00007fac11835000)] 0x00007fac0c01f800 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=11921, stack(0x00007fac11835000,0x00007fac11936000)] 0x00007fac0c001000 JavaThread "Finalizer" daemon [_thread_blocked, id=11920, stack(0x00007fac11e2d000,0x00007fac11f2e000)] 0x0000000000e36000 JavaThread "Reference Handler" daemon [_thread_blocked, id=11919, stack(0x00007fac11f2e000,0x00007fac1202f000)] =>0x0000000000dbf000 JavaThread "main" [_thread_in_native, id=11909, stack(0x00007fac61920000,0x00007fac61a21000)] Other Threads: 0x0000000000e2f800 VMThread [stack: 0x00007fac1202f000,0x00007fac12130000] [id=11918] 0x00007fac0c02b000 WatcherThread [stack: 0x00007fac11431000,0x00007fac11532000] [id=11925] ... 
Heap PSYoungGen total 18432K, used 632K [0x00007fac47210000, 0x00007fac486a0000, 0x00007fac5bc10000) eden space 15808K, 4% used [0x00007fac47210000,0x00007fac472ae188,0x00007fac48180000) from space 2624K, 0% used [0x00007fac48410000,0x00007fac48410000,0x00007fac486a0000) to space 2624K, 0% used [0x00007fac48180000,0x00007fac48180000,0x00007fac48410000) PSOldGen total 42240K, used 0K [0x00007fac1de10000, 0x00007fac20750000, 0x00007fac47210000) object space 42240K, 0% used [0x00007fac1de10000,0x00007fac1de10000,0x00007fac20750000) PSPermGen total 21248K, used 2831K [0x00007fac13810000, 0x00007fac14cd0000, 0x00007fac1de10000) object space 21248K, 13% used [0x00007fac13810000,0x00007fac13ad3d80,0x00007fac14cd0000) Dynamic libraries: ... VM Arguments: jvm_args: -Dfile.encoding=UTF-8 java_command: test.Main Launcher Type: SUN_STANDARD Environment Variables: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games USERNAME=hjed LD_LIBRARY_PATH=/usr/lib/jvm/java-6-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-6-openjdk/jre/lib/amd64:/usr/lib/jvm/java-6-openjdk/jre/../lib/amd64 SHELL=/bin/bash DISPLAY=:0.0 Signal Handlers: ... --------------- S Y S T E M --------------- OS:Ubuntu 10.10 (maverick) uname:Linux 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64 libc:glibc 2.12.1 NPTL 2.12.1 rlimit: STACK 8192k, CORE 0k, NPROC infinity, NOFILE 1024, AS infinity load average:0.27 0.31 0.30 /proc/meminfo: MemTotal: 4048200 kB MemFree: 106552 kB Buffers: 838212 kB Cached: 1172496 kB SwapCached: 0 kB Active: 1801316 kB Inactive: 1774880 kB Active(anon): 1224708 kB Inactive(anon): 355012 kB Active(file): 576608 kB Inactive(file): 1419868 kB Unevictable: 64 kB Mlocked: 64 kB SwapTotal: 7065596 kB SwapFree: 7065596 kB Dirty: 20 kB Writeback: 0 kB AnonPages: 1565608 kB Mapped: 213424 kB Shmem: 14216 kB Slab: 164812 kB SReclaimable: 102576 kB SUnreclaim: 62236 kB KernelStack: 4784 kB PageTables: 44908 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 9089696 kB Committed_AS: 3676872 kB VmallocTotal: 34359738367 kB VmallocUsed: 332952 kB VmallocChunk: 34359397884 kB HardwareCorrupted: 0 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 48704 kB DirectMap2M: 4136960 kB CPU:total 8 (4 cores per cpu, 2 threads per core) family 6 model 26 stepping 5, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht Memory: 4k page, physical 4048200k(106552k free), swap 7065596k(7065596k free) vm_info: OpenJDK 64-Bit Server VM (19.0-b09) for linux-amd64 JRE (1.6.0_20-b20), built on Dec 10 2010 19:45:55 by "buildd" with gcc 4.4.5 main.cpp: jobject toJava(std::auto_ptr<Exiv2::Value> v, const char * type, JNIEnv * env) { jclass stringClass; jmethodID cid; jobject result; stringClass = env->FindClass("photo/exiv2/Value"); cid = env->GetMethodID(stringClass, "<init>", "(Ljava/lang/String;Ljava/lang/Object;)V"); jvalue val; if ((strcmp(type, "String") == 0) || (strcmp(type, "String") == 0)) { val.l = env->NewStringUTF(v->toString().c_str()); } else if (strcmp(type, "Short") == 0) { val.s = v->toLong(0); } else if (strcmp(type, "Long") == 0) { val.j = v->toLong(0); } result = env->NewObject(stringClass, cid, env->NewStringUTF(v->toString().c_str()), val); return result; } void inLoop(std::auto_ptr<MetadataContainer> md, JNIEnv * env, jmethodID mid, jobject obj) { jvalue values[2]; const char* key = md->key().c_str(); values[0].l = env->NewStringUTF(key); /** md->value().toString().c_str(); const char* 
value = md->typeName(); values[1].l = env->NewStringUTF(value); TODO: do type conversions */ //std::cout << md->typeName() << std::endl; /** const char* type = md->value().toString().c_str(); values[1].l = env->NewStringUTF(type);*/ values[1].l = toJava(md->getValue(), md->typeName(), env); env->CallVoidMethodA(obj, mid, values); } void getVars(const char* path, JNIEnv * env, jobject obj) { //Load image Exiv2::Image::AutoPtr image = Exiv2::ImageFactory::open(path); assert(image.get() != 0); image->readMetadata(); //load method jclass cls = env->GetObjectClass(obj); jmethodID mid = env->GetMethodID(cls, "exiv2_reciveElement", "(Ljava/lang/String;Lphoto/exiv2/Value;)V"); //Load IPTC data /**loadIPTC(image, path, env, obj, mid); loadEXIF(image, path, env, obj, mid);*/ Exiv2::IptcData &iptcData = image->iptcData(); if (mid != NULL) { //is there any IPTC data AND check that method exists if (iptcData.empty()) { std::string error(path); error += ": failed loading IPTC data, there may not be any data"; } else { Exiv2::IptcData::iterator end = iptcData.end(); for (Exiv2::IptcData::iterator md = iptcData.begin(); md != end; ++md) { std::auto_ptr<MetadataContainer> meta(new MetadataContainer(md)); inLoop(meta, env, mid, obj); } } Exiv2::ExifData &exifData = image->exifData(); //is there any Exif data AND check that method exists if (exifData.empty()) { //error occurs here (main.cpp:146) std::string error(path); error += ": failed loading Exif data, there may not be any data"; } else { Exiv2::ExifData::iterator end = exifData.end(); for (Exiv2::ExifData::iterator md = exifData.begin(); md != end; ++md) { std::auto_ptr<MetadataContainer> meta(new MetadataContainer(md)); inLoop(meta, env, mid, obj); } } } else { std::string error(path); error += ": failed to load method"; } } JNIEXPORT void JNICALL Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv(JNIEnv * env, jobject obj, jstring path, jobject obj2) { const char* path2 = env->GetStringUTFChars(path, NULL); getVars(path2, env, obj); env->ReleaseStringUTFChars(path, path2); } Thanks for any help, HJED EDIT This is the output when runing the jvm with the -cacao option: run: null:/usr/local/lib Error: Directory Olympus2 with 1536 entries considered invalid; not read. LOG: [0x00007ff005376700] We received a SIGSEGV and tried to handle it, but we were LOG: [0x00007ff005376700] unable to find a Java method at: LOG: [0x00007ff005376700] LOG: [0x00007ff005376700] PC=0x00007feffe4ee67d LOG: [0x00007ff005376700] LOG: [0x00007ff005376700] Dumping the current stacktrace: at photo.exiv2.Exiv2MetaDataStore.impl_loadFromExiv(Ljava/lang/String;Lphoto/exiv2/Exiv2MetaDataStore;)V(Native Method) at photo.exiv2.Exiv2MetaDataStore.loadFromExiv2()V(Exiv2MetaDataStore.java:38) at photo.exiv2.Exiv2MetaDataStore.loadData()V(Exiv2MetaDataStore.java:29) at photo.exiv2.MetaDataStore.<init>(Lphoto/ImageFile;)V(MetaDataStore.java:33) at photo.exiv2.Exiv2MetaDataStore.<init>(Lphoto/ImageFile;)V(Exiv2MetaDataStore.java:20) at photo.ImageFile.<init>(Ljava/lang/String;)V(ImageFile.java:22) at test.Main.main([Ljava/lang/String;)V(Main.java:28) LOG: [0x00007ff005376700] vm_abort: WARNING, port me to C++ and use os::abort() instead. LOG: [0x00007ff005376700] Exiting... 
LOG: [0x00007ff005376700] Backtrace (15 stack frames): LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x4ff54) [0x7ff004306f54] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x5ac01) [0x7ff004311c01] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x66e9a) [0x7ff00431de9a] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x76408) [0x7ff00432d408] LOG: [0x00007ff005376700] /usr/lib/jvm/java-6-openjdk/jre/lib/amd64/cacao/libjvm.so(+0x79a4c) [0x7ff004330a4c] LOG: [0x00007ff005376700] /lib/libpthread.so.0(+0xfb40) [0x7ff004d53b40] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNSt20_List_const_iteratorIN5Exiv29ExifdatumEEppEv+0xf) [0x7feffe4ee67d] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZSt10__distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_St18input_iterator_tag+0x26) [0x7feffe4ee62a] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZSt8distanceISt20_List_const_iteratorIN5Exiv29ExifdatumEEENSt15iterator_traitsIT_E15difference_typeES5_S5_+0x36) [0x7feffe4ee567] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNKSt4listIN5Exiv29ExifdatumESaIS1_EE4sizeEv+0x33) [0x7feffe4ee22b] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNK5Exiv28ExifData5countEv+0x18) [0x7feffe4ee054] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_ZNK5Exiv28ExifData5emptyEv+0x18) [0x7feffe4ee034] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(_Z7getVarsPKcP7JNIEnv_P8_jobject+0x3d7) [0x7feffe4ed947] LOG: [0x00007ff005376700] /home/hjed/libExiff2-binding.so(Java_photo_exiv2_Exiv2MetaDataStore_impl_1loadFromExiv+0x4b) [0x7feffe4edcdc] LOG: [0x00007ff005376700] [0x7feffe701ccd] Java Result: 134 BUILD SUCCESSFUL (total time: 0 seconds)

    Read the article

  • WF performance with new 20,000 persisted workflow instances each month

    - by Nikola Stjelja
    Windows Workflow Foundation is known to be slow when persisting WF instances. I'm planning a project whose business layer will be based on WF-exposed WCF services. The project will have 20,000 new workflow instances created each month, and each instance could take up to 2 months to finish. I was led to believe that, given WF's slowness at persistence, my problem would be unattainable for performance reasons. I have the following questions: Is this true? Will my performance be poor under that load (given WF persistence speed limitations)? How can I solve the problem? We currently have two possible solutions: 1. Each new business process request (e.g. "Give me a new driver's license") will be a new WF instance, and the number of persistence operations will be limited by forwarding all status request operations to saved state values in a separate database. 2. Have only a small number of workflow instances up at any given time, with no persistence whatsoever (only in case of system crashes etc.), by breaking each workflow step into a separate workflow, and having that workflow handle every business process request instance in the system that is currently at that step (e.g. I'm submitting my driver's license request form, which is step one... we have 100 cases of that, and my step-one workflow will handle every case simultaneously). I'm very interested in a solution for this problem. If you want to discuss it, please feel free to mail me at [email protected]

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on Centos 5.4 PHP 5.2.10 Xdebug 2.0.5 with Remote Debugging enabled APC 3.0.19 Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC MySQL 5.0.77 using Query Caching I've noticed that when I start up Apache, I eventually end up 10 child processes. As time goes on, each process will grow in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output: PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1471 apache 16 0 626m 201m 18m S 0.0 10.2 1:11.02 httpd 1470 apache 16 0 622m 198m 18m S 0.0 10.1 1:14.49 httpd 1469 apache 16 0 619m 197m 18m S 0.0 10.0 1:11.98 httpd 1462 apache 18 0 622m 197m 18m S 0.0 10.0 1:11.27 httpd 1460 apache 15 0 622m 195m 18m S 0.0 10.0 1:12.73 httpd 1459 apache 16 0 618m 191m 18m S 0.0 9.7 1:13.00 httpd 1461 apache 18 0 616m 190m 18m S 0.0 9.7 1:14.09 httpd 1468 apache 18 0 613m 190m 18m S 0.0 9.7 1:12.67 httpd 7919 apache 18 0 116m 75m 15m S 0.0 3.8 0:19.86 httpd 9486 apache 16 0 97.7m 56m 14m S 0.0 2.9 0:13.51 httpd I have no long-running scripts (they all terminate eventually, the longest being maybe 2 minutes long), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated. (Maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time, it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
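
    While tracking the leak down (Xdebug traces, logging php memory_get_usage() per request, or pointing valgrind/massif at a single child are the usual tools), a hedged stop-gap is to recycle children before they balloon; the threshold below is an assumption, not a recommendation.

        # prefork MPM: replace each child after it has served N requests,
        # so a slow per-request leak cannot grow without bound.
        <IfModule prefork.c>
            MaxRequestsPerChild 500
        </IfModule>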

    Read the article

  • Is it practical to program with your feet?

    - by bmm
    Has anyone tried using foot pedals in addition to the traditional keyboard and mouse combo to improve your effectiveness in the editor? Any actual experiences out there? Does it work, or is it just for carpal tunnel relief? I found one blog entry from a programmer who actually tried it: So now I can type using my feet for most of the modifier keys. I am using the pedals as I type this. I am still getting used to them, but the burning in my left wrist has definitely reduced. I think I can also type a little faster, but I am too lazy to do the speed tests with and without the pedals to verify this. On the negative side: Working out where to put your feet when you aren’t typing can be a little awkward. The pedals tend to move around the carpet, despite being metal and quite heavy. Some small spikes might have helped. Although the travel on the pedals is small, they are surprisingly stiff. Another programmer's experience: Anybody with hand pain must get foot pedals, since they can remove a tremendous load from your hands. I have two foot pedals, and use one for the SHIFT key, and the other for the CONTROL key. (I still type META by hand.) I have found that in the process of using the Emacs text editor to compose computer programs, I tend to use the SHIFT, CONTROL and META keys constantly, and it is easy to remove most of this load from one's hands. Some foot switch products: Savant Elite Triple Foot Switch FragPedal Bilbo Step On It!

    Read the article

  • What algorithms are suitable for this simple machine learning problem?

    - by user213060
    I have what I think is a simple machine learning question. Here is the basic problem: I am repeatedly given a new object and a list of descriptions about the object. For example: new_object: 'bob' new_object_descriptions: ['tall','old','funny']. I then have to use some kind of machine learning to find previously handled objects that had similar descriptions, for example, past_similar_objects: ['frank','steve','joe']. Next, I have an algorithm that can directly measure whether these objects are indeed similar to bob, for example, correct_objects: ['steve','joe']. The classifier is then given this feedback training of successful matches. Then this loop repeats with a new object. Here's the pseudo-code: Classifier=new_classifier() while True: new_object,new_object_descriptions = get_new_object_and_descriptions() past_similar_objects = Classifier.classify(new_object,new_object_descriptions) correct_objects = calc_successful_matches(new_object,past_similar_objects) Classifier.train_successful_matches(new_object,correct_objects) But there are some stipulations that may limit which classifier can be used: There will be millions of objects put into this classifier, so classification and training need to scale well to millions of object types and still be fast. I believe this disqualifies something like a spam classifier that is optimal for just two types: spam or not spam. (Update: I could probably narrow this to thousands of objects instead of millions, if that is a problem.) Again, I prefer speed when millions of objects are being classified, over accuracy. What are decent, fast machine learning algorithms for this purpose?
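
    As a hedged baseline (not something taken from the question), an inverted index from description to object names gives fast "most descriptions in common" candidates and scales with the number of matching objects rather than the total count; the feedback training step would sit on top of something like this.

        from collections import defaultdict

        class DescriptionIndex(object):
            def __init__(self):
                self.by_description = defaultdict(set)   # description -> names

            def train(self, name, descriptions):
                for d in descriptions:
                    self.by_description[d].add(name)

            def classify(self, descriptions, top_n=10):
                # Count shared descriptions per previously seen object.
                scores = defaultdict(int)
                for d in descriptions:
                    for candidate in self.by_description[d]:
                        scores[candidate] += 1
                ranked = sorted(scores, key=scores.get, reverse=True)
                return ranked[:top_n]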

    Read the article

  • How can you start a process from asp.net without interfering with the website?

    - by Sem Dendoncker
    Hi, We have an asp.net application that is able to create .air files. To do this we use the following code: System.Diagnostics.Process process = new System.Diagnostics.Process(); //process.StartInfo.FileName = strBatchFile; if (File.Exists(@"C:\Program Files\Java\jre6\bin\java.exe")) { process.StartInfo.FileName = @"C:\Program Files\Java\jre6\bin\java.exe"; } else { process.StartInfo.FileName = @"C:\Program Files (x86)\Java\jre6\bin\java.exe"; } process.StartInfo.Arguments = GetArguments(); process.StartInfo.RedirectStandardOutput = true; process.StartInfo.RedirectStandardError = true; process.StartInfo.UseShellExecute = false; process.PriorityClass = ProcessPriorityClass.Idle; process.Start(); string strOutput = process.StandardOutput.ReadToEnd(); string strError = process.StandardError.ReadToEnd(); HttpContext.Current.Response.Write(strOutput + "<p>" + strError + "</p>"); process.WaitForExit(); Well the problem now is that sometimes the cpu of the server is reaching 100% causing the application to run very slow and even lose sessions (we think this is the problem). Is there any other solution on how to generate air files or run an external process without interfering with the asp.net application? Cheers, M.
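
    A hedged sketch of two small changes that may help regardless of where the CPU spike comes from: PriorityClass can only be set on a process that has already started, and reading both streams with ReadToEnd() back to back can stall, so draining stderr asynchronously avoids that. This is an assumption-level sketch, not a diagnosis of the 100% CPU.

        process.Start();
        process.PriorityClass = ProcessPriorityClass.Idle;   // only valid after Start()

        // Drain stderr in the background while stdout is read synchronously,
        // so neither pipe can fill up and block the child process.
        var errorText = new System.Text.StringBuilder();
        process.ErrorDataReceived += (sender, e) => errorText.AppendLine(e.Data);
        process.BeginErrorReadLine();

        string strOutput = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        string strError = errorText.ToString();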

    Read the article

  • LaTex: why partially showing up references?

    - by HH
    The bib.style part may be the problem. If I do not reference to references, do they show up? I have listed all errors below, the file compiles so I don't know whether they are related to partially-showing-up-references. For example, work with many authors gets only one author listed. I want to see references fully, not partially. Headers $ grep bib header.tex \usepackage{natbib} \bibliographystyle{abbrvnat} Errors $ grep -n -A 7 -B 7 Error *.log combined.log-505-! Illegal unit of measure (pt inserted). combined.log-506-<to be read again> combined.log-507- \futurelet combined.log-508-l.353 \hline combined.log-509- combined.log-510-? combined.log-511- combined.log:512:! Package caption Error: cite undefined. combined.log-513- combined.log-514-See the caption package documentation for explanation. combined.log-515-Type H <return> for immediate help. combined.log-516- ... combined.log-517- combined.log-518-l.374 ...n={CPU O(mlog(n))}, cite={topcoder:node}] combined.log-519- -- combined.log-559- [] combined.log-560- combined.log-561-) [10] combined.log-562-\openout2 = `references.aux'. combined.log-563- combined.log-564- (./references.tex combined.log-565- combined.log:566:! LaTeX Error: \include cannot be nested. combined.log-567- combined.log-568-See the LaTeX manual or LaTeX Companion for explanation. combined.log-569-Type H <return> for immediate help. combined.log-570- ... combined.log-571- combined.log-572-l.1 \include{timeUse.tex} Bibs.bib @misc{ Gundersen, author = "G. Gundersen", title = "Data Structures in Java for Matrix Computations", year = "2002" } @book{ Lennart, author = "R. Lennart", title = "Mathematics Handbook for Science and Engineering BETA", year = "2004" }
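
    For the "do uncited entries show up?" part: with BibTeX they don't, unless you \cite or \nocite them, and the log's "\include cannot be nested" error suggests references.tex is pulling in another file with a nested \include. A hedged sketch of the usual natbib wiring follows; the file names are taken from the post, everything else is an assumption.

        \usepackage{natbib}
        \bibliographystyle{abbrvnat}
        ...
        \citep{Gundersen}      % cited entries appear in the reference list
        \nocite{Lennart}       % forces an uncited entry into the list
        % \nocite{*}           % or: list every entry in the .bib file
        ...
        \bibliography{Bibs}    % then run latex, bibtex, latex, latex
        % inside an \include'd file, use \input{timeUse.tex} rather than \include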

    Read the article

  • web browser become slow or no response after several ajax calls

    - by Patrick
    I'm a total newbie to jQuery and Ajax; my recent project is to help representatives (reps) manage customer quotations online. I have a page which displays all the quotations in a big table. I've managed to use Ajax to fetch and display the quotations associated with a particular rep after I click that rep's name. The only problem is the speed of response. The first few clicks are OK and very smooth, but after several tries the response becomes slow, I can't even scroll down the webpage, and later on the web browser crashes... Please have a look at my Ajax code. Here it is: <!-- AJAX FETCH QUOTES DATA + Tablesorter + FIXED TABLE HEADER--> <script type="text/javascript"> //<![CDATA[ $(function(){ $("a.repID").click(function(){ $('div#loader').append("<p align='center'><img src='images/loadingbar2.gif' id='loading' /></p>"); var repID = $(this).attr("title"); $.ajax({ type:'POST', url:'quote_info.php', data:'repID=' + repID, cache: false, success:function(data) { $("#container").html('<div id="content">' + data + '</div>'); $("#loading").fadeOut(500, function() {$(this).remove();}); $("#sortme").tablesorter(); $('.tbl').fixedtableheader(); } }); return false; }); }); </script> <!-- AJAX FETCH QUOTES DATA + Tablesorter + FIXED TABLE HEADER-->
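
    A hedged guess at the slowdown: rapid clicks can stack several overlapping requests, each of which re-runs tablesorter and fixedtableheader on a fresh table. Keeping a single in-flight request, so each response fully replaces the old markup before the plugins are re-applied, is one way to stop the browser from bogging down; the structure below mirrors the code above.

        var currentRequest = null;

        $("a.repID").click(function() {
            var repID = $(this).attr("title");
            if (currentRequest) {
                currentRequest.abort();      // drop the previous in-flight call
            }
            currentRequest = $.ajax({
                type: 'POST',
                url: 'quote_info.php',
                data: 'repID=' + repID,
                cache: false,
                success: function(data) {
                    $("#container").html('<div id="content">' + data + '</div>');
                    $("#sortme").tablesorter();
                    $('.tbl').fixedtableheader();
                }
            });
            return false;
        });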

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using JSONpickle, using the following method, to serialize a dictionary describing all the data points to file: def json_serialize(obj, filename, use_jsonpickle=True): f = open(filename, 'w') if use_jsonpickle: import jsonpickle json_obj = jsonpickle.encode(obj) f.write(json_obj) else: simplejson.dump(obj, f, indent=1) f.close() The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The json file is ~20 Megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size, and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading of a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speed ups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files). thanks.

    Read the article

  • Python MD5 Hash Faster Calculation

    - by balgan
    Hi everyone. I will try my best to explain my problem and my line of thought on how I think I can solve it. I use this code for root, dirs, files in os.walk(downloaddir): for infile in files: f = open(os.path.join(root,infile),'rb') filehash = hashlib.md5() while True: data = f.read(10240) if len(data) == 0: break filehash.update(data) print "FILENAME: " , infile print "FILE HASH: " , filehash.hexdigest() and using start = time.time() elapsed = time.time() - start I measure how long it takes to calculate an hash. Pointing my code to a file with 653megs this is the result: root@Mars:/home/tiago# python algorithm-timer.py FILENAME: freebsd.iso FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab 12.624 root@Mars:/home/tiago# python algorithm-timer.py FILENAME: freebsd.iso FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab 12.373 root@Mars:/home/tiago# python algorithm-timer.py FILENAME: freebsd.iso FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab 12.540 Ok now 12 seconds +- on a 653mb file, my problem is I intend to use this code on a program that will run through multiple files, some of them might be 4/5/6Gb and it will take wayy longer to calculate. What am wondering is if there is a faster way for me to calculate the hash of the file? Maybe by doing some multithreading? I used a another script to check the use of the CPU second by second and I see that my code is only using 1 out of my 2 CPUs and only at 25% max, any way I can change this? Thank you all in advance for the given help.

    Read the article
