Search Results

Search found 16787 results on 672 pages for 'mod disk cache'.


  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.3.3 web site. We're using Memcache to cache rendered pages, but also as the storage engine for PHP sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers, and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up, and that this kills the processes. Is it possible for someone to confirm that from the following backtrace?

        #0  _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018
        #1  0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50
        #2  mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169
        #3  0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917
        #4  0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185
        #5  0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625
        #6  php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517
        #7  0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171
        #8  0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150
        #9  0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755
        #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866
        #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607
        #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509
        #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681
        #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158
        #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372
        #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282
        #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190
        #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43
        #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662
        #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758
        #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893
        #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097
        #23 0x00007fb678402890 in main (argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740

    My php.ini follows:

        [PHP]
        engine = On
        short_open_tag = On
        asp_tags = Off
        precision = 14
        y2k_compliance = On
        output_buffering = 4096
        zlib.output_compression = Off
        implicit_flush = Off
        unserialize_callback_func =
        serialize_precision = 100
        allow_call_time_pass_reference = Off
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        disable_functions =
        disable_classes =
        expose_php = On
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = Off
        display_startup_errors = Off
        log_errors = Off
        log_errors_max_len = 1024
        ignore_repeated_errors = Off
        ignore_repeated_source = Off
        report_memleaks = On
        track_errors = Off
        html_errors = Off
        variables_order = "GPCS"
        request_order = "GP"
        register_globals = Off
        register_long_arrays = Off
        register_argc_argv = Off
        auto_globals_jit = On
        post_max_size = 8M
        magic_quotes_gpc = Off
        magic_quotes_runtime = Off
        magic_quotes_sybase = Off
        auto_prepend_file =
        auto_append_file =
        default_mimetype = "text/html"
        doc_root =
        user_dir =
        enable_dl = Off
        file_uploads = On
        upload_max_filesize = 2M
        allow_url_fopen = On
        allow_url_include = Off
        default_socket_timeout = 60
        [Date]
        [filter]
        [iconv]
        [intl]
        [sqlite]
        [sqlite3]
        [Pcre]
        [Pdo]
        [Phar]
        [Syslog]
        define_syslog_variables = Off
        [mail function]
        SMTP = localhost
        smtp_port = 25
        sendmail_path = /usr/sbin/sendmail -t -i
        mail.add_x_header = On
        [SQL]
        sql.safe_mode = Off
        [ODBC]
        odbc.allow_persistent = On
        odbc.check_persistent = On
        odbc.max_persistent = -1
        odbc.max_links = -1
        odbc.defaultlrl = 4096
        odbc.defaultbinmode = 1
        [MySQL]
        mysql.allow_persistent = On
        mysql.max_persistent = -1
        mysql.max_links = -1
        mysql.default_port =
        mysql.default_socket =
        mysql.default_host =
        mysql.default_user =
        mysql.default_password =
        mysql.connect_timeout = 60
        mysql.trace_mode = Off
        [MySQLi]
        mysqli.max_links = -1
        mysqli.default_port = 3306
        mysqli.default_socket =
        mysqli.default_host =
        mysqli.default_user =
        mysqli.default_pw =
        mysqli.reconnect = Off
        [PostgresSQL]
        pgsql.allow_persistent = On
        pgsql.auto_reset_persistent = Off
        pgsql.max_persistent = -1
        pgsql.max_links = -1
        pgsql.ignore_notice = 0
        pgsql.log_notice = 0
        [Sybase-CT]
        sybct.allow_persistent = On
        sybct.max_persistent = -1
        sybct.max_links = -1
        sybct.min_server_severity = 10
        sybct.min_client_severity = 10
        [bcmath]
        bcmath.scale = 0
        [browscap]
        [Session]
        session.save_handler = files
        session.save_path = "/var/lib/php/session"
        session.use_cookies = 1
        session.use_only_cookies = 1
        session.name = PHPSESSID
        session.auto_start = 1
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 1
        session.gc_divisor = 1000
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = Off
        session.bug_compat_warn = Off
        session.referer_check =
        session.entropy_length = 0
        session.entropy_file =
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 0
        session.hash_function = 0
        session.hash_bits_per_character = 5
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"
        [MSSQL]
        mssql.allow_persistent = On
        mssql.max_persistent = -1
        mssql.max_links = -1
        mssql.min_error_severity = 10
        mssql.min_message_severity = 10
        mssql.compatability_mode = Off
        mssql.secure_connection = Off
        [Assertion]
        [COM]
        [mbstring]
        [gd]
        [exif]
        [Tidy]
        tidy.clean_output = Off
        [soap]
        soap.wsdl_cache_enabled=1
        soap.wsdl_cache_dir="/tmp"
        soap.wsdl_cache_ttl=86400

    /etc/php.d/memcached.ini:

        session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
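    A hedged observation from the dump rather than a confirmed fix: the crash happens inside ps_close_memcache() from pecl/memcache 3.0.4 while PHP flushes the session at request shutdown, and the php.ini above still says session.save_handler = files (with session.auto_start = 1), while only session.save_path is overridden in /etc/php.d/memcached.ini. It may be worth making the handler and path explicit in one place, and trying a newer build of the 3.0.x extension (3.0.4 was still a beta release). Illustrative snippet, with the hostname and options copied from the question:

        ; /etc/php.d/memcached.ini -- make the session handler explicit
        extension = memcache.so
        session.save_handler = memcache
        session.save_path = "tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"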


  • .Net Custom Configuration Section and Saving Changes within PropertyGrid

    - by Paul
    If I load the My.Settings object (app.config) into a PropertyGrid, I am able to edit the properties inside the PropertyGrid and the changes are automatically saved:

        PropertyGrid1.SelectedObject = My.Settings

    I want to do the same with a custom configuration section. In the following code example (from http://www.codeproject.com/KB/vb/SerializePropertyGrid.aspx), the author does explicit serialization to disk when a "Save" button is pushed:

        Public Class Form1
            'Load AppSettings
            Dim _appSettings As New AppSettings()

            Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
                _appSettings = AppSettings.Load()
                ' Actually change the form size
                Me.Size = _appSettings.WindowSize
                PropertyGrid1.SelectedObject = _appSettings
            End Sub

            Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button2.Click
                _appSettings.Save()
            End Sub
        End Class

    In my code, my custom section inherits from ConfigurationSection (see below). Question: is there something built into the ConfigurationSection class that does the autosave? If not, what is the best way to handle this? Should it be done in PropertyGrid.PropertyValueChanged? (How does My.Settings handle this internally?) Here is the example custom class that I am trying to get to auto-save, and how I load it into the property grid:

        Dim config As System.Configuration.Configuration = _
            ConfigurationManager.OpenExeConfiguration( _
            ConfigurationUserLevel.None)
        PropertyGrid2.SelectedObject = config.GetSection("CustomSection")

        Public NotInheritable Class CustomSection
            Inherits ConfigurationSection

            ' The collection (property bag) that contains
            ' the section properties.
            Private Shared _Properties As ConfigurationPropertyCollection

            ' The FileName property.
            Private Shared _FileName As New ConfigurationProperty("fileName", GetType(String), "def.txt", ConfigurationPropertyOptions.IsRequired)

            ' The MaxUsers property.
            Private Shared _MaxUsers _
                As New ConfigurationProperty("maxUsers", _
                GetType(Int32), 1000, _
                ConfigurationPropertyOptions.None)

            ' The MaxIdleTime property.
            Private Shared _MaxIdleTime _
                As New ConfigurationProperty("maxIdleTime", _
                GetType(TimeSpan), TimeSpan.FromMinutes(5), _
                ConfigurationPropertyOptions.IsRequired)

            ' CustomSection constructor.
            Public Sub New()
                _Properties = New ConfigurationPropertyCollection()
                _Properties.Add(_FileName)
                _Properties.Add(_MaxUsers)
                _Properties.Add(_MaxIdleTime)
            End Sub 'New

            ' This is a key customization.
            ' It returns the initialized property bag.
            Protected Overrides ReadOnly Property Properties() _
                    As ConfigurationPropertyCollection
                Get
                    Return _Properties
                End Get
            End Property

            <StringValidator( _
                InvalidCharacters:=" ~!@#$%^&*()[]{}/;'""|\", _
                MinLength:=1, MaxLength:=60)> _
            <EditorAttribute(GetType(System.Windows.Forms.Design.FileNameEditor), GetType(System.Drawing.Design.UITypeEditor))> _
            Public Property FileName() As String
                Get
                    Return CStr(Me("fileName"))
                End Get
                Set(ByVal value As String)
                    Me("fileName") = value
                End Set
            End Property

            <LongValidator(MinValue:=1, _
                MaxValue:=1000000, ExcludeRange:=False)> _
            Public Property MaxUsers() As Int32
                Get
                    Return Fix(Me("maxUsers"))
                End Get
                Set(ByVal value As Int32)
                    Me("maxUsers") = value
                End Set
            End Property

            <TimeSpanValidator(MinValueString:="0:0:30", _
                MaxValueString:="5:00:0", ExcludeRange:=False)> _
            Public Property MaxIdleTime() As TimeSpan
                Get
                    Return CType(Me("maxIdleTime"), TimeSpan)
                End Get
                Set(ByVal value As TimeSpan)
                    Me("maxIdleTime") = value
                End Set
            End Property
        End Class 'CustomSection
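    A possible answer, hedged since it has not been tested against this exact class: ConfigurationSection has no built-in autosave; My.Settings only appears to autosave because the generated Settings class wires up its own save call. The usual approach is exactly the PropertyValueChanged route: keep the Configuration object that produced the section, and save it whenever the grid reports a change. A minimal sketch:

        ' Minimal sketch, assuming _config is the Configuration whose section was
        ' handed to the grid (PropertyGrid2.SelectedObject = _config.GetSection(...)).
        Private _config As System.Configuration.Configuration = _
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None)

        Private Sub PropertyGrid2_PropertyValueChanged(ByVal s As Object, _
                ByVal e As PropertyValueChangedEventArgs) _
                Handles PropertyGrid2.PropertyValueChanged
            ' The edited section belongs to _config, so saving _config
            ' writes the new values back to the .config file on disk.
            _config.Save(ConfigurationSaveMode.Modified)
            ConfigurationManager.RefreshSection("CustomSection")
        End Sub

    The RefreshSection call just makes sure any later GetSection calls see the freshly saved values.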


  • Comments show up in database, but only show up on my index page after a refresh.

    - by Truong
    Hi, I have AJAX, PHP, jQuery, and MySQL in this very simple website I'm trying to make. All there is is a textarea that sends data to the database and uses AJAX/jQuery to display that data on the index page. For some reason, though, I press submit and the data goes to the database, but I have to refresh the page myself to see that data on the page. I'm assuming that the problem has to do with my AJAX/jQuery, or even some mistake in the index. Also, when I type text into the textarea and press submit, the text remains in the textarea until I refresh the page. Haha, sorry if this is such a noob question... I'm trying to learn. Thanks so much. Here is the AJAX:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
        <script type="text/javascript">
        $(function() {
            $(".submit").click(function() {
                var comment = $("#comment").val();
                var post_id = $("#post").val();
                var dataString = '&comment=' + comment
                if (comment == '') {
                    alert('Fill something in please!');
                } else {
                    $("#flash").show();
                    $("#flash").fadeIn(400).html('<img src="noworries.jpg" /> ');
                    $.ajax({
                        type: "POST",
                        url: "commentajax.php",
                        data: dataString,
                        cache: false,
                        success: function(html){
                            $("ol#update").append(html);
                            $("ol#update li:last").fadeIn("slow");
                            $("#flash").hide();
                        }
                    });
                }
                return false;
            });
        });
        </script>

    Here is the index/form area:

        <body>
        <div id="container"><img src="banner.jpg" width="890" height="150" alt="title" /></div>
        <id="update" class="timeline">
        <div id="flash"></div>
        <div id="container">
            <form action="#" method="post">
                <textarea name="comment" id="comment" cols="35" rows="4"></textarea><br />
                <input name="submit" type="submit" class="submit" id="submit" value=" Submit Comment " /><br />
            </form>
        </div>
        <id="update" class="timeline">
        <?php
        include('config.php');
        //$post_id value comes from the POSTS table
        $prefix="I'm happy when";
        $sql=mysql_query("select * from comments order by com_id desc");
        while($row=mysql_fetch_array($sql))
        {
            $comment=$row['com_dis'];
        ?>
        <!--Displaying comments-->
        <div id="container">
            <class="box">
            <?php echo "$prefix $comment"; ?>
        </div>
        <?php } ?>

    Here is my commentajax.php:

        <?php
        include('config.php');
        if($_POST)
        {
            $comment=$_POST['comment'];
            $comment=mysql_real_escape_string($comment);
            mysql_query("INSERT INTO comments(com_id,com_dis) VALUES ('NULL', '$comment')");
        }
        ?>
        <li class="box"><br />
        <?php echo $comment; ?>
        </li>

    I'm sorry for so much code, but I just started learning this four days ago and this is probably one of the last bugs until the website is functional.
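    A hedged guess at the cause, based only on the markup above: the success callback appends to $("ol#update"), but the page never contains an <ol> with id="update". The two <id="update" class="timeline"> tags are invalid HTML (the element name is missing), so jQuery has nothing to append to, and the comment only appears after a reload, when PHP re-renders the list. Presumably what was intended was something like:

        <!-- a real list element for jQuery to target -->
        <ol id="update" class="timeline"></ol>

    And the leftover text in the textarea can be cleared in the same success callback, no reload needed:

        success: function(html){
            $("ol#update").append(html);
            $("ol#update li:last").fadeIn("slow");
            $("#comment").val('');   // clears the textarea without a refresh
            $("#flash").hide();
        }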


  • How to optimize code with introspection + heavy alloc on iPhone

    - by mamcx
    I have a problem. I am trying to display a UITableView that could have 2,000-20,000 records (typical numbers). I have a SQLite database similar to the Apple Contacts application. I do all the tricks I know to get a smooth scroll, but I still have a problem. I load the data in blocks of 50 records; when the user scrolls, I request the next 50, until the list is finished. However, loading those 50 records causes a noticeable "pause" in loading and scrolling. Everything else works fine: I cache the data, have opaque cells, draw them in code, etc. I swapped the code to load the same data into dicts and got a performance boost, but I wonder if I could keep my object-oriented approach and improve the actual code. This is the code I think has the performance problem:

        -(NSArray *) loadAndFill: (NSString *)sql theClass: (Class)cls {
            [self openDb];
            NSMutableArray *list = [NSMutableArray array];
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            DbObject *ds;
            Class myClass = NSClassFromString([DbObject getTableName:cls]);
            FMResultSet *rs = [self load:sql];
            while ([rs next]) {
                ds = [[myClass alloc] init];
                NSDictionary *props = [ds properties];
                NSString *fieldType = nil;
                id fieldValue;
                for (NSString *fieldName in [props allKeys]) {
                    fieldType = [props objectForKey: fieldName];
                    fieldValue = [self ValueForField:rs Name:fieldName Type:fieldType];
                    [ds setValue:fieldValue forKey:fieldName];
                }
                [list addObject :ds];
                [ds release];
            }
            [rs close];
            [pool drain];
            return list;
        }

    And I think the main culprit is:

        -(id) ValueForField: (FMResultSet *)rs Name:(NSString *)fieldName Type:(NSString *)fieldType {
            id fieldValue = nil;
            if ([fieldType isEqualToString:@"i"] || // int
                [fieldType isEqualToString:@"I"] || // unsigned int
                [fieldType isEqualToString:@"s"] || // short
                [fieldType isEqualToString:@"S"] || // unsigned short
                [fieldType isEqualToString:@"f"] || // float
                [fieldType isEqualToString:@"d"] )  // double
            {
                fieldValue = [NSNumber numberWithInt: [rs longForColumn:fieldName]];
            }
            else if ([fieldType isEqualToString:@"B"]) // bool or _Bool
            {
                fieldValue = [NSNumber numberWithBool: [rs boolForColumn:fieldName]];
            }
            else if ([fieldType isEqualToString:@"l"] || // long
                     [fieldType isEqualToString:@"L"] || // unsigned long
                     [fieldType isEqualToString:@"q"] || // long long
                     [fieldType isEqualToString:@"Q"] )  // unsigned long long
            {
                fieldValue = [NSNumber numberWithLong: [rs longForColumn:fieldName]];
            }
            else if ([fieldType isEqualToString:@"c"] || // char
                     [fieldType isEqualToString:@"C"] )  // unsigned char
            {
                fieldValue = [rs stringForColumn:fieldName];
                //Is it really a boolean?
                if ([fieldValue isEqualToString:@"0"] || [fieldValue isEqualToString:@"1"]) {
                    fieldValue = [NSNumber numberWithInt: [fieldValue intValue]];
                }
            }
            else if ([fieldType hasPrefix:@"@"] ) // Object
            {
                NSString *className = [fieldType substringWithRange:NSMakeRange(2, [fieldType length]-3)];
                if ([className isEqualToString:@"NSString"]) {
                    fieldValue = [rs stringForColumn:fieldName];
                }
                else if ([className isEqualToString:@"NSDate"]) {
                    NSDateFormatter* dateFormatter = [[NSDateFormatter alloc] init];
                    [dateFormatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss"];
                    NSString *theDate = [rs stringForColumn:fieldName];
                    if (theDate) {
                        fieldValue = [dateFormatter dateFromString: theDate];
                    } else {
                        fieldValue = nil;
                    }
                    [dateFormatter release];
                }
                else if ([className isEqualToString:@"NSInteger"]) {
                    fieldValue = [NSNumber numberWithInt: [rs intForColumn :fieldName]];
                }
                else if ([className isEqualToString:@"NSDecimalNumber"]) {
                    fieldValue = [rs stringForColumn :fieldName];
                    if (fieldValue) {
                        fieldValue = [NSDecimalNumber decimalNumberWithString:[rs stringForColumn :fieldName]];
                    }
                }
                else if ([className isEqualToString:@"NSNumber"]) {
                    fieldValue = [NSNumber numberWithDouble: [rs doubleForColumn:fieldName]];
                }
                else {
                    //Is it a one-to-one relationship?
                    if (![fieldType hasPrefix:@"NS"]) {
                        id rel = class_createInstance(NSClassFromString(className), sizeof(unsigned));
                        Class theClass = [rel class];
                        if ([rel isKindOfClass:[DbObject class]]) {
                            fieldValue = [rel init];
                            //Load the record...
                            NSInteger Id = [rs intForColumn:[theClass relationName]];
                            if (Id > 0) {
                                [fieldValue release];
                                Db *db = [Db currentDb];
                                fieldValue = [db loadById: theClass theId:Id];
                            }
                        }
                    } else {
                        NSString *error = [NSString stringWithFormat:@"Err Can't get value for field %@ of type %@", fieldName, fieldType];
                        NSLog(error);
                        NSException *e = [NSException exceptionWithName:@"DBError" reason:error userInfo:nil];
                        @throw e;
                    }
                }
            }
            return fieldValue;
        }
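    Not a full answer, but two allocations in the code above are classic hot spots: [ds properties] re-introspects the class for every row, and a fresh NSDateFormatter is created and released per date column per row (NSDateFormatter construction is notoriously expensive on the iPhone). A sketch of caching both, under the assumption that [ds properties] depends only on the class:

        // Cache per-class metadata and share one formatter across rows.
        static NSMutableDictionary *propsCache = nil;
        static NSDateFormatter *sharedFormatter = nil;

        - (NSDictionary *)propertiesForClass:(Class)myClass sample:(DbObject *)ds {
            if (propsCache == nil) propsCache = [[NSMutableDictionary alloc] init];
            NSString *key = NSStringFromClass(myClass);
            NSDictionary *props = [propsCache objectForKey:key];
            if (props == nil) {
                props = [ds properties];          // introspect once per class...
                [propsCache setObject:props forKey:key];
            }
            return props;                          // ...not once per row
        }

        - (NSDateFormatter *)sharedDateFormatter {
            if (sharedFormatter == nil) {
                sharedFormatter = [[NSDateFormatter alloc] init];
                [sharedFormatter setDateFormat:@"yyyy-MM-dd'T'HH:mm:ss"];
            }
            return sharedFormatter;                // built once, reused per row
        }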


  • Applet Loading Error - Jasper Report

    - by Mihir
    I encountered a very silly error, but I cannot figure out the solution. I am loading a Java applet which embeds a simple JasperReports viewer in it. When the applet is loaded, it throws the following exception:

        SEVERE: Servlet.service() for servlet JasperReportServlet threw exception
        java.lang.ClassNotFoundException: org.apache.commons.collections.ReferenceMap
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1358)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1204)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            at net.sf.jasperreports.extensions.DefaultExtensionsRegistry.<init>(DefaultExtensionsRegistry.java:96)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at java.lang.Class.newInstance0(Class.java:355)
            at java.lang.Class.newInstance(Class.java:308)
            at net.sf.jasperreports.engine.util.ClassUtils.instantiateClass(ClassUtils.java:59)
            at net.sf.jasperreports.extensions.ExtensionsEnvironment.createDefaultRegistry(ExtensionsEnvironment.java:80)
            at net.sf.jasperreports.extensions.ExtensionsEnvironment.<clinit>(ExtensionsEnvironment.java:68)
            at net.sf.jasperreports.engine.util.JRStyledTextParser.<clinit>(JRStyledTextParser.java:76)
            at net.sf.jasperreports.engine.fill.JRBaseFiller.<init>(JRBaseFiller.java:182)
            at net.sf.jasperreports.engine.fill.JRVerticalFiller.<init>(JRVerticalFiller.java:77)
            at net.sf.jasperreports.engine.fill.JRVerticalFiller.<init>(JRVerticalFiller.java:87)
            at net.sf.jasperreports.engine.fill.JRVerticalFiller.<init>(JRVerticalFiller.java:57)
            at net.sf.jasperreports.engine.fill.JRFiller.createFiller(JRFiller.java:142)
            at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:78)
            at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:624)
            at com.dbhl.app.report.generator.JobUpdateGenerator.getJasperPrintObject(JobUpdateGenerator.java:279)
            at com.dbhl.app.report.JasperReportServlet.processJobUpdate(JasperReportServlet.java:153)
            at com.dbhl.app.report.JasperReportServlet.getJasperPrintObjectByLedgerType(JasperReportServlet.java:79)
            at com.dbhl.app.report.JasperReportServlet.service(JasperReportServlet.java:50)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:263)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:844)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:584)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)

    Below is my applet configuration; I am loading the applet using the standard Java Deployment Toolkit:

        <script type="text/javascript" src="<%=basePath%>js/deployJava.js"> </script>
        <script>
            var user = '<%=request.getParameter("user")%>';
            var attributes = {
                code : 'applet.EmbeddedViewerApplet.class',
                archive : '<%=basePath%>resources/appletviewer.jar,<%=basePath%>resources/jasperreports-applet-4.0.0.jar,<%=basePath%>resources/jasperreports-4.0.0.jar,<%=basePath%>resources/commons-collections-3.2.1.jar,<%=basePath%>resources/commons-logging-1.0.4.jar,<%=basePath%>resources/commons-beanutils-1.8.0.jar,<%=basePath%>resources/commons-digester-1.7.jar,<%=basePath%>resources/commons-javaflow-20060411.jar,<%=basePath%>resources/org-netbeans-core.jar',
                width : "100%",
                height : 600
            };
            var parameters = {
                fontSize : 16,
                REPORT_URL : '<%=basePath%>servlet/JasperReportServlet?startDate=<%=request.getParameter("startDate")%>&endDate=<%=request.getParameter("endDate")%>&user=' + user + '&reportType=<%=request.getParameter("reportType")%>'
            };
            var version = '1.4';
            deployJava.runApplet(attributes, parameters, version);
        </script>

    Every jar referred to in the applet attributes exists in the resources folder of my webroot:

        appletviewer.jar
        commons-beanutils-1.8.0.jar
        commons-collections-3.2.1.jar
        commons-digester-1.7.jar
        commons-javaflow-20060411.jar
        commons-logging-1.0.4.jar
        jasperreports-4.0.0.jar
        jasperreports-applet-4.0.0.jar
        org-netbeans-core.jar

    All the jars were signed today, so no signatures have expired. I have double-checked all of this, but it always shows the above error. In iReport I can view the report, and it compiles to a .jasper object with no errors. The Java console output from the control panel is here: http://pastebin.com/Xt6303tT

    My question is: why does the ClassNotFoundException happen? I checked in the temp cache that the commons-collections file downloaded successfully, and the console log above shows that the jars were successfully downloaded to the host computer. Thank You Mihir Parekh
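    One reading of the trace, offered as a strong hint rather than a verified fix: the exception comes from Tomcat's WebappClassLoader inside Servlet.service(), so it is the server-side web application, not the applet, that cannot find org.apache.commons.collections.ReferenceMap. The archive list only affects the browser JVM; downloading those jars to the client will not help the servlet that fills the report. The jar also needs to be on the webapp's own classpath, roughly:

        # hypothetical paths -- put the jar where the server-side webapp can see it
        cp commons-collections-3.2.1.jar $CATALINA_HOME/webapps/<yourapp>/WEB-INF/lib/
        # then restart Tomcat so the webapp classloader picks it up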


  • Android using ksoap calling PHP SOAP webservice fails: procedure 'CheckLogin' not found

    - by AmazingDreams
    I'm trying to call a PHP SOAP webservice. I know my webservice functions correctly, because I use it successfully in a WPF project. I'm also building an app on Android, using the same webservice. The WSDL file can be found here: http://www.wegotcha.nl/servicehandler/service.wsdl This is my code in the Android app:

        String SOAP_ACTION = "http://www.wegotcha.nl/servicehandler/CheckLogin";
        String NAMESPACE = "http://www.wegotcha.nl/servicehandler";
        String METHOD_NAME = "CheckLogin";
        String URL = "http://www.wegotcha.nl/servicehandler/servicehandler.php";
        String resultData = "";

        SoapSerializationEnvelope soapEnv = new SoapSerializationEnvelope(SoapEnvelope.VER11);
        SoapObject request = new SoapObject(NAMESPACE, METHOD_NAME);
        SoapObject UserCredentials = new SoapObject("Types", "UserCredentials6");
        UserCredentials.addProperty("mail", params[0]);
        UserCredentials.addProperty("password", md5(params[1]));
        request.addSoapObject(UserCredentials);
        soapEnv.setOutputSoapObject(request);

        HttpTransportSE http = new HttpTransportSE(URL);
        http.setXmlVersionTag("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
        http.debug = true;
        try {
            http.call(SOAP_ACTION, soapEnv);
        } catch (IOException e) {
            e.printStackTrace();
        } catch (XmlPullParserException e) {
            e.printStackTrace();
        }
        SoapObject results = null;
        results = (SoapObject) soapEnv.bodyOut;
        if (results != null)
            resultData = results.getProperty(0).toString();
        return resultData;

    Using Fiddler I captured the Android request:

        <?xml version="1.0" encoding="utf-8"?>
        <v:Envelope xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns:d="http://www.w3.org/2001/XMLSchema" xmlns:c="http://schemas.xmlsoap.org/soap/encoding/" xmlns:v="http://schemas.xmlsoap.org/soap/envelope/"><v:Header />
          <v:Body>
            <n0:CheckLogin id="o0" c:root="1" xmlns:n0="http://www.wegotcha.nl/servicehandler">
              <n1:UserCredentials6 i:type="n1:UserCredentials6" xmlns:n1="Types">
                <mail i:type="d:string">myemail</mail>
                <password i:type="d:string">myhashedpass</password>
              </n1:UserCredentials6>
            </n0:CheckLogin>
          </v:Body>
        </v:Envelope>

    which gets the following response: Procedure 'CheckLogin' not present. The request produced by my WPF app looks completely different:

        <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
          <s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
            <UserCredentials6 xmlns="Types">
              <mail>mymail</mail>
              <password>mypassword</password>
            </UserCredentials6>
          </s:Body>
        </s:Envelope>

    After googling my ass off I was not able to solve this problem by myself. It may be that there are some weird things in my Java code, because I have changed a lot since. I hope you guys will be able to help me, thanks.

    EDIT: My webservice uses document/literal encoding style. After doing some research, I found I should be able to use SoapEnvelope instead of SoapSerializationEnvelope. However, when I replace it I get an error before the try/catch block, causing my app to crash:

        11-04 16:23:26.786: E/AndroidRuntime(26447): Caused by: java.lang.ClassCastException: org.ksoap2.serialization.SoapObject cannot be cast to org.kxml2.kdom.Node

    which is caused by these lines:

        request.addSoapObject(UserCredentials);
        soapEnv.setOutputSoapObject(request);

    This may be a solution, though; how do I go about it? I found nothing about using a SoapEnvelope instead of a SoapSerializationEnvelope except for this tutorial: http://ksoap.objectweb.org/project/mailingLists/ksoap/msg00849.html
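    A hedged suggestion from comparing the two captures: the working WPF request has no CheckLogin wrapper and no i:type attributes, i.e. plain document/literal with UserCredentials6 as the body element. ksoap2 can usually be coaxed into that shape while keeping SoapSerializationEnvelope; something along these lines:

        // Sketch under the assumption (from the WPF capture) that the service
        // expects a bare <UserCredentials6> body, not an rpc-style wrapper.
        SoapSerializationEnvelope soapEnv =
                new SoapSerializationEnvelope(SoapEnvelope.VER11);
        soapEnv.implicitTypes = true;   // suppress the i:type="..." attributes

        SoapObject request = new SoapObject("Types", "UserCredentials6");
        request.addProperty("mail", params[0]);
        request.addProperty("password", md5(params[1]));
        soapEnv.setOutputSoapObject(request);   // body element is UserCredentials6 itself

        HttpTransportSE http = new HttpTransportSE(URL);
        http.call(SOAP_ACTION, soapEnv);

    Dropping the CheckLogin wrapper is the part most likely to matter, since it is that wrapper element the PHP handler is rejecting with "procedure 'CheckLogin' not present".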


  • asp.net image aspect ratio help

    - by StealthRT
    Hey all, I am in need of some help with keeping an image's aspect ratio in check. This is the ASPX code I have to resize and upload an image the user selects:

        <%@ Page Trace="False" Language="vb" aspcompat="false" debug="true" validateRequest="false"%>
        <%@ Import Namespace=System.Drawing %>
        <%@ Import Namespace=System.Drawing.Imaging %>
        <%@ Import Namespace=System %>
        <%@ Import Namespace=System.Web %>
        <SCRIPT LANGUAGE="VBScript" runat="server">
        const Lx = 500                   ' max width for thumbnails
        const Ly = 60                    ' max height for thumbnails
        const upload_dir = "/uptest/"    ' directory to upload file
        const upload_original = "sample" ' filename to save original as (suffix added by script)
        const upload_thumb = "thumb"     ' filename to save thumbnail as (suffix added by script)
        const upload_max_size = 512      ' max size of the upload (KB) note: this doesn't override any server upload limits

        dim fileExt                         ' used to store the file extension (saves finding it multiple times)
        dim newWidth, newHeight as integer  ' new width/height for the thumbnail
        dim l2                              ' temp variable used when calculating new size
        dim fileFld as HTTPPostedFile       ' used to grab the file upload from the form
        Dim originalimg As System.Drawing.Image ' used to hold the original image
        dim msg                             ' display results
        dim upload_ok as boolean            ' did the upload work?
        </script>
        <%
        randomize() ' used to help the cache-busting on the preview images
        upload_ok = false
        if lcase(Request.ServerVariables("REQUEST_METHOD"))="post" then
            fileFld = request.files(0) ' get the first file uploaded from the form (note: you can use this to iterate through more than one image)
            if fileFld.ContentLength > upload_max_size * 1024 then
                msg = "Sorry, the image must be less than " & upload_max_size & "Kb"
            else
                try
                    originalImg = System.Drawing.Image.FromStream(fileFld.InputStream)
                    ' work out the width/height for the thumbnail. Preserve aspect ratio and honour max width/height
                    ' Note: if the original is smaller than the thumbnail size it will be scaled up
                    If (originalImg.Width/Lx) > (originalImg.Width/Ly) Then
                        L2 = originalImg.Width
                        newWidth = Lx
                        newHeight = originalImg.Height * (Lx / L2)
                        if newHeight > Ly then
                            newWidth = newWidth * (Ly / newHeight)
                            newHeight = Ly
                        end if
                    Else
                        L2 = originalImg.Height
                        newHeight = Ly
                        newWidth = originalImg.Width * (Ly / L2)
                        if newWidth > Lx then
                            newHeight = newHeight * (Lx / newWidth)
                            newWidth = Lx
                        end if
                    End If

                    Dim thumb As New Bitmap(newWidth, newHeight)
                    'Create a graphics object
                    Dim gr_dest As Graphics = Graphics.FromImage(thumb)
                    ' just in case it's a transparent GIF force the bg to white
                    dim sb = new SolidBrush(System.Drawing.Color.White)
                    gr_dest.FillRectangle(sb, 0, 0, thumb.Width, thumb.Height)
                    'Re-draw the image to the specified height and width
                    gr_dest.DrawImage(originalImg, 0, 0, thumb.Width, thumb.Height)
                    try
                        fileExt = System.IO.Path.GetExtension(fileFld.FileName).ToLower()
                        originalImg.save(Server.MapPath(upload_dir & upload_original & fileExt), originalImg.rawformat)
                        thumb.save(Server.MapPath(upload_dir & upload_thumb & fileExt), originalImg.rawformat)
                        msg = "Uploaded " & fileFld.FileName & " to " & Server.MapPath(upload_dir & upload_original & fileExt)
                        upload_ok = true
                    catch
                        msg = "Sorry, there was a problem saving the image."
                    end try
                    ' Housekeeping for the generated thumbnail
                    if not thumb is nothing then
                        thumb.Dispose()
                        thumb = nothing
                    end if
                catch
                    msg = "Sorry, that was not an image we could process."
                end try
            end if
            ' House Keeping !
            if not originalImg is nothing then
                originalImg.Dispose()
                originalImg = nothing
            end if
        end if
        %>

    What I am looking for is a way to have it go only by the height I set:

        const Ly = 60 ' max height for thumbnails

    and have the width be whatever falls out of that. So if I had an image of 600 x 120 (w x h) and I used Photoshop to change just the height, it would keep the ratio and make it 300 x 60 (w x h). That's what I am looking to do with the code here; however, I cannot think of a way to do this (or to just leave a wildcard for the width setting). Any help would be great :o) David
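    A minimal sketch of the height-only version (and note, as a hedged aside, that the original If condition compares Width/Lx against Width/Ly, where Height/Ly looks intended). If only the height is fixed, the whole branch can collapse to:

        ' scale by height only; the width follows from the original proportions
        Dim ratio As Double = Ly / originalImg.Height   ' Ly = 60, the fixed height
        newHeight = Ly
        newWidth = CInt(originalImg.Width * ratio)
        ' e.g. a 600 x 120 source gives ratio = 0.5, so the thumbnail is 300 x 60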


  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish the following? I have to come up with a unique ID for each networked client, such that:

    - it (the ID) should persist once the client software is installed on the target computer, and should continue to persist if the software is re-installed on the same computer and same OS installation,
    - it should not change if the hardware configuration is modified in most ways (except changing the motherboard),
    - when a hard drive with the client software installed is cloned to another computer with identical (or as similar as possible) hardware configuration, the client software should be aware of that change.

    A little bit of explanation and some back-story: this is basically the age-old question that also touches on the topic of software copy-protection, as some of the mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :)

    I'm working on client-server software that is supposed to work in a local network. One of the problems I have to solve is to identify each unique client in the network (not so much of a problem), so that I can apply certain attributes to every specific client, and retain and enforce those attributes during the deployment lifetime of a specific client.

    While I was looking for a solution, I was aware of the following:

    - The Windows activation system uses some kind of heavy fingerprinting mechanism that is extremely sensitive to hardware modifications.
    - Disk imaging software copies along all Volume IDs (tied to each partition when formatted) and any custom, uniquely generated IDs (created during installation, during first run, or in any other way that is strictly software in nature and stored in the registry or on the hard drive), so it's very easy to confuse two clones.

    The obvious choice for this kind of problem would be to use the BIOS identifiers (not 100% sure if these are unique across identical motherboard models, though), as that's the only thing I can rely on that isn't duplicated or transferred by cloning and can't be changed (at least not by some user-space program). Everything else fails as either not reliable (MAC cloning, anyone?) or too demanding (in the sense that it's too sensitive to configuration changes).

    Am I missing something obvious here? A sub-question I'd like to ask is: am I doing this correctly, architecture-wise? Perhaps there is a better tool for the task I have to accomplish...

    Another approach I had in mind is something similar to a handshake mechanism, where the server maintains an internal lookup table of connected client IDs (which can be completely software-based and non-unique at any given moment) and tells a client to come up with a different ID during the handshake if a duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with the requirement to tie attributes to a specific client during its lifetime.
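    For the BIOS-identifier route, the SMBIOS values Windows exposes through WMI are the usual candidates; the system UUID in particular survives re-installs and travels with the motherboard rather than the disk, though (hedged) cheap boards occasionally ship blank or duplicate values, so it is worth surveying what the actual fleet reports before committing. The built-in wmic tool makes that survey quick:

        rem quick check of the candidate identifiers on a given box
        wmic csproduct get uuid
        wmic bios get serialnumber
        wmic baseboard get serialnumber

    A common compromise design is to pair one of these with a software-generated ID stored on disk: if the pair ever disagrees with what the server has on record, the install was cloned, which covers the third requirement directly.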


  • Python: How to read huge text file into memory

    - by asmaier
    I'm using Python 2.6 on a Mac Mini with 1GB RAM. I want to read in a huge text file:

        $ ls -l links.csv; file links.csv; tail links.csv
        -rw-r--r--  1 user  user  469904280 30 Nov 22:42 links.csv
        links.csv: ASCII text, with CRLF line terminators
        4757187,59883
        4757187,99822
        4757187,66546
        4757187,638452
        4757187,4627959
        4757187,312826
        4757187,6143
        4757187,6141
        4757187,3081726
        4757187,58197

    So each line in the file consists of a tuple of two comma-separated integer values. I want to read in the whole file and sort it according to the second column. I know that I could do the sorting without reading the whole file into memory, but I thought for a file of 500MB I should still be able to do it in memory, since I have 1GB available. However, when I try to read in the file, Python seems to allocate a lot more memory than is needed by the file on disk. So even with 1GB of RAM I'm not able to read the 500MB file into memory. My Python code for reading the file and printing some information about the memory consumption is:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-
        import sys

        infile=open("links.csv", "r")
        edges=[]
        count=0
        #count the total number of lines in the file
        for line in infile:
            count=count+1
        total=count
        print "Total number of lines: ",total

        infile.seek(0)
        count=0
        for line in infile:
            edge=tuple(map(int,line.strip().split(",")))
            edges.append(edge)
            count=count+1
            # for every million lines print memory consumption
            if count%1000000==0:
                print "Position: ", edge
                print "Read ",float(count)/float(total)*100,"%."
                mem=sys.getsizeof(edges)
                for edge in edges:
                    mem=mem+sys.getsizeof(edge)
                    for node in edge:
                        mem=mem+sys.getsizeof(node)
                print "Memory (Bytes): ", mem

    The output I got was:

        Total number of lines:  30609720
        Position:  (9745, 2994)
        Read  3.26693612356 %.
        Memory (Bytes):  64348736
        Position:  (38857, 103574)
        Read  6.53387224712 %.
        Memory (Bytes):  128816320
        Position:  (83609, 63498)
        Read  9.80080837067 %.
        Memory (Bytes):  192553000
        Position:  (139692, 1078610)
        Read  13.0677444942 %.
        Memory (Bytes):  257873392
        Position:  (205067, 153705)
        Read  16.3346806178 %.
        Memory (Bytes):  320107588
        Position:  (283371, 253064)
        Read  19.6016167413 %.
        Memory (Bytes):  385448716
        Position:  (354601, 377328)
        Read  22.8685528649 %.
        Memory (Bytes):  448629828
        Position:  (441109, 3024112)
        Read  26.1354889885 %.
        Memory (Bytes):  512208580

    Already after reading only 25% of the 500MB file, Python consumes 500MB. So it seems that storing the content of the file as a list of tuples of ints is not very memory-efficient. Is there a better way to do it, so that I can read my 500MB file into my 1GB of memory?
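    A sketch of one common answer, under the assumption that all values fit in 32 bits (the samples above do): store the two columns in flat array('i') buffers instead of a list of tuples. Each row then costs 8 bytes of payload instead of a tuple object plus two boxed int objects:

        # Python 2.6 sketch: columnar storage with the stdlib array module
        from array import array

        col1 = array('i')
        col2 = array('i')
        with open("links.csv") as infile:
            for line in infile:
                a, b = line.split(",")
                col1.append(int(a))
                col2.append(int(b))

        # row order sorted by the second column
        order = sorted(xrange(len(col2)), key=col2.__getitem__)

    The index list produced by sorted() still holds ~30M Python ints, so if that pushes past the limit, numpy (if installing it is an option) does the same job far more compactly: load the two columns as int32 arrays and use argsort() on the second one.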


  • Lots of mysql Sleep processes

    - by user259284
    Hello, I am still having trouble with my MySQL server. It seems that since I optimized it, the tables have grown, and now it is sometimes very slow again. I have no idea how to optimize further. The MySQL server has 48GB of RAM and mysqld is using about 8; most of the tables are InnoDB. The site has about 2000 users online. I also run EXPLAIN on every query, and every one of them is indexed. MySQL process list: http://www.pik.ba/mysqlStanje.php my.cnf:

        # The MySQL database server configuration file.
        #
        # You can copy this to one of:
        # - "/etc/mysql/my.cnf" to set global options,
        # - "~/.my.cnf" to set user-specific options.
        #
        # One can use all long options that the program supports.
        # Run program with --help to get a list of available options and with
        # --print-defaults to see which it would actually understand and use.
        #
        # For explanations see
        # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

        # This will be passed to all mysql clients
        # It has been reported that passwords should be enclosed with ticks/quotes
        # escpecially if they contain "#" chars...
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here is entries for some specific programs
        # The following values assume you have at least 32M ram

        # This was formally known as [safe_mysqld]. Both versions are currently parsed.
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        #
        # * Basic Settings
        #
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        #
        # Instead of skip-networking the default is now to listen only on
        # localhost which is more compatible and is not less secure.
        bind-address = 10.100.27.30
        #
        # * Fine Tuning
        #
        key_buffer = 64M
        key_buffer_size = 512M
        max_allowed_packet = 16M
        thread_stack = 128K
        thread_cache_size = 8
        # This replaces the startup script and checks MyISAM tables if needed
        # the first time they are touched
        myisam-recover = BACKUP
        max_connections = 1000
        table_cache = 1000
        join_buffer_size = 2M
        tmp_table_size = 2G
        max_heap_table_size = 2G
        innodb_buffer_pool_size = 3G
        innodb_additional_mem_pool_size = 128M
        innodb_log_file_size = 100M
        log-slow-queries = /var/log/mysql/slow.log
        sort_buffer_size = 5M
        net_buffer_length = 5M
        read_buffer_size = 2M
        read_rnd_buffer_size = 12M
        thread_concurrency = 10
        ft_min_word_len = 3
        #thread_concurrency = 10
        #
        # * Query Cache Configuration
        #
        query_cache_limit = 1M
        query_cache_size = 512M
        #
        # * Logging and Replication
        #
        # Both location gets rotated by the cronjob.
        # Be aware that this log type is a performance killer.
        #log = /var/log/mysql/mysql.log
        #
        # Error logging goes to syslog. This is a Debian improvement :)
        #
        # Here you can see queries with especially long duration
        #log_slow_queries = /var/log/mysql/mysql-slow.log
        #long_query_time = 2
        #log-queries-not-using-indexes
        #
        # The following can be used as easy to replay backup logs or for replication.
        # note: if you are setting up a replication slave, see README.Debian about
        # other settings you may need to change.
        #server-id = 1
        #log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        #binlog_do_db = include_database_name
        #binlog_ignore_db = include_database_name
        #
        # * BerkeleyDB
        #
        # Using BerkeleyDB is now discouraged as its support will cease in 5.1.12.
        skip-bdb
        #
        # * InnoDB
        #
        # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
        # Read the manual for more InnoDB related options. There are many!
        # You might want to disable InnoDB to shrink the mysqld process by circa 100MB.
        #skip-innodb
        #
        # * Security Features
        #
        # Read the manual, too, if you want chroot!
        # chroot = /var/lib/mysql/
        #
        # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
        #
        # ssl-ca=/etc/mysql/cacert.pem
        # ssl-cert=/etc/mysql/server-cert.pem
        # ssl-key=/etc/mysql/server-key.pem

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition

        [isamchk]
        key_buffer = 16M

        #
        # * NDB Cluster
        #
        # See /usr/share/doc/mysql-server-*/README.Debian for more information.
        #
        # The following configuration is read by the NDB Data Nodes (ndbd processes)
        # not from the NDB Management Nodes (ndb_mgmd processes).
        #
        # [MYSQL_CLUSTER]
        # ndb-connectstring=127.0.0.1
        #
        # * IMPORTANT: Additional settings that can override those from this file!
        # The files must end with '.cnf', otherwise they'll be ignored.
        #
        !includedir /etc/mysql/conf.d/
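    On the symptom in the title, one hedged observation: rows in the Sleep state are usually connections that the application (especially anything using mysql_pconnect) has left open and idle, not running queries. With max_connections = 1000 and MySQL's default wait_timeout of 28800 seconds, idle connections can linger for eight hours; lowering the timeouts reaps them much sooner. Illustrative values:

        # in the [mysqld] section -- illustrative numbers, tune to your traffic
        wait_timeout        = 60   # default is 28800 (8 hours)
        interactive_timeout = 60

    Also worth noting: tmp_table_size and max_heap_table_size of 2G are per-temp-table limits, not a global pool, so a handful of queries spilling to in-memory temp tables at once can grab many gigabytes.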


  • dojo.require() prevents Firefox from rendering the page

    - by Eduard Wirch
    I'm experiencing strange behavior with Firefox and Dojo. I have an HTML page with these lines in the <head> section:

        ...
        <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
        <script type="text/javascript">
            dojo.require("dojo.number");
        </script>
        ...

    Sometimes the page loads normally. But sometimes it won't: Firefox fetches the whole HTML page but does not render it, and I see only a gray window. After some experiments I figured out that the rendering problem has something to do with the load time of the HTML. Firefox starts evaluating the page while loading it; if the page takes too long to load, the above JavaScript is executed BEFORE the HTML finishes loading. If this happens, I get the gray window. Asking Firefox to show me the source of the page displays the correct, complete HTML code. BUT: if I save the page to disk (File - Save Page As...) the HTML code is truncated, and the above part looks like this:

        ...
        <script type="text/javascript" src="dojo.js" djconfig="parseOnLoad: true, locale: 'de'"></script>
        <script type="text/javascript">
            dojo.require("dojo.number");
        </script></head><body></body></html>

    This explains why I get to see a gray area. But why does this code appear there? I assume the require() method of Dojo does something "evil", but I can't figure out what. There is no document.write("</head><body></body></html>"); in the Dojo code; I checked for it. The problem is fixed if I place the dojo.require("dojo.number"); statement in the window.load event:

        <script type="text/javascript">
            window.load = function() {
                dojo.require("dojo.number");
            }
        </script>

    But I'm curious why this happens. Is there a JavaScript function which forces Firefox to stop evaluating the page? Does Dojo do something "bad"? Can anyone explain this behavior to me?

    EDIT: Dojo 1.3.1, no JS errors or warnings.
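    For what it's worth, a hedged explanation plus the Dojo-idiomatic form of the window.load workaround: Dojo 1.x's synchronous loader fetches each required module over a blocking XHR and evaluates it while the parser is still mid-document, which is exactly the kind of stall that can leave the browser with a half-built page if the module load goes slowly or badly. Deferring the require until the loader signals readiness avoids running it during parsing:

        <script type="text/javascript">
            // dojo.addOnLoad is the standard Dojo 1.x hook; it fires once
            // dojo.js and the DOM are ready, replacing the hand-rolled window.load
            dojo.addOnLoad(function() {
                dojo.require("dojo.number");
                // code that uses dojo.number belongs in here too, since the
                // module only becomes available inside this callback
            });
        </script>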


  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.NET web farm, as close to real-time as possible. The objectives of this question are to:

    - Identify the best way to monitor several Windows Server production boxes during short (minutes-long) periods of ridiculous load.
    - Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI, such as CPU, memory and disk paging. I am defining my time constraint as "as soon as possible", with a 120-second delay being the absolute upper limit.
    - Monitor whether any given box is up (with "up" defined as responding to web requests in a reasonable amount of time).

    Here are more details and things I've tried:

    - I am not interested in logging. We have logging solutions in place. I have looked at solutions such as ELMAH, which don't provide much in the way of hardware monitoring and are not visible across an entire web farm.
    - ASP.NET Health Monitoring is too broad, focuses too much on logging, and is not acceptable for deep analysis.
    - We are on Amazon Web Services and we have looked into CloudWatch. It looks great, but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis, but does not help us in real time.
    - Stuff like the JetBrains profiler is good for testing but, again, not helpful during real-time monitoring.
    - The closest out-of-box solution I've seen is Nagios, which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run on and a good deal of manual configuration. I'd prefer not to spend my time mining config files and then be up a creek when it fails in production, since Linux is not my main (or even secondary) environment.

    Are there any out-of-box solutions that I am missing? Obviously a Windows-based solution that is easy to set up is ideal. I don't require many bells and whistles.

    In the absence of an out-of-box solution, it seems easy to write something simple to handle what I need. I've been thinking of a simple client-server setup where the server requests a few WMI metrics from each client over HTTP and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If the client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
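    As a stopgap that needs no agents or config files, the typeperf tool that ships with Windows can already pull perfmon counters (the same data the WMI performance classes expose) from a remote box at whatever interval you like. A hedged sketch; the server name, counter list and output file are illustrative:

        rem sample three counters from one farm node every 15 seconds
        typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" ^
                 "\Memory\Pages/sec" -si 15 -s WEBFARM01 -o farm01.csv -f CSV

    One such command per box, started from a monitoring workstation, gets well inside the 120-second budget; the CSVs can then feed whatever dashboard or query layer sits on top.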


  • Bind9 as a caching resolver fails with mismatch ID on localhost but not external IP

    - by argibbs
    I'm running Ubuntu 12.04 LTS on a machine on my private network. I have bind9 installed (v9.8.1-P1) via aptitude, so it appears to have put all the bits in the right places and the service starts automatically. I plan on adding some zones later, but first I'm just trying to get it working as a caching resolver.

    I installed bind, configured it, and started using it. Initially I thought it was working OK, but then I found some sites weren't being resolved. I've pinned it down to being linked to the size of the result and bind failing over to TCP mode. So: I'm trying to find out why bind fails when I query for domain info and the result is over 512 bytes (causing a truncation and retry over TCP). Specifically, it fails with ID mismatches if I point dig at localhost, but works when I query the machine's own IP (192.168.0.2). This appears to be backwards from the problem most people have when using bind (fails on the external IP, works on localhost).

    If I do dig @localhost google.com (which has a response of <512 bytes) then it works; I get no warnings and plenty of output:

        $ dig @localhost google.com
        ; <<>> DiG 9.8.1-P1 <<>> @localhost google.com
        [snip lots of output]
        ;; Query time: 39 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:08:34 2013
        ;; MSG SIZE  rcvd: 495

    If I do dig @localhost play.google.com (which has a larger response) then I get back something like:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.
        ;; ERROR: ID mismatch: expected ID 3696, got 27130

    This seems to be standard, documented behaviour: when the UDP response is large (here 'large' == over 512 bytes) it falls back to TCP. The ID mismatch is not expected, though. If I do dig @192.168.0.2 play.google.com then I still get the warning about using TCP mode, but it otherwise works:

        $ dig @192.168.0.2 play.google.com
        ;; Truncated, retrying in TCP mode.
        ; <<>> DiG 9.8.1-P1 <<>> @192.168.0.2 play.google.com
        [snip most of the output]
        ;; Query time: 5 msec
        ;; SERVER: 192.168.0.2#53(192.168.0.2)
        ;; WHEN: Thu Oct 17 23:05:55 2013
        ;; MSG SIZE  rcvd: 521

    At the moment I've not set up any zones in my local instance, so it's just acting as a caching resolver. My options config is pretty much unchanged from standard; I've got the following set:

        options {
            directory "/var/cache/bind";
            allow-query { 192.168/16; 127.0.0.1; };
            forwarders { 8.8.8.8; 8.8.4.4; };
            dnssec-validation auto;
            edns-udp-size 4096 ;
            allow-transfer { any; };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    And my /etc/resolv.conf is just:

        nameserver 127.0.0.1
        search .local

    The problem definitely seems linked to the failover to TCP mode: if I do dig +bufsize=4096 @localhost play.google.com then it works; no warning about failover to TCP, no ID mismatch, and a standard-looking result. To be honest, if there were a way to force bind to use a much larger UDP buffer, that'd probably be good enough for me, but all I've been able to find mention of is max-udp-size 4096, and that doesn't change the behaviour in any way. I've also tried setting edns-udp-size 512 in case the problem is some weird EDNS issue with my router (which seems unlikely, since the +bufsize=4096 flag works fine). I've also tried dig +trace @localhost play.google.com; this works: no truncation/TCP warning, and a full result. I've also tried changing the servers used in the forwarders (e.g. to OpenDNS), but that makes no difference.

    There's one last data point: if I repeatedly do dig @localhost play.google.com I don't always get an ID mismatch, but sometimes a REFUSED error. I'm much more likely to get a REFUSED error if I dig the non-localhost IP (192.168.0.2) first:

        $ dig @localhost play.google.com
        ;; Truncated, retrying in TCP mode.

        ; <<>> DiG 9.8.1-P1 <<>> @localhost play.google.com
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 35104
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;play.google.com.       IN      A

        ;; Query time: 4 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Thu Oct 17 23:20:13 2013
        ;; MSG SIZE  rcvd: 33

    Any insights or things to try would be much appreciated.
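    One avenue worth ruling out, offered as a guess rather than a diagnosis: desktop installs of Ubuntu 12.04 ship a NetworkManager-spawned dnsmasq bound to 127.0.0.1, and two resolvers fighting over port 53 would explain answers arriving with the wrong ID (and intermittent REFUSED) on localhost, while the LAN address, where only named listens, behaves normally:

        # anything other than named listening on port 53 is a prime suspect
        sudo netstat -lnp | grep ':53 '

    If dnsmasq shows up there, commenting out dns=dnsmasq in /etc/NetworkManager/NetworkManager.conf and restarting network-manager is the usual way to hand the port back to bind.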


  • Mysql performance problem & Failed DIMM

    - by murdoch
    Hi, I have a dedicated MySQL database server which has been having some performance problems recently. Under normal load the server runs fine, then suddenly, out of the blue, the performance falls off a cliff. The server isn't using the swap file, and there is 12GB of RAM in the server, more than enough for its needs. After contacting my hosting company's support, they discovered that there is a failed 2GB DIMM in the server and have scheduled to replace it tomorrow morning.

    My question is: could a failed DIMM cause the performance problems I am seeing, or is this just coincidence? My worry is that they will replace the RAM tomorrow but the problems will persist, and I will still be lost for explanations, so I am just trying to think ahead. The reason I ask is that there is plenty of RAM in the server, more than required, and simply missing 2GB shouldn't be a problem. So if this failed DIMM is causing these performance problems, then the OS must be trying to access the failed DIMM and slowing down as a result. Does that sound like a credible explanation?

    This is what DELL's omreport program says about the RAM; notice one DIMM is "Critical":

        Memory Information
        Health : Critical

        Memory Operating Mode
        Fail Over State : Inactive
        Memory Operating Mode Configuration : Optimizer

        Attributes of Memory Array(s)
        Attributes : Location
        Memory Array 1 : System Board or Motherboard
        Attributes : Use
        Memory Array 1 : System Memory
        Attributes : Installed Capacity
        Memory Array 1 : 12288 MB
        Attributes : Maximum Capacity
        Memory Array 1 : 196608 MB
        Attributes : Slots Available
        Memory Array 1 : 18
        Attributes : Slots Used
        Memory Array 1 : 6
        Attributes : ECC Type
        Memory Array 1 : Multibit ECC

        Total of Memory Array(s)
        Attributes : Total Installed Capacity
        Value : 12288 MB
        Attributes : Total Installed Capacity Available to the OS
        Value : 12004 MB
        Attributes : Total Maximum Capacity
        Value : 196608 MB

        Details of Memory Array 1
        Index : 0
        Status : Ok
        Connector Name : DIMM_A1
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 1
        Status : Ok
        Connector Name : DIMM_A2
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 2
        Status : Ok
        Connector Name : DIMM_A3
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 3
        Status : Critical
        Connector Name : DIMM_B1
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 4
        Status : Ok
        Connector Name : DIMM_B2
        Type : DDR3-Registered
        Size : 2048 MB

        Index : 5
        Status : Ok
        Connector Name : DIMM_B3
        Type : DDR3-Registered
        Size : 2048 MB

    The command free -m shows this; the server seems to be using more than 10GB of RAM, which would suggest it is trying to use the DIMM:

                     total       used       free     shared    buffers     cached
        Mem:         12004      10766       1238          0        384       4809
        -/+ buffers/cache:       5572       6432
        Swap:         2047          0       2047

    iostat output while the problem is occurring:

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  52.82    0.00   11.01    0.00    0.00   36.17

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              47.00         0.00       576.00          0        576
        sda1              0.00         0.00         0.00          0          0
        sda2              1.00         0.00        32.00          0         32
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             46.00         0.00       544.00          0        544

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  53.12    0.00    7.81    0.00    0.00   39.06

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              49.00         0.00       592.00          0        592
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             49.00         0.00       592.00          0        592

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  56.09    0.00    7.43    0.37    0.00   36.10

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda             232.00         0.00     64520.00          0      64520
        sda1              0.00         0.00         0.00          0          0
        sda2            159.00         0.00     63728.00          0      63728
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             73.00         0.00       792.00          0        792

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  52.18    0.00    9.24    0.06    0.00   38.51

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              49.00         0.00       600.00          0        600
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             49.00         0.00       600.00          0        600

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.82    0.00    8.64    0.00    0.00   36.55

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda             100.00         0.00      2168.00          0       2168
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5            100.00         0.00      2168.00          0       2168

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.78    0.00    6.75    0.00    0.00   38.48

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              84.00         0.00       896.00          0        896
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             84.00         0.00       896.00          0        896

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.34    0.00    7.31    0.00    0.00   38.35

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              81.00         0.00       840.00          0        840
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             81.00         0.00       840.00          0        840

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  55.18    0.00    5.81    0.44    0.00   38.58

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda             317.00         0.00    105632.00          0     105632
        sda1              0.00         0.00         0.00          0          0
        sda2            224.00         0.00    104672.00          0     104672
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             93.00         0.00       960.00          0        960

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  55.38    0.00    7.63    0.00    0.00   36.98

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              74.00         0.00       800.00          0        800
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             74.00         0.00       800.00          0        800

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  56.43    0.00    7.80    0.00    0.00   35.77

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              72.00         0.00       784.00          0        784
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             72.00         0.00       784.00          0        784

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  54.87    0.00    6.49    0.00    0.00   38.64

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              80.20         0.00       855.45          0        864
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             80.20         0.00       855.45          0        864

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  57.22    0.00    5.69    0.00    0.00   37.09

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              33.00         0.00       432.00          0        432
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             33.00         0.00       432.00          0        432

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  56.03    0.00    7.93    0.00    0.00   36.04

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              41.00         0.00       560.00          0        560
        sda1              0.00         0.00         0.00          0          0
        sda2              2.00         0.00        88.00          0         88
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             39.00         0.00       472.00          0        472

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  55.78    0.00    5.13    0.00    0.00   39.09

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              29.00         0.00       392.00          0        392
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             29.00         0.00       392.00          0        392

        avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                  53.68    0.00    8.30    0.06    0.00   37.95

        Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
        sda              78.00         0.00      4280.00          0       4280
        sda1              0.00         0.00         0.00          0          0
        sda2              0.00         0.00         0.00          0          0
        sda3              0.00         0.00         0.00          0          0
        sda4              0.00         0.00         0.00          0          0
        sda5             78.00         0.00      4280.00          0       4280
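    On the direct question: a failing ECC DIMM plausibly can do this. A high rate of correctable ECC errors stalls memory accesses and burns time in the error-handling path, which would match the "falls off a cliff" pattern; that is a hypothesis to verify, though, not a certainty. On Linux the error counters are often visible without Dell's tools via EDAC, if the driver is loaded:

        # corrected/uncorrected ECC event counts per memory controller
        grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count 2>/dev/null
        dmesg | grep -iE 'edac|ecc|machine check'

    Rapidly climbing counts before the swap, and flat counts after, would confirm the DIMM was the culprit.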


  • I'm trying to run some PHP scripts as CLI instead of over HTTP. How do I make them play nice?

    - by gnfti
Hi everyone. I'm using some PHP scripts from FeedForAll to join RSS feeds together (RSSmesh) and display them as HTML (RSS2HTML). Because I intend to run these scripts fairly intensively and don't want the resulting HTTP requests and bandwidth to count towards my hosting quota, I am in the process of moving to running them on the web host's server from an umbrella PHP "batch" script, calling that script via cron (this is a Linux server, by the way). Here's a (working) sample request over HTTP:

http://www.mydomain.com/a/rss2htmlcore/rss2html2.php?XMLFILE=http://www.mydomain.com/a/myapp/xmlcache/feed.xml&TEMPLATE=template.html

This will produce the desired HTML output. An example of how I want this to work on the command line:

/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html

This is with the correct shebang line added to "rss2html2-cli.php". I could just as well specify the executable ("/usr/local/bin/php") in the request; I doubt it makes a difference, because I am able to run another script (that I wrote myself) either way without problems. Now, RSS2HTML and RSSmesh are different in that, for starters, they include secondary files (for example, both include an XML parser script), and I suspect that this is where I am getting in a bit over my head. Right now I'm calling exec() from the "umbrella" batch script, like so:

exec("/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php /srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml /srv/customers/mycustomer#/mydomain.com/www/a/template.html", $output)

But no output is being produced. What's the best way to go about this, and what "gotchas" should I keep in mind? Is exec() the right way to approach this? It works fine for the other (simple) script, but that one writes its own output. For this I want to get the output and write it to a file from within the umbrella script if possible. I've also tried output buffering, but to no avail. Do I need to pay attention to anything specific with regard to the includes? Right now they're specified in the scripts as include_once("FeedForAll_XMLParser.inc.php"); and the specified files are indeed in the same folder. Further info:
- This is a Linux server.
- I have no direct access to the shell, so I can't test things directly on a command line; everything is via crontab.
- I will admit that support for the FeedForAll scripts leaves a lot to be desired, but I'd like to keep using their scripts if at all possible, if only because I know them and have been using them for a while. I have looked into SimplePie, but the FFA scripts do some things that I've seen no obvious solutions for with SimplePie, like limiting the number of items per individual feed (RSSmesh) or limiting the description length (RSS2HTML).
- Yahoo! Pipes is out; they cache their data for too long for my application.
Should you want to take a look at the code, here are the scripts as txt files. RSS2HTML2 and RSSmesh are the FeedForAll scripts, and FeedForAll_XMLParser... is the included parser. Note that I have not yet amended these to handle $argv etc.; I have, however, in "scraper-universal-rss-cli", which works fine with CLI. If anyone has any thoughts to share on this it would be very much appreciated. Thank you in advance.
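As a side note, the wrapper pattern described here (run a CLI script, capture what it prints, write that to a file) is much easier to debug when stderr is captured too, since a bad shebang or a failing include reports there rather than on stdout. A sketch of the same flow in Python, purely to illustrate the pattern; the paths are the placeholder ones from the question, and output.html is a hypothetical target file:

import subprocess

# Run the CLI script and capture both stdout and stderr.
result = subprocess.run(
    ["/usr/local/bin/php",
     "/srv/customers/mycustomer#/mydomain.com/www/a/rss2htmlcore/rss2html2-cli.php",
     "/srv/customers/mycustomer#/mydomain.com/www/a/myapp/xmlcache/feed.xml",
     "/srv/customers/mycustomer#/mydomain.com/www/a/template.html"],
    capture_output=True, text=True)

with open("/srv/customers/mycustomer#/mydomain.com/www/a/myapp/output.html", "w") as out:
    out.write(result.stdout)  # whatever the script printed

if result.returncode != 0 or not result.stdout:
    # A bad shebang or a failing include_once reports here, not on stdout.
    print("script failed:", result.stderr)

The PHP equivalent of the main gotcha: exec() only captures stdout, so appending 2>&1 to the command string usually reveals why nothing appeared.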


• The weirdest CSS issue I have ever seen

    - by jasondavis
Below I have two CSS rules for a div. Please note that the first one does not even have a div on the page, or ANYTHING with its name on it; I can also rename it to anything. Now here is the weird part. The second rule below has a width of 520px, but the only way the div on the page will be 520px is if I leave the rule above it in place: the first rule, with no matching div on the page, HAS to be present for the second rule to work. At first I thought it had to be a browser caching issue, so I cleared my cache, and that did nothing. I then tried two other browsers, and they all had the same result. I add the first bit of code into the page and the second bit works; I take the first bit away and the second bit does not work. Am I overlooking something here?

.commentwrappsdfsde2{width:950px;margin:0 0;padding:0;}
.commentwrapper{width:520px;margin-right:auto;margin-left:auto;}

Here is the whole page code:

<style>
<!-- css for user photos-->
div.imageSub img.female { border-top: 1px solid #FF3399; }
div.imageSub img.male { border-top: 1px solid #3399FF; }
div.imageSub img { z-index: 1; margin: 0; display: block; }
div.imageSub div { position: relative; margin: -15px 0 0; padding: 5px; height: 5px; line-height: 4px; text-align: center; overflow: hidden; font-family:Trebuchet MS,Helvetica,sans-serif; font-size:12px; font-weight: bold; }
div.imageSub div.blackbg { z-index: 2; background-color: #000; -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(Opacity=70)"; filter: alpha(opacity=70); opacity: 0.5; }
div.imageSub div.label { z-index: 3; color: white; }
<!-- end photo block-->

/* Comments */
.commentwrappsdfsde2{width:950px;margin:0 0;padding:0;}
.commentwrapper{width:520px;margin-right:auto;margin-left:auto;}
#comments ol.commentlist li { list-style-type:none; padding:20px; background:none; }
#comments ol.commentlist li.thread-even { background:#f6f6f6; border-top:1px solid #e3e3e3; border-bottom:1px solid #e3e3e3; }
#comments ul.children li ul.children,#comments .commentlist{padding:0;}
</style>

<div class="commentwrapper">
  <div id="comments">
    <ol class="commentlist">
      <li class="comment thread-even " >
        Comment 1
      </li>
    </ol>
  </div>
</div>


• jQuery $.getJSON only works once in Internet Explorer. Help please!

    - by JasperS
I have a PHP function which inserts a search bar into each page on a website. The site checks to see if the user has JavaScript enabled, and if they do, it inserts some jQuery AJAX code to link the select boxes (instead of using its fallback onchange="form.submit()"). $.getJSON works perfectly for me in other browsers, except in IE: if I do a full page refresh (Ctrl+F5) in IE, my AJAX works flawlessly until I navigate to a new page (or the same page with $PHP_SELF), either by submitting the form or clicking a link. The jQuery onchange function fires, but then jQuery throws an error:

Webpage error details
Message: Object doesn't support this property or method
Line: 123
Char: 183
Code: 0
URI: http://~#UNABLE~TO~DISCLOSE#~/jquery-1.4.2.min.js

It seems like the jQuery function $.getJSON() is gone??? This seems to be some kind of caching issue, as it happens on the second page load, but I think I've got all the caching prevention in place anyway. Here's a snippet of the code that adds the jQuery functions:

if (isset($_SESSION['NO_SCRIPT']) == true && $_SESSION['NO_SCRIPT'] == false) {
    $html .= '<script type="text/javascript" charset="utf-8">';
    $html .= '$.ajaxSetup({ cache: false });';
    $html .= '$.ajaxSetup({"error":function(XMLHttpRequest,textStatus, errorThrown) { alert(textStatus); alert(errorThrown); alert(XMLHttpRequest.responseText); }});';
    $html .= '</script>';
    $html .= '<script type="text/javascript" charset="utf-8">';
    $html .= '$(function(){ $("select#searchtype").change(function() { ';
    $html .= 'alert("change fired!"); ';
    $html .= '$.getJSON("ajaxgetcategories.php", {id: $(this).val()}, function(j) { ';
    $html .= 'alert("ajax returned!"); ';
    $html .= 'var options = \'\'; ';
    $html .= 'options += \'<option value="0" >--\' + j[0].all + \'--</option>\'; ';
    $html .= 'for (var i = 0; i < j.length; i++) { options += \'<option value="\' + j[i].id + \'">\' + j[i].name + \'</option>\'; } ';
    $html .= '$("select#searchcategory").html(options); }) }) }) ';
    $html .= '</script> ';
    $html .= '<script type="text/javascript" charset="utf-8"> ';
    $html .= '$(function(){ $("select#searchregion").change(function() { ';
    $html .= 'alert("change fired!"); ';
    $html .= '$.getJSON("ajaxgetcountries.php", {id: $(this).val()}, function(j) { ';
    $html .= 'alert("ajax returned!"); ';
    $html .= 'var options = \'\'; ';
    $html .= 'options += \'<option value="0" >--\' + j[0].all + \'--</option>\'; ';
    $html .= 'for (var i = 0; i < j.length; i++) { options += \'<option value="\' + j[i].id + \'">\' + j[i].name + \'</option>\'; } ';
    $html .= '$("select#searchcountry").html(options); }) }) }) ';
    $html .= '</script> ';
};
return $html;

Remember, this is part of a PHP function that inserts a script into the HTML, and sorry if it looks a bit messy; I'm new to PHP and JavaScript, and I'm a bit untidy too :) Please also remember that this works perfectly in IE on the first visit, but after any navigation I get the error. Thanks guys.


• Overflow exception while performing parallel factorization using the .NET Task Parallel Library (TPL)

    - by Aviad P.
Hello, I'm trying to write a not-so-smart factorization program and trying to do it in parallel using TPL. However, after about 15 minutes of running on a Core 2 Duo machine, I am getting an AggregateException with an OverflowException inside it. All the entries in the stack trace are part of the .NET Framework; the overflow does not come from my code. Any help would be appreciated in figuring out why this happens. Here's the commented code; hopefully it's simple enough to understand:

class Program
{
    static List<Tuple<BigInteger, int>> factors = new List<Tuple<BigInteger, int>>();

    static void Main(string[] args)
    {
        BigInteger theNumber = BigInteger.Parse(
            "653872562986528347561038675107510176501827650178351386656875178" +
            "568165317809518359617865178659815012571026531984659218451608845" +
            "719856107834513527");
        Stopwatch sw = new Stopwatch();
        bool isComposite = false;
        sw.Start();
        do
        {
            /* Print out the number we are currently working on. */
            Console.WriteLine(theNumber);

            /* Find a factor, stop when at least one is found (using the Any operator). */
            isComposite = Range(theNumber)
                .AsParallel()
                .Any(x => CheckAndStoreFactor(theNumber, x));

            /* Of the factors found, take the one with the lowest base. */
            var factor = factors.OrderBy(x => x.Item1).First();
            Console.WriteLine(factor);

            /* Divide the number by the factor. */
            theNumber = BigInteger.Divide(
                theNumber,
                BigInteger.Pow(factor.Item1, factor.Item2));

            /* Clear the discovered factors cache, and keep looking. */
            factors.Clear();
        } while (isComposite);
        sw.Stop();
        Console.WriteLine(isComposite + " " + sw.Elapsed);
    }

    static IEnumerable<BigInteger> Range(BigInteger squareOfTarget)
    {
        BigInteger two = BigInteger.Parse("2");
        BigInteger element = BigInteger.Parse("3");
        while (element * element < squareOfTarget)
        {
            yield return element;
            element = BigInteger.Add(element, two);
        }
    }

    static bool CheckAndStoreFactor(BigInteger candidate, BigInteger factor)
    {
        BigInteger remainder, dividend = candidate;
        int exponent = 0;
        do
        {
            dividend = BigInteger.DivRem(dividend, factor, out remainder);
            if (remainder.IsZero)
            {
                exponent++;
            }
        } while (remainder.IsZero);

        if (exponent > 0)
        {
            lock (factors)
            {
                factors.Add(Tuple.Create(factor, exponent));
            }
        }
        return exponent > 0;
    }
}

Here's the exception thrown:

Unhandled Exception: System.AggregateException: One or more errors occurred. ---> System.OverflowException: Arithmetic operation resulted in an overflow.
   at System.Linq.Parallel.PartitionedDataSource`1.ContiguousChunkLazyEnumerator.MoveNext(T& currentElement, Int32& currentKey)
   at System.Linq.Parallel.AnyAllSearchOperator`1.AnyAllSearchOperatorEnumerator`1.MoveNext(Boolean& currentElement, Int32& currentKey)
   at System.Linq.Parallel.StopAndGoSpoolingTask`2.SpoolingWork()
   at System.Linq.Parallel.SpoolingTaskBase.Work()
   at System.Linq.Parallel.QueryTask.BaseWork(Object unused)
   at System.Linq.Parallel.QueryTask.<.cctor>b__0(Object o)
   at System.Threading.Tasks.Task.InnerInvoke()
   at System.Threading.Tasks.Task.Execute()
   --- End of inner exception stack trace ---
   at System.Linq.Parallel.QueryTaskGroupState.QueryEnd(Boolean userInitiatedDispose)
   at System.Linq.Parallel.SpoolingTask.SpoolStopAndGo[TInputOutput,TIgnoreKey](QueryTaskGroupState groupState, PartitionedStream`2 partitions, SynchronousChannel`1[] channels, TaskScheduler taskScheduler)
   at System.Linq.Parallel.DefaultMergeHelper`2.System.Linq.Parallel.IMergeHelper<TInputOutput>.Execute()
   at System.Linq.Parallel.MergeExecutor`1.Execute[TKey](PartitionedStream`2 partitions, Boolean ignoreOutput, ParallelMergeOptions options, TaskScheduler taskScheduler, Boolean isOrdered, CancellationState cancellationState, Int32 queryId)
   at System.Linq.Parallel.PartitionedStreamMerger`1.Receive[TKey](PartitionedStream`2 partitionedStream)
   at System.Linq.Parallel.AnyAllSearchOperator`1.WrapPartitionedStream[TKey](PartitionedStream`2 inputStream, IPartitionedStreamRecipient`1 recipient, Boolean preferStriping, QuerySettings settings)
   at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.ChildResultsRecipient.Receive[TKey](PartitionedStream`2 inputStream)
   at System.Linq.Parallel.ScanQueryOperator`1.ScanEnumerableQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
   at System.Linq.Parallel.UnaryQueryOperator`2.UnaryQueryOperatorResults.GivePartitionedStream(IPartitionedStreamRecipient`1 recipient)
   at System.Linq.Parallel.QueryOperator`1.GetOpenedEnumerator(Nullable`1 mergeOptions, Boolean suppressOrder, Boolean forEffect, QuerySettings querySettings)
   at System.Linq.Parallel.QueryOpeningEnumerator`1.OpenQuery()
   at System.Linq.Parallel.QueryOpeningEnumerator`1.MoveNext()
   at System.Linq.Parallel.AnyAllSearchOperator`1.Aggregate()
   at System.Linq.ParallelEnumerable.Any[TSource](ParallelQuery`1 source, Func`2 predicate)
   at PFact.Program.Main(String[] args) in d:\myprojects\PFact\PFact\Program.cs:line 34

Any help would be appreciated. Thanks!
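The Int32& currentKey in the innermost frame is the suspicious part: PLINQ's chunk partitioner appears to track element positions with a 32-bit key, so an enumerable that yields more than about two billion elements (which this Range easily does for a 150-digit number) could plausibly overflow it. For comparison, here is the same check-and-store trial division sketched in Python, where arbitrary-precision integers make the overflow question moot; this illustrates the algorithm, not a fix for the PLINQ issue:

def check_and_store_factor(candidate, factor):
    # Divide out `factor` as many times as it goes evenly; return the exponent.
    exponent = 0
    while candidate % factor == 0:
        candidate //= factor
        exponent += 1
    return exponent

def factorize(n):
    # Not-so-smart trial division, mirroring the structure of the C# program.
    factors = []
    f = 2
    while f * f <= n:
        e = check_and_store_factor(n, f)
        if e:
            factors.append((f, e))
            n //= f ** e
        f += 1 if f == 2 else 2  # after 2, test odd numbers only
    if n > 1:
        factors.append((n, 1))
    return factors

print(factorize(600851475143))  # small demo input; the 150-digit number above would run for ages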


• Why is an i7-860 two times slower than an E3-1220?

    - by javapowered
I have financial trading software. It decodes FAST/FIX messages. I'm running the same binaries on two different machines on a very similar set of data. The software receives "messages" and decodes them; the general rule is that a longer message takes more time to decode.

i7-860, Windows 7:

Debug 18:23:48.8047325 count=51 decoding take microseconds = 300
Debug 18:23:49.7287854 count=53 decoding take microseconds = 349
Debug 18:23:49.7397860 count=110 decoding take microseconds = 516
Debug 18:23:49.7497866 count=92 decoding take microseconds = 512
Debug 18:23:49.7597872 count=49 decoding take microseconds = 267
Debug 18:23:49.7717878 count=194 decoding take microseconds = 823
Debug 18:23:49.7797883 count=49 decoding take microseconds = 296
Debug 18:23:49.7997894 count=50 decoding take microseconds = 299
Debug 18:23:50.7328428 count=101 decoding take microseconds = 583
Debug 18:23:50.7418433 count=42 decoding take microseconds = 281
Debug 18:23:50.7538440 count=151 decoding take microseconds = 764
Debug 18:23:50.7618445 count=57 decoding take microseconds = 279
Debug 18:23:50.7738452 count=122 decoding take microseconds = 712
Debug 18:23:50.8028468 count=52 decoding take microseconds = 281
Debug 18:23:51.7389004 count=137 decoding take microseconds = 696
Debug 18:23:51.7499010 count=100 decoding take microseconds = 485
Debug 18:23:51.7689021 count=185 decoding take microseconds = 872
Debug 18:23:51.8079043 count=49 decoding take microseconds = 315
Debug 18:23:52.7349573 count=90 decoding take microseconds = 532
Debug 18:23:52.7439578 count=53 decoding take microseconds = 277
Debug 18:23:52.7539584 count=134 decoding take microseconds = 623
Debug 18:23:52.7629589 count=47 decoding take microseconds = 294
Debug 18:23:52.7749596 count=198 decoding take microseconds = 868
Debug 18:23:52.8039613 count=52 decoding take microseconds = 291
Debug 18:23:53.7400148 count=132 decoding take microseconds = 666
Debug 18:23:53.7480153 count=81 decoding take microseconds = 430
Debug 18:23:53.7570158 count=49 decoding take microseconds = 301
Debug 18:23:53.7710166 count=156 decoding take microseconds = 752
Debug 18:23:53.7770169 count=45 decoding take microseconds = 270
Debug 18:23:54.7350717 count=108 decoding take microseconds = 578
Debug 18:23:54.7430722 count=52 decoding take microseconds = 286
Debug 18:23:54.7540728 count=138 decoding take microseconds = 567
Debug 18:23:54.7760741 count=160 decoding take microseconds = 753
Debug 18:23:54.8030756 count=53 decoding take microseconds = 292
Debug 18:23:55.7411293 count=110 decoding take microseconds = 629
Debug 18:23:55.7481297 count=48 decoding take microseconds = 294
Debug 18:23:55.7591303 count=84 decoding take microseconds = 386
Debug 18:23:55.7701309 count=90 decoding take microseconds = 484
Debug 18:23:55.7801315 count=120 decoding take microseconds = 527
Debug 18:23:55.8101332 count=53 decoding take microseconds = 290
Debug 18:23:56.7341861 count=121 decoding take microseconds = 667
Debug 18:23:56.7421865 count=53 decoding take microseconds = 293
Debug 18:23:56.7531872 count=127 decoding take microseconds = 586
Debug 18:23:56.7621877 count=58 decoding take microseconds = 306
Debug 18:23:56.7751884 count=138 decoding take microseconds = 649
Debug 18:23:56.8021900 count=53 decoding take microseconds = 288
Debug 18:23:57.7392436 count=139 decoding take microseconds = 699
Debug 18:23:57.7502442 count=121 decoding take microseconds = 548
Debug 18:23:57.7582446 count=61 decoding take microseconds = 301
Debug 18:23:57.7692453 count=98 decoding take microseconds = 500
Debug 18:23:57.7792458 count=94 decoding take microseconds = 460
Debug 18:23:57.8092476 count=41 decoding take microseconds = 274

Xeon E3-1220, Windows Server 2008 R2 Foundation:

Debug 18:28:57.5087967 count=117 decoding take microseconds = 255
Debug 18:28:57.5087967 count=85 decoding take microseconds = 187
Debug 18:28:57.5087967 count=55 decoding take microseconds = 155
Debug 18:28:57.5243967 count=86 decoding take microseconds = 189
Debug 18:28:57.5243967 count=53 decoding take microseconds = 139
Debug 18:28:57.5243967 count=52 decoding take microseconds = 153
Debug 18:28:57.5243967 count=55 decoding take microseconds = 146
Debug 18:28:57.5243967 count=103 decoding take microseconds = 239
Debug 18:28:57.5243967 count=83 decoding take microseconds = 182
Debug 18:28:57.5243967 count=85 decoding take microseconds = 180
Debug 18:28:57.5243967 count=80 decoding take microseconds = 202
Debug 18:28:57.5243967 count=58 decoding take microseconds = 135
Debug 18:28:57.5243967 count=55 decoding take microseconds = 140
Debug 18:28:57.5243967 count=81 decoding take microseconds = 183
Debug 18:28:57.5243967 count=74 decoding take microseconds = 172
Debug 18:28:57.5243967 count=80 decoding take microseconds = 174
Debug 18:28:57.5243967 count=88 decoding take microseconds = 175
Debug 18:28:57.5243967 count=55 decoding take microseconds = 131
Debug 18:28:57.5243967 count=80 decoding take microseconds = 182
Debug 18:28:57.5243967 count=80 decoding take microseconds = 183
Debug 18:28:57.5243967 count=101 decoding take microseconds = 231
Debug 18:28:57.5243967 count=58 decoding take microseconds = 134
Debug 18:28:57.5243967 count=57 decoding take microseconds = 126
Debug 18:28:57.5243967 count=57 decoding take microseconds = 134
Debug 18:28:57.5399967 count=115 decoding take microseconds = 234
Debug 18:28:57.5399967 count=106 decoding take microseconds = 225
Debug 18:28:57.5399967 count=108 decoding take microseconds = 241
Debug 18:28:57.5399967 count=84 decoding take microseconds = 177
Debug 18:28:57.5399967 count=54 decoding take microseconds = 141
Debug 18:28:57.5399967 count=84 decoding take microseconds = 186
Debug 18:28:57.5399967 count=82 decoding take microseconds = 184
Debug 18:28:57.5399967 count=82 decoding take microseconds = 179
Debug 18:28:57.5399967 count=56 decoding take microseconds = 133
Debug 18:28:57.5399967 count=57 decoding take microseconds = 127
Debug 18:28:57.5399967 count=82 decoding take microseconds = 185
Debug 18:28:57.5399967 count=76 decoding take microseconds = 178
Debug 18:28:57.5399967 count=82 decoding take microseconds = 184
Debug 18:28:57.5399967 count=54 decoding take microseconds = 139
Debug 18:28:57.5399967 count=54 decoding take microseconds = 137
Debug 18:28:57.5399967 count=81 decoding take microseconds = 184
Debug 18:28:57.5399967 count=136 decoding take microseconds = 275
Debug 18:28:57.5399967 count=55 decoding take microseconds = 138
Debug 18:28:57.5555968 count=52 decoding take microseconds = 140
Debug 18:28:57.5555968 count=53 decoding take microseconds = 136
Debug 18:28:57.5555968 count=54 decoding take microseconds = 139
Debug 18:28:57.5555968 count=55 decoding take microseconds = 138
Debug 18:28:57.5555968 count=57 decoding take microseconds = 134
Debug 18:28:57.5555968 count=53 decoding take microseconds = 136
Debug 18:28:57.5555968 count=80 decoding take microseconds = 174
Debug 18:28:57.5555968 count=74 decoding take microseconds = 175
Debug 18:28:57.5555968 count=57 decoding take microseconds = 133
Debug 18:28:57.5555968 count=57 decoding take microseconds = 149
Debug 18:28:57.5555968 count=100 decoding take microseconds = 262
Debug 18:28:57.5555968 count=56 decoding take microseconds = 156
Debug 18:28:57.5555968 count=55 decoding take microseconds = 165

From this test I see that the E3-1220 is almost two times faster than the i7-860. Is that possible? In processor ratings these two are about the same. Could this be because of cache or something? And if so, which processor should I buy to decode messages two times faster?
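Before concluding anything from raw log lines, it may be worth ruling out the measurement itself: both logs time individual messages in the hundreds of microseconds, a range where timer overhead, warm-up and background load dominate, and differences in OS power plans or Turbo Boost behaviour between a desktop Windows 7 box and a server can plausibly account for a 2x swing. A sketch of a steadier comparison harness, in Python purely to illustrate the method; decode is a hypothetical stand-in for the real decoder:

import time

def decode(msg):
    # Stand-in for the real FAST/FIX decoder (hypothetical).
    return sum(msg)

def measure(messages, repeats=200):
    # Warm up caches and branch predictors before timing anything.
    for m in messages:
        decode(m)
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        for m in messages:
            decode(m)
        samples.append((time.perf_counter() - t0) / len(messages))
    samples.sort()
    # Median and a high percentile say more than individual log lines.
    return samples[len(samples) // 2], samples[int(len(samples) * 0.99)]

messages = [bytes(range(50 + n % 150)) for n in range(500)]
median, p99 = measure(messages)
print("per-message median %.2f us, p99 %.2f us" % (median * 1e6, p99 * 1e6))

If the 2x gap survives this kind of repeated, warmed-up measurement on both machines, then a genuine hardware difference (cache size, memory latency, turbo behaviour) becomes the likelier explanation.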


  • Dig returns "status: REFUSED" for external queries?

    - by Mikey
I can't seem to work out why my DNS isn't working properly. If I run dig from the nameserver itself, it functions correctly:

# dig ungl.org
; <<>> DiG 9.5.1-P2.1 <<>> ungl.org
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24585
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1
;; QUESTION SECTION:
;ungl.org. IN A
;; ANSWER SECTION:
ungl.org. 38400 IN A 188.165.34.72
;; AUTHORITY SECTION:
ungl.org. 38400 IN NS ns.kimsufi.com.
ungl.org. 38400 IN NS r29901.ovh.net.
;; ADDITIONAL SECTION:
ns.kimsufi.com. 85529 IN A 213.186.33.199
;; Query time: 1 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Sat Mar 13 01:04:06 2010
;; MSG SIZE rcvd: 114

but when I run it from another server in the same datacenter I receive:

# dig @87.98.167.208 ungl.org
; <<>> DiG 9.5.1-P2.1 <<>> @87.98.167.208 ungl.org
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 18787
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;ungl.org. IN A
;; Query time: 1 msec
;; SERVER: 87.98.167.208#53(87.98.167.208)
;; WHEN: Sat Mar 13 01:01:35 2010
;; MSG SIZE rcvd: 26

My zone file for this domain is:

$ttl 38400
ungl.org. IN SOA r29901.ovh.net. mikey.aol.com. (
    201003121
    10800
    3600
    604800
    38400 )
ungl.org. IN NS r29901.ovh.net.
ungl.org. IN NS ns.kimsufi.com.
ungl.org. IN A 188.165.34.72
localhost. IN A 127.0.0.1
www IN A 188.165.34.72

and named.conf.options is the default:

options {
    directory "/var/cache/bind";
    // If there is a firewall between you and nameservers you want
    // to talk to, you may need to fix the firewall to allow multiple
    // ports to talk. See http://www.kb.cert.org/vuls/id/800113
    // If your ISP provided one or more IP addresses for stable
    // nameservers, you probably want to use them as forwarders.
    // Uncomment the following block, and insert the addresses replacing
    // the all-0's placeholder.
    // forwarders {
    //     0.0.0.0;
    // };
    auth-nxdomain no; # conform to RFC1035
    listen-on-v6 { ::1; };
    listen-on { 127.0.0.1; };
    allow-recursion { 127.0.0.1; };
};

named.conf.local:

// Do any local configuration here
// Consider adding the 1918 zones here, if they are not used in your
// organization
// include "/etc/bind/zones.rfc1918";
zone "eugl.eu" {
    type master;
    file "/etc/bind/eugl.eu";
    notify no;
};
zone "ungl.org" {
    type master;
    file "/etc/bind/ungl.org";
    notify no;
};

The server is running Ubuntu 9.10 and BIND 9. If anyone can shed some light on this for me, it'd make me very happy! Thanks.
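One way to narrow this down is to separate the ACL question (allow-recursion covering only 127.0.0.1) from the zone question: an authoritative answer for ungl.org should not need recursion at all, so sending the same query with and without the recursion-desired bit and comparing rcodes is informative. A quick sketch with dnspython; the server IP is the one from the question:

import dns.flags
import dns.message
import dns.query
import dns.rcode

server = "87.98.167.208"  # IP from the question
for recurse in (True, False):
    q = dns.message.make_query("ungl.org", "A")
    if not recurse:
        q.flags &= ~dns.flags.RD  # clear the recursion-desired bit
    r = dns.query.udp(q, server, timeout=3)
    print("RD=%s -> %s, %d answer section(s)" % (recurse, dns.rcode.to_text(r.rcode()), len(r.answer)))

If the iterative query (RD cleared) comes back NOERROR with an answer while the recursive one is REFUSED, the zone itself is loaded and served fine, and only the recursion ACL needs widening; if both are REFUSED, the server is probably not treating itself as authoritative for the zone from that interface.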


  • Modeling distribution of performance measurements

    - by peterchen
How would you mathematically model the distribution of repeated real-life performance measurements? "Real life" meaning you are not just looping over the code in question, but it is a short snippet within a large application running in a typical user scenario. My experience shows that you usually have a peak around the average execution time that can be modeled adequately with a Gaussian distribution. In addition, there's a "long tail" containing outliers, often at a multiple of the average time. (The behavior is understandable considering the factors contributing to a first-execution penalty.) My goal is to model aggregate values that reasonably reflect this and can be calculated from aggregate values (as for the Gaussian, where you calculate mu and sigma from N, the sum of values and the sum of squares). In other words, the number of repetitions is unlimited, but memory and calculation requirements should be minimized. A normal Gaussian distribution can't model the long tail appropriately, and the average will be biased strongly by even a very small percentage of outliers. I am looking for ideas, especially if this has been attempted/analysed before. I've checked various distribution models, and I think I could work out something, but my statistics is rusty and I might end up with an overblown solution. Oh, a complete shrink-wrapped solution would be fine, too ;)

Other aspects / ideas:
- Sometimes you get "two humps" distributions, which would be acceptable in my scenario with a single mu/sigma covering both, but ideally would be identified separately.
- Extrapolating this, another approach would be a "floating probability density calculation" that uses only a limited buffer and adjusts automatically to the range (due to the long tail, bins may not be spaced evenly); I haven't found anything like it, but with some assumptions about the distribution it should be possible in principle.
- Why (since it was asked): for a complex process we need to make guarantees such as "only 0.1% of runs exceed a limit of 3 seconds, and the average processing time is 2.8 seconds". The performance of an isolated piece of code can be very different from a normal run-time environment involving varying levels of disk and network access, background services, scheduled events that occur within a day, etc. This can be solved trivially by accumulating all data; to accumulate it in production, however, the data produced has to be limited. For analysis of isolated pieces of code, a Gaussian deviation plus a first-run penalty is OK. That doesn't work anymore for the distributions found above.

[edit] I've already got very good answers (and finally, maybe, some time to work on this). I'm starting a bounty to look for more input / ideas.
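Given the goal of computing everything from a few running sums, one distribution worth trying for this shape is the log-normal: it has the Gaussian-like peak plus a long right tail, and it can be fitted from exactly the kind of aggregates described (N, sum of logs, sum of squared logs). A minimal sketch, under the assumption that a log-normal actually fits the data; that is worth validating before relying on it for guarantees:

import math
from statistics import NormalDist

class LogNormalAccumulator:
    # Streaming fit: O(1) memory, same spirit as (N, sum, sum of squares)
    # for a Gaussian, but accumulated in log space.
    def __init__(self):
        self.n = 0
        self.sum_log = 0.0
        self.sum_log_sq = 0.0

    def add(self, x):
        lx = math.log(x)
        self.n += 1
        self.sum_log += lx
        self.sum_log_sq += lx * lx

    def params(self):
        mu = self.sum_log / self.n
        var = max(self.sum_log_sq / self.n - mu * mu, 0.0)
        return mu, math.sqrt(var)

    def quantile(self, p):
        # e.g. p=0.999 for "only 0.1% of runs exceed this limit"
        mu, sigma = self.params()
        return math.exp(NormalDist(mu, sigma).inv_cdf(p))

acc = LogNormalAccumulator()
for t in (2.7, 2.8, 2.9, 2.8, 3.1, 9.5):  # one long-tail outlier included
    acc.add(t)
print("mu/sigma of log-values:", acc.params())
print("estimated 99.9th percentile:", acc.quantile(0.999))

Because the outlier enters only through its logarithm, it biases the fitted mean far less than it would a Gaussian fitted to the raw values. The "two humps" case, though, would still need a mixture of two such fits.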


  • Windows DNS Server 2008 R2 fallaciously returns SERVFAIL

    - by Easter Sunshine
I have a Windows 2008 R2 domain controller which is also a DNS server. When resolving certain TLDs, it returns a SERVFAIL:

$ dig bogus.
; <<>> DiG 9.8.1 <<>> bogus.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31919
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;bogus. IN A

I get the same result for a real TLD like com. when querying the DC as shown above. Compare to a BIND server that is working as expected:

$ dig bogus. @128.59.59.70
; <<>> DiG 9.8.1 <<>> bogus. @128.59.59.70
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 30141
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;bogus. IN A
;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2012012501 1800 900 604800 86400
;; Query time: 18 msec
;; SERVER: 128.59.59.70#53(128.59.59.70)
;; WHEN: Wed Jan 25 14:09:14 2012
;; MSG SIZE rcvd: 98

Similarly, when I query my Windows DNS server with dig . any, I get a SERVFAIL, but the BIND servers return the root zone as expected. This sounds similar to the issue described in http://support.microsoft.com/kb/968372, except I am using two forwarders (128.59.59.70 from above as well as 128.59.62.10) and falling back to root hints, so the preconditions to expose the issue are not the same. Nevertheless, I also applied the MaxCacheTTL registry fix as described and restarted DNS and the whole server as well, but the problem persists. The problem occurs on all domain controllers in this domain and has occurred since half a year ago, even though the servers are getting automatic Windows updates.

EDIT: Here is a debug log. The client is 160.39.114.110, which is my workstation.

1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Rcv 160.39.114.110 2e94 Q [0001 D NOERROR] A (5)bogus(0)
UDP question info at 000000001EA6BFD0
  Socket = 508
  Remote addr 160.39.114.110, port 49710
  Time Query=1077016, Queued=0, Expire=0
  Buf length = 0x0fa0 (4000)
  Msg length = 0x0017 (23)
Message:
  XID 0x2e94
  Flags 0x0100
    QR 0 (QUESTION), OPCODE 0 (QUERY)
    AA 0, TC 0, RD 1, RA 0, Z 0, CD 0, AD 0
    RCODE 0 (NOERROR)
  QCOUNT 1, ACOUNT 0, NSCOUNT 0, ARCOUNT 0
  QUESTION SECTION: Offset = 0x000c, RR count = 0, Name "(5)bogus(0)", QTYPE A (1), QCLASS 1
  ANSWER SECTION: empty
  AUTHORITY SECTION: empty
  ADDITIONAL SECTION: empty

1/25/2012 2:16:01 PM 0E08 PACKET 000000001EA6BFD0 UDP Snd 160.39.114.110 2e94 R Q [8281 DR SERVFAIL] A (5)bogus(0)
UDP response info at 000000001EA6BFD0
  Socket = 508
  Remote addr 160.39.114.110, port 49710
  Time Query=1077016, Queued=0, Expire=0
  Buf length = 0x0fa0 (4000)
  Msg length = 0x0017 (23)
Message:
  XID 0x2e94
  Flags 0x8182
    QR 1 (RESPONSE), OPCODE 0 (QUERY)
    AA 0, TC 0, RD 1, RA 1, Z 0, CD 0, AD 0
    RCODE 2 (SERVFAIL)
  QCOUNT 1, ACOUNT 0, NSCOUNT 0, ARCOUNT 0
  QUESTION SECTION: Offset = 0x000c, RR count = 0, Name "(5)bogus(0)", QTYPE A (1), QCLASS 1
  ANSWER SECTION: empty
  AUTHORITY SECTION: empty
  ADDITIONAL SECTION: empty

Every option in the debug log box was checked except "filter by IP". By contrast, when I query, say, accounts.google.com, I can see the DNS server go out to its forwarder (128.59.59.70, for example). In this case, I didn't see any packets going out from my DNS server, even though bogus. was not in the cache (the debug log was already running and this is the first time I queried this server for bogus. or any TLD). It just returned SERVFAIL without consulting any other DNS server, as in the Microsoft KB article linked above.


  • mciSendString cannot save to directory path

    - by robUK
Hello. VS C# 2008 SP1. I have created a small application that records and plays audio. However, my application needs to save the wave file to the application data directory on the user's computer. mciSendString takes a C-style string as a parameter, and the path has to be in 8.3 format. My problem is that I can't get it to save, and what is strange is that sometimes it works and sometimes it doesn't; most of the time it fails. If I save directly to the C drive, however, everything works the first time. I have tried the 3 different methods coded below. The error number that I get when it fails is 286: "The file was not saved. Make sure your system has sufficient disk space or has an intact network connection". Many thanks for any suggestions.

[DllImport("winmm.dll", CharSet = CharSet.Auto)]
private static extern uint mciSendString([MarshalAs(UnmanagedType.LPTStr)] string command, StringBuilder returnValue, int returnLength, IntPtr winHandle);

[DllImport("winmm.dll", CharSet = CharSet.Auto)]
private static extern int mciGetErrorString(uint errorCode, StringBuilder errorText, int errorTextSize);

[DllImport("Kernel32.dll", CharSet = CharSet.Auto)]
private static extern int GetShortPathName([MarshalAs(UnmanagedType.LPTStr)] string longPath, [MarshalAs(UnmanagedType.LPTStr)] StringBuilder shortPath, int length);

// Stop recording
private void StopRecording()
{
    // Save recorded voice
    string shortPath = this.shortPathName();
    string formatShortPath = string.Format("save recsound \"{0}\"", shortPath);
    uint result = 0;
    StringBuilder errorTest = new StringBuilder(256);

    // C:\DOCUME~1\Steve\APPLIC~1\Test.wav - fails
    result = mciSendString(string.Format("{0}", formatShortPath), null, 0, IntPtr.Zero);
    mciGetErrorString(result, errorTest, errorTest.Length);

    // Command-line convention - fails
    result = mciSendString("save recsound \"C:\\DOCUME~1\\Steve\\APPLIC~1\\Test.wav\"", null, 0, IntPtr.Zero);
    mciGetErrorString(result, errorTest, errorTest.Length);

    // 8.3 short format - fails
    result = mciSendString(@"save recsound C:\DOCUME~1\Steve\APPLIC~1\Test.wav", null, 0, IntPtr.Zero);
    mciGetErrorString(result, errorTest, errorTest.Length);

    // Save to C drive - works every time
    result = mciSendString(@"save recsound C:\Test.wav", null, 0, IntPtr.Zero);
    mciGetErrorString(result, errorTest, errorTest.Length);

    mciSendString("close recsound ", null, 0, IntPtr.Zero);
}

// Get the short path name so that mciSendString can save the recorded wave file
private string shortPathName()
{
    string shortPath = string.Empty;
    long length = 0;
    StringBuilder buffer = new StringBuilder(256);

    // Get the length of the path
    length = GetShortPathName(this.saveRecordingPath, buffer, 256);
    shortPath = buffer.ToString();

    return shortPath;
}
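For what it's worth, two things may be in play here: MCI generally accepts a long path as long as the whole path is quoted inside the command string, so the 8.3 conversion may be unnecessary; and GetShortPathName only returns a short name for a path that already exists, which could explain the intermittent failures if Test.wav has not been created yet when it is called. A minimal sketch of the quoted-long-path variant, in Python via ctypes (Windows only; "recsound" is the alias from the question, and the expanded long path is an assumption based on the question's 8.3 path):

import ctypes
import time

winmm = ctypes.windll.winmm  # Windows only

def mci(command):
    # Thin wrapper over mciSendString, raising on any MCI error.
    buf = ctypes.create_unicode_buffer(256)
    err = winmm.mciSendStringW(command, buf, 256, None)
    if err:
        winmm.mciGetErrorStringW(err, buf, 256)
        raise RuntimeError("MCI error %d: %s" % (err, buf.value))
    return buf.value

mci('open new type waveaudio alias recsound')
mci('record recsound')
time.sleep(2)  # record for a moment
mci('stop recsound')
# Quoting the full long path should avoid the 8.3 round-trip entirely
# (assumption to verify; the target directory must already exist).
mci(r'save recsound "C:\Documents and Settings\Steve\Application Data\Test.wav"')
mci('close recsound')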


  • MySQL 5.1.49 freezing every two days

    - by maximus
Hi all, our MySQL system "freezes" every two days. By "freezing" I mean the following:
- it doesn't respond to ping
- we can't log in with SSH
- we get no answer from MySQL
- there is no entry in the error logs, neither from Linux nor from MySQL
We have already changed to completely new hardware and we have the same problem, so it's definitely not a hardware problem. We do not have any other software installed except a firewall (iptables rules). We can restart the server from another server using rsyslog (www.rsyslog.com) (software reset). Could someone give me some pointers as to what I could do to figure out the problem? I have included every detail about our settings. Thank you in advance for your help. Max.

Our system parameters and settings:
System memory: 12GB
Processor: Intel i7-920 quad-core
Operating system: Debian 5 (Lenny), 64-bit
MySQL 5.1.49
Databases: (a) a small phpBB forum; (b) a 6GB database, 3 tables with about 15 million rows

my.cnf:

# The MySQL database server configuration file.
#
# You can copy this to one of:
# - "/etc/mysql/my.cnf" to set global options,
# - "~/.my.cnf" to set user-specific options.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# especially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

# Here are entries for some specific programs
# The following values assume you have at least 32M ram

# This was formerly known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
#
# * Basic Settings
#
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = our-ip-address
#
# * Fine Tuning
#
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 256K
thread_cache_size = 32
max_connections = 300
table_cache = 2048
#thread_concurrency = 4

# Used for InnoDB tables; recommended to 50%-80% of available memory
innodb_buffer_pool_size = 6G
# 20MB, sometimes larger
innodb_additional_mem_pool_size = 20M
# 8M-16M is good for most situations
innodb_log_buffer_size = 8M
# Disable XA support because we do not use it
innodb-support-xa = 0
# 1 is the default, which is 100% secure, but 2 offers better performance
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT
#innodb_thread_concurrency = 8
# Recommended 64M - 512M depending on server size
innodb_log_file_size = 512M
# One file per table
innodb_file_per_table
#
# * Query Cache Configuration
#
query_cache_limit = 1M
query_cache_size = 16M
#query_cache_type = 1
#query_cache_min_res_unit = 2K
#join_buffer_size = 1M
#
# * Logging and Replication
#
# Both locations get rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file = /var/log/mysql/mysql.log
#general_log = 1
#
# Error logging goes to syslog. This is a Debian improvement :)
#
# Here you can see queries with especially long duration
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes
#
# The following can be used as easy-to-replay backup logs or for replication.
#server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
# WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
expire_logs_days = 10
max_binlog_size = 100M
#binlog_do_db = include_database_name
#binlog_ignore_db = include_database_name
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!
# * InnoDB plugin
# As of MySQL 5.1.38, the InnoDB plugin from Oracle is included in the MySQL source code.
# It has many improvements and better performance than the built-in InnoDB storage engine.
# Please read http://www.innodb.com/products/innodb_plugin/ for more information.
# Uncomment the two following lines to use the InnoDB plugin.
ignore_builtin_innodb
plugin-load=innodb=ha_innodb_plugin.so
#
# * Security Features
#
# Read the manual, too, if you want chroot!
# chroot = /var/lib/mysql/
#
# For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
#
# ssl-ca=/etc/mysql/cacert.pem
# ssl-cert=/etc/mysql/server-cert.pem
# ssl-key=/etc/mysql/server-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]
#no-auto-rehash # faster start of mysql but no tab completion

[isamchk]
key_buffer = 16M

#
# * NDB Cluster
#
# See /usr/share/doc/mysql-server-*/README.Debian for more information.
#
# The following configuration is read by the NDB Data Nodes (ndbd processes)
# not from the NDB Management Nodes (ndb_mgmd processes).
#
# [MYSQL_CLUSTER]
# ndb-connectstring=127.0.0.1
#
# * IMPORTANT: Additional settings that can override those from this file!
#
!includedir /etc/mysql/conf.d/

UPDATE: After installing sysstat and configuring it to collect data every minute, I have the following data. I used sar to generate the output. The log file is too big to include here, so I uploaded it to box.net: http://www.box.net/shared/xc6rh7qqob

SECOND UPDATE: We started a ping command in the background, and that solved the problem. The server has now been working for more than a week. We still don't know what the problem is.
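One thing worth checking in a config like this is whether the worst-case memory footprint can exceed physical RAM, since with 300 connections the per-thread buffers stack on top of the InnoDB pool. Back-of-the-envelope arithmetic; the per-thread buffer sizes not set in this my.cnf are filled in with rough 5.1-era defaults, so treat those as assumptions to replace with your actual values:

# Rough worst-case memory estimate for the my.cnf above (values in MB).
GB = 1024

globals_mb = (
    6 * GB    # innodb_buffer_pool_size
    + 20      # innodb_additional_mem_pool_size
    + 8       # innodb_log_buffer_size
    + 16      # key_buffer
    + 16      # query_cache_size
)

# Per-connection buffers. Only thread_stack (256K) is set in the file;
# the rest are assumed 5.1-era defaults - adjust to your actual values.
per_thread_mb = 0.25 + 2 + 0.128 + 0.128 + 0.256  # stack, sort, join, read, read_rnd

max_connections = 300
worst_case = globals_mb + max_connections * per_thread_mb
print("worst case ~ %.1f GB of 12 GB installed" % (worst_case / GB))

With these numbers the worst case lands around 7 GB, comfortably inside 12 GB, which (together with ping dying at the same time) suggests the hang is at the OS or network layer rather than MySQL memory pressure.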


• Python mechanize unable to avoid redirect when POSTing

    - by Enric Geijo
    I am trying to crawl a site using mechanize. The site provides search results in different pages. When posting to get the next set of results, something is wrong and the server redirects me to the first page, asking mechanize to update the SearchSession Cookie. I have been debugging the requests using Firefox and they look quite the same, and I am unable to find the problem. Any suggestion? Below the requests: ----------- FIRST THE RIGHT SEQUENCE, USING TAMPER IN FIREFOX ------------------------- POST XXX/JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London Load Flags[LOAD_DOCUMENT_URI LOAD_INITIAL_DOCUMENT_URI ] Content Size[-1] Mime Type[text/html] Request Headers: Host[www.cwjobs.co.uk] User-Agent[Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100401 Ubuntu/9.10 (karmic) Firefox/3.5.9] Accept[text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8] Accept-Language[en-us,en;q=0.5] Accept-Encoding[gzip,deflate] Accept-Charset[ISO-8859-1,utf-8;q=0.7,*;q=0.7] Keep-Alive[300] Connection[keep-alive] Referer[XXX/JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London] Cookie[ecos=774803468-0; AnonymousUser=MemberId=acc079dd-66b6-4081-9b07-60d6955ee8bf&IsAnonymous=True; PJBIPPOPUP=; WT_FPC=id=86.181.183.106-2262469600.30073025:lv=1272812851736:ss=1272812789362; SearchSession=SessionGuid=71de63de-3bd0-4787-895d-b6b9e7c93801&LogSource=NAT] Post Data: __EVENTTARGET[srpPager%24btnForward] __EVENTARGUMENT[] hdnSearchResults[BV%2CA%2CC0P5x%2COou-%2CB4S-%2CBuC-%2CDzx-%2CHwn-%2CKPP-%2CIVA-%2CC9D-%2CH6X-%2CH7x-%2CJ0x-%2CCvX-%2CCra-%2COHa-%2CHhP-%2CCoj-%2CBlM-%2CE9W-%2CIm8-%2CBqG-%2CPFy-%2CN%2Fm-%2Ceaa%2CCvj-%2CCtJ-%2CCr7-%2CBpu-%2Cmh%2CMb6-%2CJ%2Fk-%2CHY8-%2COJ7-%2CNtF-%2CEya-%2CErT-%2CEo4-%2CEKU-%2CDnL-%2CC5M-%2CCyB-%2CBsD-%2CBrc-%2CBpU-%2Col%2C30%2CC1%2Cd4N%2COo8-%2COi0-%2CLz%2F-%2CLxP-%2CFyp-%2CFVR-%2CEHL-%2CPrP-%2CLmE-%2CK3H-%2CKXJ-%2CFyn%2CIcq-%2CIco-%2CIK4-%2CIIg-%2CH2k-%2CH0N-%2CHwp-%2CHvF-%2CFij-%2CFhl-%2CCwj-%2CCb5-%2CCQj-%2CCQh-%2CB%2B2-%2CBc6-%2ChFo%2CNLq-%2CNI%2F-%2CFzM-%2Cdu-%2CHg2-%2CBug-%2CBse-%2CB9Q-] 
__VIEWSTATE[%2FwEPDwUKLTkyMzI2ODA4Ng9kFgYCBA8WBB4EaHJlZgWJAWh0dHA6Ly93d3cuY3dqb2JzLmNvLnVrL0pvYlNlYXJjaC9SU1MuYXNweD9LZXl3b3Jkcz1QeXRob24mTFR4dD1Mb25kb24lMmMrU291dGgrRWFzdCZSYWRpdXM9MCZMSWRzMj1aViZjbGlkPTE2MjEmY2x0eXBlaWQ9MiZjbE5hbWU9TG9uZG9uHgV0aXRsZQUkTGF0ZXN0IFB5dGhvbiBqb2JzIGZyb20gQ1dKb2JzLmNvLnVrZAIGDxYCHgRUZXh0BV48bGluayByZWw9ImNhbm9uaWNhbCIgaHJlZj0iaHR0cDovL3d3dy5jd2pvYnMuY28udWsvSm9iU2Vla2luZy9QeXRob25fTG9uZG9uX2wxNjIxX3QyLmh0bWwiIC8%2BZAIIEGRkFg4CBw8WAh8CBV9Zb3VyIHNlYXJjaCBvbiA8Yj5LZXl3b3JkczogUHl0aG9uOyBMb2NhdGlvbjogTG9uZG9uLCBTb3V0aCBFYXN0OyA8L2I%2BIHJldHVybmVkIDxiPjg1PC9iPiBqb2JzLmQCCQ8WAh4HVmlzaWJsZWhkAgsPFgIfAgUoVGhlIG1vc3QgcmVsZXZhbnQgam9icyBhcmUgbGlzdGVkIGZpcnN0LmQCEw8PFgIeC05hdmlnYXRlVXJsBQF%2BZGQCFQ9kFgYCBQ8PFgYfAgUGUHl0aG9uHgtEZWZhdWx0VGV4dAUMZS5nLiBhbmFseXN0HhNEZWZhdWx0VGV4dENzc0NsYXNzZWRkAgsPDxYGHwIFEkxvbmRvbiwgU291dGggRWFzdB8FBQllLmcuIEJhdGgfBmVkZAIRDxAPFgYeDURhdGFUZXh0RmllbGQFClJhZGl1c05hbWUeDkRhdGFWYWx1ZUZpZWxkBQZSYWRpdXMeC18hRGF0YUJvdW5kZ2QQFREHMCBtaWxlcwcyIG1pbGVzBzUgbWlsZXMIMTAgbWlsZXMIMTUgbWlsZXMIMjAgbWlsZXMIMjUgbWlsZXMIMzAgbWlsZXMIMzUgbWlsZXMINDAgbWlsZXMINDUgbWlsZXMINTAgbWlsZXMINjAgbWlsZXMINzAgbWlsZXMIODAgbWlsZXMIOTAgbWlsZXMJMTAwIG1pbGVzFREBMAEyATUCMTACMTUCMjACMjUCMzACMzUCNDACNDUCNTACNjACNzACODACOTADMTAwFCsDEWdnZ2dnZ2dnZ2dnZ2dnZ2dnZGQCFw9kFgQCAQ9kFgQCBA8QZA8WA2YCAQICFgMQBQhBbGwgam9icwUBMGcQBRlEaXJlY3QgZW1wbG95ZXIgam9icyBvbmx5BQEyZxAFEEFnZW5jeSBqb2JzIG9ubHkFATFnZGQCBg8QZA8WA2YCAQICFgMQBQlSZWxldmFuY2UFATFnEAUERGF0ZQUBMmcQBQZTYWxhcnkFATNnZGQCBQ8PFgYeClBhZ2VOdW1iZXICAh4PTnVtYmVyT2ZSZXN1bHRzAlUeDlJlc3VsdHNQZXJQYWdlAhRkZAIZDxYCHwNoZGQ%3D] Refinesearch%24txtKeywords[Python] Refinesearch%24txtLocation[London%2C+South+East] Refinesearch%24ddlRadius[0] ddlCompanyType[0] ddlSort[1] Response Headers: Cache-Control[private] Date[Sun, 02 May 2010 16:09:27 GMT] Content-Type[text/html; charset=utf-8] Expires[Sat, 02 May 2009 16:09:27 GMT] Server[Microsoft-IIS/6.0] X-SiteConHost[P310] X-Powered-By[ASP.NET] X-AspNet-Version[2.0.50727] Set-Cookie[SearchSession=SessionGuid=71de63de-3bd0-4787-895d-b6b9e7c93801&LogSource=NAT; path=/] Content-Encoding[gzip] Vary[Accept-Encoding] Transfer-Encoding[chunked] -------- NOW WHAT I'AM SENDING USING MECHANIZE, SOME HEADERS ADDED, ETC ----------- POST /JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London HTTP/1.1\r\nContent-Length: 2424\r\n Accept-Language: en-us,en;q=0.5\r\n Accept-Encoding: gzip\r\n Host: www.cwjobs.co.uk\r\n Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8\r\n Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n Connection: keep-alive\r\n Cookie: AnonymousUser=MemberId=8fa5ddd7-17ed-425e-b189-82693bfbaa0c&IsAnonymous=True; SearchSession=SessionGuid=33e4e439-c2d6-423f-900f-574099310d5a&LogSource=NAT\r\n Referer: XXX/JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London\r\n Content-Type: application/x-www-form-urlencoded\r\n\r\n' '__EVENTTARGET=srpPager%24btnForward& __EVENTARGUMENT=& 
hdnSearchResults=BV%2CA%2CC0eif%2CMwc%2CM6s%2COou%2CK09%2CG4H%2CEZf%2CGTu%2CLrr%2CGuX%2CGs9%2CEz9%2CL5X%2CL9U%2ChU%2CHHf%2CMAL%2CNDi%2CJrY%2CGBy%2CM%2Bo%2CdE-%2CpI%2CtDI%2CL5L%2CL7l%2CL8z%2CM%2FA%2CPPP%2CCM0%2CEpK%2CHPy%2Cez%2C7p%2CJ2U%2CJ9b%2CJ%2F2%2CKea%2CLBj%2CLvi%2CL2t%2CM8r%2CM9S%2CM%2Fa%2CPRT%2CPgi%2Csg7%2CF6%2CI2F%2CJTd%2CO-%2CC0v%2CC3f%2CDCq%2CDxn%2CERl%2CUbV%2CGME%2CGMG%2CGd2%2CGgO%2CGyK%2CG0h%2CG4F%2CG5p%2CJGL%2CJHJ%2CKhj%2CL4L%2CMM1%2CMYL%2CMYN%2CMp4%2CNL0%2COrj%2CvuW%2CBdE%2CBfv%2CI1i%2CBCh-%2COLA%2CHH4%2CM6O%2CM8Q%2CMre& __VIEWSTATE=%2FwEPDwUKLTkyMzI2ODA4Ng9kFgYCBA8WBB4EaHJlZgWJAWh0dHA6Ly93d3cuY3dqb2JzLmNvLnVrL0pvYlNlYXJjaC9SU1MuYXNweD9LZXl3b3Jkcz1QeXRob24mTFR4dD1Mb25kb24lMmMrU291dGgrRWFzdCZSYWRpdXM9MCZMSWRzMj1aViZjbGlkPTE2MjEmY2x0eXBlaWQ9MiZjbE5hbWU9TG9uZG9uHgV0aXRsZQUkTGF0ZXN0IFB5dGhvbiBqb2JzIGZyb20gQ1dKb2JzLmNvLnVrZAIGDxYCHgRUZXh0BV48bGluayByZWw9ImNhbm9uaWNhbCIgaHJlZj0iaHR0cDovL3d3dy5jd2pvYnMuY28udWsvSm9iU2Vla2luZy9QeXRob25fTG9uZG9uX2wxNjIxX3QyLmh0bWwiIC8%2BZAIIEGRkFg4CBw8WAh8CBV9Zb3VyIHNlYXJjaCBvbiA8Yj5LZXl3b3JkczogUHl0aG9uOyBMb2NhdGlvbjogTG9uZG9uLCBTb3V0aCBFYXN0OyA8L2I%2BIHJldHVybmVkIDxiPjg1PC9iPiBqb2JzLmQCCQ8WAh4HVmlzaWJsZWhkAgsPFgIfAgUoVGhlIG1vc3QgcmVsZXZhbnQgam9icyBhcmUgbGlzdGVkIGZpcnN0LmQCEw8PFgIeC05hdmlnYXRlVXJsBQF%2BZGQCFQ9kFgYCBQ8PFgYfAgUGUHl0aG9uHgtEZWZhdWx0VGV4dAUMZS5nLiBhbmFseXN0HhNEZWZhdWx0VGV4dENzc0NsYXNzZWRkAgsPDxYGHwIFEkxvbmRvbiwgU291dGggRWFzdB8FBQllLmcuIEJhdGgfBmVkZAIRDxAPFgYeDURhdGFUZXh0RmllbGQFClJhZGl1c05hbWUeDkRhdGFWYWx1ZUZpZWxkBQZSYWRpdXMeC18hRGF0YUJvdW5kZ2QQFREHMCBtaWxlcwcyIG1pbGVzBzUgbWlsZXMIMTAgbWlsZXMIMTUgbWlsZXMIMjAgbWlsZXMIMjUgbWlsZXMIMzAgbWlsZXMIMzUgbWlsZXMINDAgbWlsZXMINDUgbWlsZXMINTAgbWlsZXMINjAgbWlsZXMINzAgbWlsZXMIODAgbWlsZXMIOTAgbWlsZXMJMTAwIG1pbGVzFREBMAEyATUCMTACMTUCMjACMjUCMzACMzUCNDACNDUCNTACNjACNzACODACOTADMTAwFCsDEWdnZ2dnZ2dnZ2dnZ2dnZ2dnZGQCFw9kFgQCAQ9kFgQCBA8QZA8WA2YCAQICFgMQBQhBbGwgam9icwUBMGcQBRlEaXJlY3QgZW1wbG95ZXIgam9icyBvbmx5BQEyZxAFEEFnZW5jeSBqb2JzIG9ubHkFATFnZGQCBg8QZA8WA2YCAQICFgMQBQlSZWxldmFuY2UFATFnEAUERGF0ZQUBMmcQBQZTYWxhcnkFATNnZGQCBQ8PFgYeClBhZ2VOdW1iZXICAR4PTnVtYmVyT2ZSZXN1bHRzAlUeDlJlc3VsdHNQZXJQYWdlAhRkZAIZDxYCHwNoZGQ%3D& Refinesearch%24txtKeywords=Python& Refinesearch%24txtLocation=London%2CSouth+East& Refinesearch%24ddlRadius=0& Refinesearch%24btnSearch=Search& ddlCompanyType=0& ddlSort=1'
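One detail stands out when diffing the two dumps: the working browser request posts only __EVENTTARGET plus the form fields, while the failing mechanize request also submits Refinesearch%24btnSearch=Search, and ASP.NET treats a posted submit-button name as a click on that button, here a fresh search, which would plausibly reset the SearchSession cookie and land you back on page one. A sketch of how this pager postback might be driven from mechanize, disabling the search button so its name is not submitted; the control names are taken from the dumps above, and this is untested against the actual site:

import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.open("http://www.cwjobs.co.uk/JobSearch/Results.aspx?Keywords=Python"
        "&LTxt=London%2c+South+East&Radius=0&LIds2=ZV"
        "&clid=1621&cltypeid=2&clName=London")

br.select_form(nr=0)              # assumption: the results form is the first form
br.form.set_all_readonly(False)   # __EVENTTARGET etc. are hidden fields

# Simulate the ASP.NET __doPostBack("srpPager$btnForward", "") call.
br["__EVENTTARGET"] = "srpPager$btnForward"
br["__EVENTARGUMENT"] = ""

# Keep the search submit button out of the POST body; posting its name
# makes the server treat the request as a new search, not a page change.
br.form.find_control("Refinesearch$btnSearch").disabled = True

response = br.submit()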

