Search Results

Search found 14052 results on 563 pages for 'response filter'.

Page 135 of 563

  • WebLogic Server Performance and Tuning: Part II - Thread Management

    - by Gokhan Gungor
    WebLogic Server, like any other Java application server, provides resources that your applications use to provide services. None of these resources is unlimited, and they must be managed carefully. One of these resources is threads, which are pooled to provide better throughput and performance along with fast response times and to avoid deadlocks. Threads are the execution points through which WebLogic Server delivers its power and executes work. Managing threads is very important because it can affect the overall performance of the entire system.
    In releases prior to WebLogic Server 9.0 we had multiple execute queues and user-defined thread pools. There were different queues for different types of work, each with a fixed number of execute threads. Tuning these thread pools and finding the proper number of threads was time consuming and required many trials. WebLogic Server 9.0 and later releases use a single thread pool and a single priority-based execute queue. All types of work are executed in this single thread pool, and its size (thread count) is automatically increased or decreased (self-tuned). The new "self-tuning" system simplifies finding the proper number of threads and utilizing them.
    A work manager allows your applications to run concurrently in multiple threads. It is a mechanism that lets you manage and utilize threads and create rules/guidelines to follow when assigning requests to threads. We can set a scheduling guideline or priority for requests with a work manager and then associate that work manager with one or more applications. At run time, WebLogic Server uses these guidelines to assign pending work/requests to execution threads. The position of a request in the execute queue is determined by its priority. A default work manager is provided, and it should be sufficient for most applications. However, there can be cases where you want to change this default configuration; your application(s) may provide services that need a mixture of fast response times and long-running processes like batch updates. Note that a wrong work manager configuration can cause a performance penalty instead of the expected improvement.
    We can define/configure work managers at:
    •    Domain level: config.xml
    •    Application level: weblogic-application.xml
    •    Component level: weblogic-ejb-jar.xml or weblogic.xml (for a specific web application, use weblogic.xml)
    We can use the following predefined request classes and constraints to manage the work:
    •    Fair Share Request Class: Specifies the average thread-use time required to process requests. The default is 50.
    •    Response Time Request Class: Specifies a response time goal in milliseconds.
    •    Context Request Class: Assigns request classes to requests based on context information.
    •    Min Threads Constraint: Guarantees the minimum number of threads the server will allocate to requests.
    •    Max Threads Constraint: Limits the number of concurrent threads executing requests.
    •    Capacity Constraint: Causes the server to reject requests only when it has reached its capacity.
    Let's create a work manager for a long-running piece of work in our application. Go to the WebLogic console and select Environment | Work Managers from the domain structure tree. Click the New button, select Work Manager, and click Next. Enter a name for the work manager and click Next. Then select the managed server instance(s) or clusters from the available targets (the ones where your long-running application is deployed) and click Finish.
    Click MyWorkManager, open the Configuration tab, check Ignore Stuck Threads, and save. This prevents WebLogic from treating long-running processes (those taking more than a specified time) as stuck, and allows them to finish.
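    For reference, the same kind of work manager can also be defined declaratively rather than through the console. The fragment below is only a rough weblogic.xml sketch (element names follow the WebLogic 9.x+ deployment descriptor schema; MyWorkManager and the constraint count are placeholders, and the availability of ignore-stuck-threads depends on the WebLogic version):

        <!-- weblogic.xml (sketch) -->
        <weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/90">
            <work-manager>
                <name>MyWorkManager</name>
                <!-- cap the threads this application's requests may use concurrently -->
                <max-threads-constraint>
                    <name>MyMaxThreads</name>
                    <count>8</count>
                </max-threads-constraint>
                <!-- do not treat long-running requests as stuck -->
                <ignore-stuck-threads>true</ignore-stuck-threads>
            </work-manager>
            <!-- dispatch this web application's requests to the work manager -->
            <wl-dispatch-policy>MyWorkManager</wl-dispatch-policy>
        </weblogic-web-app>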

    Read the article

  • Can anybody help me in designing my UITableView into the MVC pattern?

    - by user2877880
    I have written a ViewController in which i get data from the internet and display it in a UItableview using a json parser which uses object for key to identify its objects. What i would like your help in is to convert it into MVC pattern to make it less clumsy instead of including everything in the same controller class. Please try explaining it to me in terms of my code. THANKS IN ADVANCE. The code is as given below #import "ViewController.h" #import "AFNetworking.h" #import "ModelTableArray.h" @implementation ViewController @synthesize tableView = _tableView, activityIndicatorView = _activityIndicatorView, movies = _movies; - (void)viewDidLoad { [super viewDidLoad]; // Setting Up Table View self.tableView = [[UITableView alloc] initWithFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height) style:UITableViewStylePlain]; self.tableView.dataSource = self; self.tableView.delegate = self; self.tableView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight; self.tableView.hidden = YES; [self.view addSubview:self.tableView]; // Setting Up Activity Indicator View self.activityIndicatorView = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle:UIActivityIndicatorViewStyleGray]; self.activityIndicatorView.hidesWhenStopped = YES; self.activityIndicatorView.center = self.view.center; [self.view addSubview:self.activityIndicatorView]; [self.activityIndicatorView startAnimating]; // Initializing Data Source self.movies = [[NSArray alloc] init]; NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rocky&country=us&entity=movie"]; NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url]; UIRefreshControl *refreshControl = [[UIRefreshControl alloc] init]; [refreshControl addTarget:self action:@selector(refresh:) forControlEvents:UIControlEventValueChanged]; [self.tableView addSubview:refreshControl]; [refreshControl endRefreshing]; AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) { self.movies = [JSON objectForKey:@"results"]; [self.activityIndicatorView stopAnimating]; [self.tableView setHidden:NO]; [self.tableView reloadData]; } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) { NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo); }]; [operation start]; } - (void)refresh:(UIRefreshControl *)sender { NSURL *url = [[NSURL alloc] initWithString:@"http://itunes.apple.com/search?term=rambo&country=us&entity=movie"]; NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url]; AFJSONRequestOperation *operation = [AFJSONRequestOperation JSONRequestOperationWithRequest:request success:^(NSURLRequest *request, NSHTTPURLResponse *response, id JSON) { self.movies = [JSON objectForKey:@"results"]; [self.activityIndicatorView stopAnimating]; [self.tableView setHidden:NO]; [self.tableView reloadData]; } failure:^(NSURLRequest *request, NSHTTPURLResponse *response, NSError *error, id JSON) { NSLog(@"Request Failed with Error: %@, %@", error, error.userInfo); }]; [operation start]; [sender endRefreshing]; } - (void)viewDidUnload { [super viewDidUnload]; } - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { return YES; } // Table View Data Source Methods - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (self.movies && self.movies.count) { return 
self.movies.count; } else { return 0; } } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *cellID = @"Cell Identifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellID]; if (!cell) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:cellID]; } NSDictionary *movie = [self.movies objectAtIndex:indexPath.row]; cell.textLabel.text = [movie objectForKey:@"trackName"]; cell.detailTextLabel.text = [movie objectForKey:@"artistName"]; NSURL *url = [[NSURL alloc] initWithString:[movie objectForKey:@"artworkUrl100"]]; [cell.imageView setImageWithURL:url placeholderImage:[UIImage imageNamed:@"placeholder"]]; return cell; } @end
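    One common way to start splitting this up is to move the fetching/parsing into a small model/store class and let the view controller keep only table-view duties. The following is a minimal sketch, not a drop-in answer: it assumes AFNetworking's AFJSONRequestOperation exactly as used in the question, and the class name MovieStore is a placeholder.

        // MovieStore.h -- model layer: owns fetching and JSON parsing
        #import <Foundation/Foundation.h>

        @interface MovieStore : NSObject
        - (void)fetchMoviesForTerm:(NSString *)term
                        completion:(void (^)(NSArray *movies, NSError *error))completion;
        @end

        // MovieStore.m
        #import "MovieStore.h"
        #import "AFNetworking.h"

        @implementation MovieStore
        - (void)fetchMoviesForTerm:(NSString *)term
                        completion:(void (^)(NSArray *, NSError *))completion {
            NSString *urlString = [NSString stringWithFormat:
                @"http://itunes.apple.com/search?term=%@&country=us&entity=movie", term];
            NSURLRequest *request =
                [NSURLRequest requestWithURL:[NSURL URLWithString:urlString]];
            AFJSONRequestOperation *operation = [AFJSONRequestOperation
                JSONRequestOperationWithRequest:request
                success:^(NSURLRequest *req, NSHTTPURLResponse *res, id JSON) {
                    // hand the parsed array back to the caller
                    completion([JSON objectForKey:@"results"], nil);
                }
                failure:^(NSURLRequest *req, NSHTTPURLResponse *res, NSError *error, id JSON) {
                    completion(nil, error);
                }];
            [operation start];
        }
        @end

    The view controller would then only call [self.store fetchMoviesForTerm:@"rocky" completion:...], keep the array in self.movies, and reload the table; the cell-configuration code stays in cellForRowAtIndexPath: or moves into a UITableViewCell subclass.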

    Read the article

  • Magento, NGINX, PHP-FPM, APC, MEMCACHED, 16gb Ram CentOS, Spiking PHP-FPM to 100% CPU

    - by Terry Dunford
    I have been trying to resolve my issue of spiking cpu caused by php-fpm processes. I've reduced the php-fpm config settings to: pm = ondemand pm.max_children = 12 pm.start_servers = 2 pm.min_spare_servers = 2 pm.max_spare_servers = 10 pm.max_requests = 500 php_admin_value[memory_limit] = 128M Problem still exists. I'm running a Joomla main site (which is having no problems) and a Magento store in a sub-directory. My server is a Linux CentOS, running NGINX, APC, Memcached, Full Page Cache and php-fpm. My server has 8 cores and 16gb dedicated ram. My host has shut down my server several times the past week because my php-fpm processes are consuming the entire network. A lot of the individual php-fpm processes are getting over 50% cpu. I've hired several "professionals" and none of them was able to help me, so now broke and stumped, I'm turning to you guys for help. So any suggestions would be greatly appreciated. I turned on slow php logs and here are some of the latest results: [01-Apr-2012 14:26:12] [pool magento] pid 21537 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a394f8] _renderStraightjoin() /home/flyfish/www/flyshop/lib/Varien/Db/Select.php:397 [0x0000000011a39158] _renderStraightjoin() /home/flyfish/www/flyshop/lib/Zend/Db/Select.php:705 [0x0000000011a38f30] assemble() /home/flyfish/www/flyshop/lib/Zend/Db/Select.php:1343 [0x00007fffbb6d6e50] __toString() unknown:0 [0x0000000011a38630] _prepareQuery() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:409 [0x0000000011a38270] _prepareQuery() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:388 [0x0000000011a38008] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:734 [0x0000000011a375c8] fetchAll() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:196 [0x0000000011a370e0] _loadLabels() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:129 [0x0000000011a369a0] _afterLoad() /home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:536 [0x0000000011a364a8] load() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:253 [0x0000000011a35968] getConfigurableAttributes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:330 [0x0000000011a35590] getUsedProducts() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:458 [0x0000000011a35410] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1264 [0x0000000011a35098] isAvailable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1244 [0x0000000011a34fa8] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1308 [0x0000000011a33998] isSaleable() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-categoryview/rokmage-categoryview.phtml:122 [0x0000000011a331f0] +++ dump failed [01-Apr-2012 14:26:44] [pool magento] pid 21531 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a37768] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:251 [0x0000000011a37280] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:132 [0x0000000011a36b40] _afterLoad() 
/home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:536 [0x0000000011a36648] load() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:253 [0x0000000011a35b08] getConfigurableAttributes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:330 [0x0000000011a35730] getUsedProducts() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:458 [0x0000000011a355b0] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1264 [0x0000000011a35238] isAvailable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1244 [0x0000000011a35148] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1308 [0x0000000011a33b38] isSaleable() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-categoryview/rokmage-categoryview.phtml:122 [0x0000000011a33390] +++ dump failed [01-Apr-2012 14:27:01] [pool magento] pid 21528 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011ff67a8] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000011ff6518] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000011ff5e90] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000011ff5a20] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000011ff5438] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000011ff5078] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000011ff4e98] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:825 [0x0000000011ff4948] fetchOne() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php:1161 [0x0000000011ff4678] getProductCount() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Category.php:801 [0x0000000011ff33e0] getProductCount() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Model/Library/Plugin/Catalog/Layer/Filter/Category.php:54 [0x0000000011ff2da0] _initItemsData() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Model/Library/Plugin/Catalog/Layer/Filter/Category.php:23 [0x0000000011ff2818] _getItemsData() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Model/Library/Plugin/Catalog/Layer/Filter/Category.php:119 [0x0000000011ff26b0] _initItems() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Layer/Filter/Abstract.php:120 [0x0000000011ff2598] getItems() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Layer/Filter/Abstract.php:109 [0x0000000011ff2480] getItemsCount() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Block/Layer/Filter/Abstract.php:126 [0x0000000011ff22b8] getItemsCount() /home/flyfish/www/flyshop/var/cache/extendware/ewcore/overrides/Mage/Catalog/Block/Layer/View/67dcc5dfa9c44bd3a205b75a08193105.php:218 [0x0000000011ff2088] canShowOptions() /home/flyfish/www/flyshop/var/cache/extendware/ewcore/overrides/Mage/Catalog/Block/Layer/View/67dcc5dfa9c44bd3a205b75a08193105.php:233 [0x0000000011ff14f8] canShowBlock() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/extendware/ewlayerednav/catalog/layer/view.phtml:6 [0x0000000011ff0d50] +++ dump failed [01-Apr-2012 14:27:04] [pool magento] pid 21529 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000012468ff8] execute() 
/home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000012468d68] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x00000000124686e0] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000012468270] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000012467c88] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x00000000124678c8] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000012467660] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:734 [0x0000000012467248] fetchAll() /home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:687 [0x00000000124668f0] _fetchAll() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Entity/Collection/Abstract.php:1045 [0x0000000012466288] _loadEntities() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Entity/Collection/Abstract.php:869 [0x0000000012465fb0] load() /home/flyfish/www/flyshop/app/code/core/Mage/Review/Model/Observer.php:78 [0x0000000012465d10] catalogBlockProductCollectionBeforeToHtml() /home/flyfish/www/flyshop/app/code/core/Mage/Core/Model/App.php:1303 [0x0000000012464c28] _callObserverMethod() /home/flyfish/www/flyshop/app/code/core/Mage/Core/Model/App.php:1278 [0x00000000124649e0] dispatchEvent() /home/flyfish/www/flyshop/app/Mage.php:416 [0x0000000012464290] dispatchEvent() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Block/Product/List.php:163 [0x0000000012463760] _beforeToHtml() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:864 [0x00000000124633b0] toHtml() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:584 [0x0000000012462e30] _getChildHtml() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:528 [0x0000000012462d38] getChildHtml() /home/flyfish/www/flyshop/var/cache/extendware/ewcore/overrides/Mage/Catalog/Block/Category/View/6362e7526f5dcb27e7f8b0b414b59004.php:85 [0x00000000124629f0] getProductListHtml() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Block/Override/Mage/Catalog/Category/View.php:20 [01-Apr-2012 14:27:55] [pool magento] pid 21536 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a35010] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000011a34d80] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000011a346f8] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000011a34288] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000011a33ca0] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000011a338e0] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000011a33700] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:825 [0x0000000011a33368] fetchOne() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Type.php:71 [0x0000000011a33238] getAdditionalAttributeTable() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Attribute.php:483 [0x0000000011a32be8] getAdditionalAttributeTable() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Attribute.php:500 [0x0000000011a32860] _afterLoad() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Attribute.php:108 [0x0000000011a32330] loadByCode() 
/home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Entity/Attribute/Abstract.php:118 [0x0000000011a31350] loadByCode() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Config.php:423 [0x0000000011a30ce8] getAttribute() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Helper/Output.php:156 [0x0000000011a30208] categoryAttribute() /home/flyfish/www/flyshop/app/design/frontend/base/default/template/catalog/category/view.phtml:47 [0x0000000011a2fa60] +++ dump failed [01-Apr-2012 14:27:56] [pool magento] pid 21530 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a35b10] updateParamDefaults() /home/flyfish/www/flyshop/var/ait_rewrite/78778b0d1ad4bf93e846365bd2fbf33f.php:276 [0x0000000011a35750] updateParamDefaults() /home/flyfish/www/flyshop/var/ait_rewrite/78778b0d1ad4bf93e846365bd2fbf33f.php:326 [0x0000000011a351f0] getSkinBaseUrl() /home/flyfish/www/flyshop/var/ait_rewrite/78778b0d1ad4bf93e846365bd2fbf33f.php:482 [0x0000000011a350a8] getSkinUrl() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:981 [0x0000000011a32468] getSkinUrl() /home/flyfish/www/flyshop/app/code/local/Extendware/EWMinify/Block/Override/Mage/Page/Html/Head.php:126 [0x0000000011a30ca8] getCssJsHtml() /home/flyfish/www/flyshop/app/code/local/Extendware/EWCore/Block/Override/Mage/Page/Html/Head.php:55 [0x0000000011a30978] getCssJsHtml() /home/flyfish/www/flyshop/app/code/local/MageWorx/SeoSuite/Block/Page/Html/Head.php:41 [0x0000000011a2fd10] getCssJsHtml() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-modalheader/rokmage-head.phtml:26 [0x0000000011a2f568] +++ dump failed [01-Apr-2012 14:28:28] [pool magento] pid 21527 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000010c7bba0] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000010c7b910] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000010c7b288] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000010c7ae18] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000010c7a830] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000010c7a470] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000010c7a168] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:808 [0x0000000010c79558] fetchPairs() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Collection.php:840 [0x0000000010c79240] addCountToCategories() /home/flyfish/www/flyshop/app/code/community/Mage/Catalog/Block/Navigation.php:133 [0x0000000010c71d48] getCurrentChildCategories() /home/flyfish/www/flyshop/app/design/frontend/base/default/template/rokmagemodules/rokmage-magemenus/rokmage-magemenu-left.phtml:139 [0x0000000010c715a0] +++ dump failed [01-Apr-2012 14:28:28] [pool magento] pid 21577 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a3a8d8] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000011a3a648] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000011a39fc0] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000011a39b50] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000011a39568] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000011a391a8] query() 
/home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000011a38f40] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:734 [0x0000000011a37cc0] fetchAll() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php:276 [0x0000000011a37b20] _loadNodes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php:1229 [0x0000000011a379a0] getChildrenCategories() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Category.php:841 [0x0000000011a37690] getChildrenCategories() /home/flyfish/www/flyshop/app/code/community/Mage/Catalog/Block/Navigation.php:130 [0x0000000011a30198] getCurrentChildCategories() /home/flyfish/www/flyshop/app/design/frontend/base/default/template/rokmagemodules/rokmage-magemenus/rokmage-magemenu-left.phtml:139 [0x0000000011a2f9f0] +++ dump failed [01-Apr-2012 14:28:48] [pool magento] pid 21629 script_filename = /home/flyfish/www/flyshop/index.php [0x00002ac987e2cb48] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:252 [0x00002ac987e2c660] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:132 [0x00002ac987e2bf20] _afterLoad() /home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:536 [0x00002ac987e2ba28] load() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:253 [0x00002ac987e2aee8] getConfigurableAttributes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:330 [0x00002ac987e2ab10] getUsedProducts() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:458 [0x00002ac987e2a990] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1264 [0x00002ac987e2a618] isAvailable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1244 [0x00002ac987e2a528] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1308 [0x00002ac987e28f18] isSaleable() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-categoryview/rokmage-categoryview.phtml:122 [0x00002ac987e28770] +++ dump failed ___________________________________________ A snippet of the Latest php-fpm error log: [01-Apr-2012 14:26:12] WARNING: [pool magento] child 21537, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.265105 sec), logging [01-Apr-2012 14:26:12] ERROR: failed to ptrace(PEEKDATA) pid 21537: Input/output error (5) [01-Apr-2012 14:26:44] WARNING: [pool magento] child 21531, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.268434 sec), logging [01-Apr-2012 14:26:44] ERROR: failed to ptrace(PEEKDATA) pid 21531: Input/output error (5) [01-Apr-2012 14:27:01] WARNING: [pool magento] child 21528, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (6.656633 sec), logging [01-Apr-2012 14:27:01] ERROR: failed to ptrace(PEEKDATA) pid 21528: Input/output error (5) [01-Apr-2012 14:27:04] WARNING: [pool magento] child 21529, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.211136 sec), logging [01-Apr-2012 14:27:55] WARNING: [pool magento] child 21536, script '/home/flyfish/www/flyshop/index.php' (request: "GET 
/flyshop/index.php") executing too slow (5.207001 sec), logging [01-Apr-2012 14:27:55] ERROR: failed to ptrace(PEEKDATA) pid 21536: Input/output error (5) [01-Apr-2012 14:27:56] WARNING: [pool magento] child 21530, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.503186 sec), logging [01-Apr-2012 14:27:56] ERROR: failed to ptrace(PEEKDATA) pid 21530: Input/output error (5) [01-Apr-2012 14:28:28] WARNING: [pool magento] child 21577, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.722625 sec), logging [01-Apr-2012 14:28:28] WARNING: [pool magento] child 21527, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.122326 sec), logging [01-Apr-2012 14:28:28] ERROR: failed to ptrace(PEEKDATA) pid 21527: Input/output error (5) [01-Apr-2012 14:28:28] ERROR: failed to ptrace(PEEKDATA) pid 21577: Input/output error (5) [01-Apr-2012 14:28:48] WARNING: [pool magento] child 21629, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.446961 sec), logging [01-Apr-2012 14:28:48] ERROR: failed to ptrace(PEEKDATA) pid 21629: Input/output error (5) _____________________________________________ I also noticed that the server is not using much memory: Mem: 16777216k total, 1204040k used, 15573176k free My.conf settings: query_cache_size = 128M innodb_buffer_pool_size = 512M open-files-limit = 8192 table_cache=4096 I just noticed that someone changed my innodb_buffer_pool_size to 512M. Shouldn't this be set to 80% of available ram? So I have 16gb ram so it should be set at 12G; however, I set it at 10G. What do you think? I made that change and restart everything. Php-fpm is still spiking cpu. Here is just 1 php-fpm process: 23942 user 17 0 507m 99m 27m R 90.9%CPU 0.6 0:03.46 php-fpm I'm sure there may be more information you will need to help, so just let me know what you guys need to help me figure this out. Thank you.

    Read the article

  • ntpdate -d Server dropped Strata too high

    - by AndyM
    I cannot sync with an NTP source that's coming from an internal router/firewall. Can anyone help?
    ntpdate -d 192.168.92.82
    6 Jun 11:57:30 ntpdate[5011]: ntpdate [email protected] Tue Feb 24 06:32:26 EST 2004 (1)
    transmit(192.168.92.82) receive(192.168.92.82) transmit(192.168.92.82) receive(192.168.92.82) transmit(192.168.92.82) receive(192.168.92.82) transmit(192.168.92.82) receive(192.168.92.82) transmit(192.168.92.82)
    192.168.92.82: Server dropped: strata too high
    server 192.168.92.82, port 123
    stratum 16, precision -19, leap 11, trust 000
    refid [73.78.73.84], delay 0.02591, dispersion 0.00002
    transmitted 4, in filter 4
    reference time: 00000000.00000000 Thu, Feb 7 2036 6:28:16.000
    originate timestamp: d1972e03.0ae02645 Mon, Jun 6 2011 11:44:19.042
    transmit timestamp: d197311b.0ffac1d2 Mon, Jun 6 2011 11:57:31.062
    filter delay: 0.02609 0.02591 0.02594 0.02596 0.00000 0.00000 0.00000 0.00000
    filter offset: -792.020 -792.020 -792.020 -792.020 0.000000 0.000000 0.000000 0.000000
    delay 0.02591, dispersion 0.00002 offset -792.020152
    6 Jun 11:57:31 ntpdate[5011]: no server suitable for synchronization found
    Edit: The server I'm being asked to sync to is a firewall, and I've now been told that it is not syncing with anything. So I suppose I need to know if I can force my server to sync with a server that is stratum 16, i.e. not synced. Is that possible?
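    (For context: stratum 16 means "unsynchronized", which is why ntpdate drops the source; the client side cannot simply be told to accept it. If the firewall's own ntpd happens to be configurable, one common workaround is to let it serve its local clock at a usable stratum. The snippet below is only a sketch under that assumption, using the classic ntpd local-clock driver:)

        # ntp.conf on the upstream firewall/router (sketch; only if it runs a configurable ntpd)
        # Serve the local clock at stratum 10 so downstream clients will accept it
        server 127.127.1.0              # local clock driver
        fudge  127.127.1.0 stratum 10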

    Read the article

  • Outlook 2007 / 2010 Calendar: hide meetings in specific category

    - by Jeroen
    Question: Is there any easy way in Outlook 2007/2010 to show/hide meetings in a specific category? Preferably only for a specific view (the Month view, in this case).
    Note: I was almost done writing this question, adding just one more "What I've tried" option, when I found an acceptable (though imperfect) solution. Remembering this SE blog post, I figured I might as well post it after all and answer it myself. And who knows, perhaps someone else has a more elegant solution.
    The reason for me personally is that I'd like to hide the "small, recurring meetings", like our daily stand-up meeting, in the month view. I'd prefer an Outlook feature that is meant for this (there must be one for this, right?), but I'm open to workarounds or plugin suggestions as well. What I expected to find somewhere was a list of categories (with an added option "No category") where you could select/deselect from which categories you'd see meetings, something like the mock-up I made.
    What I've tried:
    1. Edit "View Settings" and use a "Filter..." on categories. This has several disadvantages; the major one is that the filter only allows me to choose what I want to show, but not what I want to hide. Even if I tick all categories but one for the filter, it would still hide any uncategorized meeting.
    2. Similar to 1, but using Advanced filters. Still a bit clumsy, as changing views can take up to three clicks, but this is the best solution so far (see the corresponding answer below).
    3. Creating a sub-calendar for these "small" meetings that I wish to hide. This felt a bit clumsy and like overkill, but did provide an easy select/deselect option to show/hide these meetings.
    4. Search for plug-ins that do this. Couldn't find one (yet).

    Read the article

  • How do I keep a table up to date across 4 db's to be used in SQL Replication Filtering?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses Merge Replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), a WorkerName (varchar), and a ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?
    SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())
    This allows you to chain filters together; the next one sits below the first, so it only pulls from the first's filtered set:
    SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]
    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app."
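    A rough sketch of the trigger idea mentioned above (assumptions: the master copy lives in a database here called MainDB, the copies in OtherDB1-3 on the same instance, and the table definitions match; all of these names are placeholders):

        -- Sketch: keep copies of the mapping table in step with the master copy.
        CREATE TRIGGER trg_tblWorkerOwnership_Sync
        ON MainDB.dbo.tblWorkerOwnership
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Remove rows that were deleted or changed...
            DELETE t
            FROM OtherDB1.dbo.tblWorkerOwnership AS t
            INNER JOIN deleted AS d ON t.PrimaryID = d.PrimaryID;

            -- ...then re-insert the current versions.
            INSERT INTO OtherDB1.dbo.tblWorkerOwnership (PrimaryID, WorkerName, ClientID)
            SELECT i.PrimaryID, i.WorkerName, i.ClientID
            FROM inserted AS i;

            -- Repeat the two statements above for OtherDB2 and OtherDB3.
        END;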

    Read the article

  • Kunagi LDAP configuration problems

    - by Willem de Vries
    We recently started with Scrum at our company and we wanted to start using Kunagi to test and see how it works. So I installed the kunagi_0.23.2.deb package that I downloaded from their website, on my Ubuntu 11.04 box running Tomcat 6 with openjdk-6-jre. Everything works fine except that I can't get the LDAP to work. I have one AD server and one LDAP server at my disposal for testing. For the LDAP server I use the following info:
    - uri: ldap://192.168.1.11:389
    - user: some_tested_user
    - passwd: the_pass
    - DN: dc=colosa,dc=net
    - LDAP Filter: (&(objectClass=user))
    I tested various LDAP filters; I don't know if I have the right one. However, I get an error when clicking "test LDAP". The error refers to the DN:
    Server service call error
    Calling service TestLdap failed.
    java.lang.RuntimeException: InvalidNameException: [LDAP: error code 34 - invalid DN]
    With the AD server I get no error while testing, yet I am not able to log in: I get "Login failed" every time. I don't know if this is because of the LDAP filter I entered, yet I can't get it to work. I have read http://kunagi.org/iss652.html, which states that I need to create my accounts inside Kunagi before I can log in. So I did this, with no effect. So basically my question is: what causes this DN string error (I am sure mine is right), and what LDAP filter should I use? Any help would be highly appreciated.
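    (One way to check the base DN and filter outside Kunagi is a plain ldapsearch query. The sketch below is only illustrative: the bind DN format for some_tested_user is an assumption and depends on the directory layout:)

        ldapsearch -x -H ldap://192.168.1.11:389 \
            -D "cn=some_tested_user,dc=colosa,dc=net" -w the_pass \
            -b "dc=colosa,dc=net" "(objectClass=user)"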

    Read the article

  • How to prioritize openvpn traffic?

    - by aditsu
    I have an OpenVPN server with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):
    qdisc del dev eth0 root
    qdisc add dev eth0 root handle 1: htb default 12
    class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
    #vpn
    class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
    #local net
    class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
    #other
    class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
    filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
    filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
    qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
    qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
    qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10
    But it's still extremely slow. I have an IMAPS connection that keeps transferring data continuously (I successfully limited its rate), but with OpenVPN I can't seem to get more than about 100 kbit/s. The internet connection speed is about 3 mbit/s (symmetric). What could be the problem? Does the sport filter work for UDP?
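    (As a side note on that last question: in full tc syntax a u32 filter can also be made explicit about the transport protocol. The sketch below classifies UDP source port 1194 into 1:10 -- the same idea as the filter above, just spelled out:)

        # sketch: match UDP (IP protocol 17) and source port 1194 explicitly
        tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
            match ip protocol 17 0xff \
            match ip sport 1194 0xffff \
            flowid 1:10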

    Read the article

  • How do I keep a table in sync across multiple SQL Databases?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses Merge Replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), a WorkerName (varchar), and a ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?
    SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())
    This allows you to chain filters together; the next one sits below the first, so it only pulls from the first's filtered set:
    SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]
    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app." Go figure!

    Read the article

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0 and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces -- in the end, I'd like to end up with a hard 768Kbps limit per-LAN-interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init, and riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb: /etc/sysconfig/htb/eth1 DEFAULT=30 R2Q=100 /etc/sysconfig/htb/eth1-2.root RATE=768Kbps BURST=15k /etc/sysconfig/htb/eth1-2:30.dfl RATE=768Kbps CEIL=788Kbps BURST=15k LEAF=sfq I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working...but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this. ---- Update: Here's my /etc/init.d/htb list output, it seems to make sense -- the default rate for eth1 is 768Kbps? ### eth0: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth0: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b ### eth0: filtering rules filter parent 1: protocol ip pref 100 u32 filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1 filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16 ### eth1: queueing disciplines qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0 qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec ### eth1: traffic classes class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b

    Read the article

  • How are spam e-mails filtered?

    - by kevindqc
    Hello. I'm just wondering how some e-mails get past the spam filter, and some don't? Everyday I get World of Warcraft phishing emails that get past the filter... For example, here's a phishing email (just the header) I got in my inbox, and not in my junk mail: X-Message-Delivery: Vj0xLjE7dXM9MDtsPTA7YT0wO0Q9MjtTQ0w9Ng== X-Message-Status: n:0 X-SID-PRA: [email protected] X-AUTH-Result: NONE X-Message-Info: M98loaK0Lo27IVRxloyPIZmAwUHKn18nx0o/idLdvGYjK48i19NuvFOnRFYGWE+HdIrNJpi1XaYx0gaAV13cgRnkWSzgHKG1 Received: from blizzard.com ([204.45.59.37]) by SNT0-MC3-F21.Snt0.hotmail.com with Microsoft SMTPSVC(6.0.3790.3959); Sat, 10 Apr 2010 06:38:24 -0700 Received: from hxeabjlh ([192.168.1.165]) (envelope-sender <[email protected]>) by 192.168.1.111 with ESMTP for <[email protected]>; Sat, 10 Apr 2010 08:43:24 -0500 Reply-To: <[email protected]> Sender: [email protected] Message-ID: <DE567AFB9E2F3DD985A2D9A8D12D2917@hxeabjlh> From: "[email protected]" <[email protected]> To: <[email protected]> Subject: World of Warcraft Account Password verification Date: Sat, 10 Apr 2010 21:38:10 +0800 MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="----=_NextPart_000_04EE_0137659E.1AA23350" X-Priority: 3 X-MSMail-Priority: Normal X-Mailer: Microsoft Outlook Express 6.00.2900.5512 X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2900.5512 Return-Path: [email protected] X-OriginalArrivalTime: 10 Apr 2010 13:38:24.0607 (UTC) FILETIME=[17F3A6F0:01CAD8B3] From what I understand, when you send an email with SMTP, you can specify any hostname in the "HELO" command. Here, the spammer specified "blizzard.com". And he sent his email through Hotmail using Outlook Express. I just don't understand how this gets past the spam filter? There's this SPF thing that seems to exist... but it doesn't seem to be used by blizzard? I'm on Windows, and if I use nslookup to look for the TXT records of blizzard.com and worldofwarcraft.com, I don't see a thing.... so blizzard is not using SPF? Why would that be?
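    (On the SPF question: SPF records are published as DNS TXT records, so they can be checked from Windows with something like the sketch below. A domain that publishes SPF returns a TXT record of the form "v=spf1 ... -all"; if nothing of that form comes back, the domain is indeed not publishing SPF.)

        nslookup -type=TXT blizzard.com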

    Read the article

  • Prevent outgoing traffic unless OpenVPN connection is active using pf.conf on Mac OS X

    - by Nick
    I've been able to deny all connections to external networks unless my OpenVPN connection is active using pf.conf. However, I lose Wi-Fi connectivity if the connection is broken by closing and opening the laptop lid or toggling Wi-Fi off and on again. I'm on Mac OS 10.8.1. I connect to the Web via Wi-Fi (from varying locations, including Internet cafés). The OpenVPN connection is set up with Viscosity. I have the following packet filter rules set up in /etc/pf.conf # Deny all packets unless they pass through the OpenVPN connection wifi=en1 vpn=tun0 block all set skip on lo pass on $wifi proto udp to [OpenVPN server IP address] port 443 pass on $vpn I start the packet filter service with sudo pfctl -e and load the new rules with sudo pfctl -f /etc/pf.conf. I have also edited /System/Library/LaunchDaemons/com.apple.pfctl.plist and changed the line <string>-f</string> to read <string>-ef</string> so that the packet filter launches at system startup. This all seems to works great at first: applications can only connect to the web if the OpenVPN connection is active, so I'm never leaking data over an insecure connection. But, if I close and reopen my laptop lid or turn Wi-Fi off and on again, the Wi-Fi connection is lost, and I see an exclamation mark in the Wi-Fi icon in the status bar. Clicking the Wi-Fi icon shows an "Alert: No Internet connection" message: To regain the connection, I have to disconnect and reconnect Wi-Fi, sometimes five or six times, before the "Alert: No Internet connection" message disappears and I'm able to open the VPN connection again. Other times, the Wi-Fi alert disappears of its own accord, the exclamation mark clears, and I'm able to connect again. Either way, it can take five minutes or more to get a connection again, which can be frustrating. Why does Wi-Fi report "No internet connection" after losing connectivity, and how can I diagnose this issue and fix it?

    Read the article

  • fail2ban regex working but no action being taken

    - by fpghost
    I have the following snippet of fail2ban configuration on an Ubuntu 13.10 server:
    #jail.conf
    [apache-getphp]
    enabled = true
    port = http,https
    filter = apache-getphp
    action = iptables-multiport[name=apache-getphp, port="http,https", protocol=tcp]
             mail-whois[name=apache-getphp, dest=root]
    logpath = /srv/apache/log/access.log
    maxretry = 1
    #filter.d/apache-getphp.conf
    [Definition]
    failregex = ^<HOST> - - (?:\[[^]]*\] )+\"(GET|POST) /(?i)(PMA|phptest|phpmyadmin|myadmin|mysql|mysqladmin|sqladmin|mypma|admin|xampp|mysqldb|mydb|db|pmadb|phpmyadmin1|phpmyadmin2|cgi-bin)
    ignoreregex =
    I know the regex is good, because if I run the test command on my access.log:
    fail2ban-regex /srv/apache/log/access.log /etc/fail2ban/filter.d/apache-getphp.conf
    I get a SUCCESS result with multiple hits, and in my log I see entries like:
    187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpTest/zologize/axa.php HTTP/1.1" 301 585 "-" "-"
    187.192.89.147 - - [13/Apr/2014:11:36:03 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 301 593 "-" "-"
    Secondly, I know email is configured correctly, as each time I run service fail2ban restart I get an email for each of the filters stopping/starting. However, despite all this, no action seems to be taken when one of these requests comes in: no email with whois, and no entries in iptables. What could possibly be preventing fail2ban from taking action? (Everything looks in order in fail2ban-client -d, and I can see the chains have loaded with iptables -L.)
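    (When debugging a case like this, one low-risk check is to confirm the jail is actually loaded and watching the right log, and that its chain holds rules. A sketch, using standard fail2ban-client and iptables commands; the chain name assumes fail2ban's usual fail2ban-<name> convention:)

        # is the jail loaded, which log is it watching, and what has it banned?
        fail2ban-client status apache-getphp
        # does the corresponding chain exist and contain any rules?
        iptables -L fail2ban-apache-getphp -n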

    Read the article

  • Using my own Postfix, filtering spam and getting all the mail into my ISP's inbox

    - by djechelon
    Hello, I currently own a domain bought via GoDaddy.com, which provides me a basic email setup for the most common needs. I configured it to forward all mail for [email protected] to [email protected]. I also own a virtual server with a running Postfix that I use for a specific website (all mail to somedomain.com gets forwarded via LMTP to a program written by me). Since I've recently been experiencing some harassment from spammers, since GoDaddy doesn't seem to filter spam, and since my Windows Phone's Pocket Outlook cannot filter spam, I would like to use SpamAssassin to filter inbound spam by changing my domain's MX records to point to my server. My ideal setup is the following:
    1. All mail delivered to somedomain.com gets redirected via LMTP as usual via the virtual transport, without any spam check.
    2. All mail to [email protected] gets redirected to [email protected] after a severe spam check.
    3. I don't care about [email protected] since I use just one address for now.
    4. I would like to train SpamAssassin with customized spam rules, possibly based on the presence of certain keywords (links to certain unsubscribe pages I found recurring).
    I currently configured Postfix with:
    transport:
    somedomain.com lmtp:[127.0.0.1]:8025
    .somedomain.com error: Cannot accept mail for this domain
    relay:
    somedomain.com OK (I guess I should add mydomain.com OK too)
    virtual:
    @mydomain.com [email protected] (looks like a catch-all rule, which is OK as per requirement 3)
    I installed SpamAssassin; I can do rcspamd start and set it to start at boot, but I don't know if there is anything else to do to use it with Postfix, or how to apply requirement 1 (only mail to mydomain.com gets filtered). I also tried to send an email via Telnet to make sure my settings are ready for the MX change. I received the message into my account, but I found that it went through secureserver.net, as if Postfix didn't rewrite the destination but simply relayed the message. Thank you in advance. I'm no expert in SpamAssassin, and I have little experience with Postfix (enough to avoid making my server an open relay).
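    (For reference, a commonly used way to wire SpamAssassin into Postfix is a content_filter that pipes mail through spamc and re-injects it with sendmail. The lines below are only a sketch of that recipe: the spamd user name and paths vary by distribution, and the exact columns follow the usual master.cf layout:)

        # /etc/postfix/master.cf (sketch)
        # 1) a pipe-based filter service that runs messages through spamc
        spamassassin unix -     n       n       -       -       pipe
          user=spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}

        # 2) hand inbound SMTP mail to that service
        smtp      inet  n       -       n       -       -       smtpd
          -o content_filter=spamassassin

    Filtering only mydomain.com while leaving somedomain.com untouched would additionally need the filter attached selectively (for example via a dedicated smtpd instance or recipient-based checks), which this sketch does not cover.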

    Read the article

  • Problems configuring logstash for email output

    - by user2099762
    I'm trying to configure logstash to send email alerts and log output in elasticsearch / kibana. I have the logs successfully syncing via rsyslog, but I get the following error when I run /opt/logstash-1.4.1/bin/logstash agent -f /opt/logstash-1.4.1/logstash.conf --configtest Error: Expected one of #, {, ,, ] at line 23, column 12 (byte 387) after filter { if [program] == "nginx-access" { grok { match = [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} [%{HTTPDATE:time_local}] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded = false host = " Here is my logstash config file input { syslog { type => syslog port => 5544 } } filter { if [program] == "nginx-access" { grok { match => [ "message" , "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[% {HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}” ] } } } output { stdout { } elasticsearch { embedded => false host => "localhost" cluster => "cluster01" } email { from => "[email protected]" match => [ "Error 504 Gateway Timeout", "status,504", "Error 404 Not Found", "status,404" ] subject => "%{matchName}" to => "[email protected]" via => "smtp" body => "Here is the event line that occured: %{@message}" htmlbody => "<h2>%{matchName}</h2><br/><br/><h3>Full Event</h3><br/><br/><div align='center'>%{@message}</div>" } } I've checked line 23 which is referenced in the error and it looks fine....I've tried taking out the filter, and everything works...without changing that line. Please help
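    (One thing worth noting about the paste above: the grok pattern string ends with a typographic closing quote (”) rather than a straight double quote, and the first excerpt also shows = instead of =>; either will trip the config parser at that line. A plain-ASCII re-typing of the same filter block -- just a sketch of the pattern as written, not a tested config -- would look like:)

        filter {
          if [program] == "nginx-access" {
            grok {
              match => [ "message", "%{IPORHOST:remote_addr} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] %{QS:request} %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}" ]
            }
          }
        }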

    Read the article

  • ffmpeg cutting video duration

    - by Steve Spence
    When using ffmpeg on Linux, my 4.3 GB, 2:21-long video is being chopped down to a 1:56 duration. I'm trying to reduce the file size, but not lose frames.
    steve@steve-OptiPlex-170L:~/Desktop$ ffmpeg -i microbe.avi microbe.mp4
    ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
    built on Jun 12 2012 16:37:58 with gcc 4.6.3
    * THIS PROGRAM IS DEPRECATED *
    This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
    Input #0, avi, from 'microbe.avi':
    Duration: 00:02:21.80, start: 0.000000, bitrate: 242311 kb/s
    Stream #0.0: Video: rawvideo, bgr24, 1280x960, 10 tbr, 10 tbn, 10 tbc
    Incompatible pixel format 'bgr24' for codec 'mpeg4', auto-selecting format 'yuv420p'
    [buffer @ 0x9f861e0] w:1280 h:960 pixfmt:bgr24
    [avsink @ 0x9f86440] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
    [scale @ 0x9f7d800] w:1280 h:960 fmt:bgr24 - w:1280 h:960 fmt:yuv420p flags:0x4
    Output #0, mp4, to 'microbe.mp4':
    Metadata:
    encoder : Lavf53.21.0
    Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 10 tbn, 10 tbc
    Stream mapping:
    Stream #0.0 - #0.0
    Press ctrl-c to stop encoding
    frame= 1164 fps= 6 q=31.0 Lsize= 3775kB time=116.40 bitrate= 265.7kbits/s
    video:3765kB audio:0kB global headers:0kB muxing overhead 0.272870%
    steve@steve-OptiPlex-170L:~/Desktop$
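    (For the file-size goal itself, a common approach is to encode with x264 at a constant rate factor instead of the default mpeg4 at 200 kb/s. The command below is only a sketch -- it assumes libx264 is available in this ffmpeg/avconv build and it does not by itself explain the missing frames:)

        # sketch: H.264 at CRF 23, forcing the source's 10 fps on the output
        ffmpeg -i microbe.avi -vcodec libx264 -crf 23 -r 10 microbe.mp4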

    Read the article

  • How do I keep a table in Sync across 4 db's to be used in SQL Replication Filtering?

    - by Refracted Paladin
    I have a WinForms data entry application that uses 4 separate databases. This is an occasionally connected app that uses Merge Replication (SQL 2005) to stay in sync. This is working just fine. The next hurdle I am trying to tackle is adding filters to my publications. Right now we are replicating 70 MB, compressed, to each of our 150 subscribers when, truthfully, they only need a tiny fraction of that. Using filters I am able to accomplish this (see code below), but I had to make a mapping table in order to do so. This mapping table consists of 3 columns: a PrimaryID (guid), a WorkerName (varchar), and a ClientID (int). The problem is that I need this table present in all FOUR databases in order to use it for the filter since, to my knowledge, views or cross-db queries are not allowed in a filter statement. What are my options? It seems like I would set it up to be maintained in 1 database and then use triggers to keep it updated in the other 3 databases. In order to be a part of the filter I have to include that table in the replication set, so how do I flag it appropriately? Is there a better way altogether?
    SELECT <published_columns> FROM [dbo].[tblPlan] WHERE [ClientID] IN (select ClientID from [dbo].[tblWorkerOwnership] where WorkerID = SUSER_SNAME())
    This allows you to chain filters together; the next one sits below the first, so it only pulls from the first's filtered set:
    SELECT <published_columns> FROM [dbo].[tblPlan] INNER JOIN [dbo].[tblHealthAssessmentReview] ON [tblPlan].[PlanID] = [tblHealthAssessmentReview].[PlanID]
    P.S. - I know how illogical the DB structure sounds. I didn't make it. I inherited it and was then told to make it a "disconnected app."

    Read the article

  • Parsing nested JSON objects with JSON Framework for Objective-C

    - by Sheehan Alam
    I have the following JSON object: { "response": { "status": 200 }, "messages": [ { "message": { "user": "value" "pass": "value", "url": "value" } ] } } I am using JSON-Framework (also tried JSON Touch) to parse through this and create a dictionary. I want to access the "message" block and pull out the "user", "pass" and "url" values. In Obj-C I have the following code: // Create new SBJSON parser object SBJSON *parser = [[SBJSON alloc] init]; // Prepare URL request to download statuses from Twitter NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:myURL]]; // Perform request and get JSON back as a NSData object NSData *response = [NSURLConnection sendSynchronousRequest:request returningResponse:nil error:nil]; // Get JSON as a NSString from NSData response NSString *json_string = [[NSString alloc] initWithData:response encoding:NSUTF8StringEncoding]; //Print contents of json-string NSArray *statuses = [parser objectWithString:json_string error:nil]; NSLog(@"Array Contents: %@", [statuses valueForKey:@"messages"]); NSLog(@"Array Count: %d", [statuses count]); NSDictionary *results = [json_string JSONValue]; NSArray *tweets = [[results objectForKey:@"messages"] objectForKey:@"message"]; for (NSDictionary *tweet in tweets) { NSString *url = [tweet objectForKey:@"url"]; NSLog(@"url is: %@",url); } I can pull out "messages" and see all of the "message" blocks, but I am unable to parse deeper and pull out the "user", "pass", and "url".
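    (Given that structure -- "messages" is an array whose elements each wrap a "message" dictionary -- the inner values are reached by iterating the array rather than calling objectForKey: on it. A minimal sketch, reusing the results dictionary from the question:)

        // "messages" is an NSArray of wrapper dictionaries, each holding one "message" dictionary
        NSArray *messages = [results objectForKey:@"messages"];
        for (NSDictionary *wrapper in messages) {
            NSDictionary *message = [wrapper objectForKey:@"message"];
            NSString *user = [message objectForKey:@"user"];
            NSString *pass = [message objectForKey:@"pass"];
            NSString *url  = [message objectForKey:@"url"];
            NSLog(@"user: %@, pass: %@, url: %@", user, pass, url);
        }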

    Read the article

  • HttpWebRequest sessionID c# login

    - by warne
    I'm trying to log in to a website (www.vodafone.ie) with a console app and C# HttpWebRequest. The problem is it only works about 50% of the time. I'm using Fiddler to find out the GET and POST requests I need to make. I've done that, and my app is successfully recreating these as best as I can see. The steps are:
    1) GET request with a cookie container to the login URI; the server response sets a new cookie called jsessionID.
    2) Do a POST request with the login credentials and the same cookie container containing the previous jsessionID.
    Looking at the Fiddler logs for a successful POST login (browser or my app), I see it sets a thing in the response header: "Set-cookie: supercookie=-; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=". What is this supercookie thing? It's not returned to me in the response cookie collection like the jsessionID is. On rare occasions there is a long string of numbers with the supercookie instead of just "-". I made sure to clear all cookies before analyzing the request/response headers. If the supercookie thing is not being set in the response, my login fails. So I'm just wondering what's going on here? Cheers!
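    (For reference, the two-step flow described above usually looks something like the sketch below in C#. The URLs and form field names are placeholders, not Vodafone's actual login form:)

        using System;
        using System.Net;
        using System.Text;

        // Sketch: share one CookieContainer across the GET and the POST so the
        // JSESSIONID issued by the first response is sent back with the login.
        var cookies = new CookieContainer();

        // Step 1: GET the login page; the server sets JSESSIONID into 'cookies'.
        var get = (HttpWebRequest)WebRequest.Create("https://www.example.com/login");
        get.CookieContainer = cookies;
        using (get.GetResponse()) { }

        // Step 2: POST the credentials with the same cookie container.
        var post = (HttpWebRequest)WebRequest.Create("https://www.example.com/login");
        post.Method = "POST";
        post.ContentType = "application/x-www-form-urlencoded";
        post.CookieContainer = cookies;
        byte[] body = Encoding.UTF8.GetBytes("username=me&password=secret");
        post.ContentLength = body.Length;
        using (var stream = post.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }
        using (var response = (HttpWebResponse)post.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }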

    Read the article

  • python unhashable type - posting xml data

    - by eterry28
    First, I'm not a Python programmer. I'm an old C dog that's learned new Java and PHP tricks, but Python looks like a pretty cool language. I'm getting an error that I can't quite follow. The error follows the code below.

    import httplib, urllib

    url = "pdb-services-beta.nipr.com"
    xml = '<?xml version="1.0"?><!DOCTYPE SCB_Request SYSTEM "http://www.nipr.com/html/SCB_XML_Request.dtd"><SCB_Request Request_Type="Create_Report"><SCB_Login_Data CustomerID="someuser" Passwd="somepass" /><SCB_Create_Report_Request Title=""><Producer_List><NIPR_Num_List_XML><NIPR_Num NIPR_Num="8980608" /><NIPR_Num NIPR_Num="7597855" /><NIPR_Num NIPR_Num="10166016" /></NIPR_Num_List_XML></Producer_List></SCB_Create_Report_Request></SCB_Request>'

    params = {}
    params['xmldata'] = xml

    headers = {}
    headers['Content-type'] = 'text/xml'
    headers['Accept'] = '*/*'
    headers['Content-Length'] = "%d" % len(xml)

    connection = httplib.HTTPSConnection(url)
    connection.set_debuglevel(1)
    connection.request("POST", "/pdb-xml-reports/scb_xmlclient.cgi", params, headers)
    response = connection.getresponse()
    print response.status, response.reason
    data = response.read()
    print data
    connection.close

    Here's the error:

    Traceback (most recent call last):
      File "C:\Python27\tutorial.py", line 14, in <module>
        connection.request("POST", "/pdb-xml-reports/scb_xmlclient.cgi", params, headers)
      File "C:\Python27\lib\httplib.py", line 958, in request
        self._send_request(method, url, body, headers)
      File "C:\Python27\lib\httplib.py", line 992, in _send_request
        self.endheaders(body)
      File "C:\Python27\lib\httplib.py", line 954, in endheaders
        self._send_output(message_body)
      File "C:\Python27\lib\httplib.py", line 818, in _send_output
        self.send(message_body)
      File "C:\Python27\lib\httplib.py", line 790, in send
        self.sock.sendall(data)
      File "C:\Python27\lib\ssl.py", line 229, in sendall
        v = self.send(data[count:])
    TypeError: unhashable type

    My log file says that the xmldata parameter is empty. Any ideas?

    Read the article

  • Xdebug: remote debugging won't stop at breakpoints

    - by Bryan M.
    I'm having a problem with Xdebug not stopping at breakpoints when remote debugging (everything is fine when running scripts via the command line). It will break at the first line of the program, then exit, without catching any breakpoints. It used to work fine until I switched over to using MacPorts for Apache and PHP. I've tried re-compiling it several times (with several versions), but no dice. I'm using PHP 5.3.1 and Xdebug 2.1.0-beta3. I've also tried at least three different debugging clients (MacGDBp, NetBeans and JetBrains Web IDE). My php.ini settings look like:

    [xdebug]
    xdebug.remote_enable=1
    xdebug.remote_handler=dbgp
    xdebug.remote_mode=req
    xdebug.remote_port=9000
    xdebug.remote_host=localhost
    xdebug.idekey=webide

    And when I log the debugger output, setting a breakpoint looks like this:

    <- breakpoint_set -i 895 -t line -f file:///Users/WM_imac/Sites/wm/debug_test.php -n 13 -s enabled
    -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="breakpoint_set" transaction_id="895" state="enabled" id="890660002"></response>

    When run, the debugger will get the context of the first line of the application, then send the detach and stop messages. However, this line is output when starting the debugger:

    <- feature_get -i 885 -n breakpoint_types
    -> <response xmlns="urn:debugger_protocol_v1" xmlns:xdebug="http://xdebug.org/dbgp/xdebug" command="feature_get" transaction_id="885" feature_name="breakpoint_types" supported="1"><![CDATA[line conditional call return exception]]></response>

    Does 'line conditional call return exception' mean anything?

    Read the article

  • How to use Tor control protocol in C#?

    - by Ed
    I'm trying to send commands to the Tor control port programmatically to make it refresh the chain. I haven't been able to find any examples in C#, and my solution isn't working: the request times out. I have the service running, and I can see it listening on the control port.

    public string Refresh()
    {
        TcpClient client = new TcpClient("localhost", 9051);
        string response = string.Empty;
        string authenticate = MakeTcpRequest("AUTHENTICATE", client);
        if (authenticate.Equals("250"))
            response = MakeTcpRequest("SIGNAL NEWNYM", client);
        client.Close();
        return response;
    }

    public string MakeTcpRequest(string message, TcpClient client)
    {
        client.ReceiveTimeout = 20000;
        client.SendTimeout = 20000;
        string proxyResponse = string.Empty;
        try
        {
            // Send message
            StreamWriter streamWriter = new StreamWriter(client.GetStream());
            streamWriter.Write(message);
            streamWriter.Flush();

            // Read response
            StreamReader streamReader = new StreamReader(client.GetStream());
            proxyResponse = streamReader.ReadToEnd();
        }
        catch (Exception ex)
        {
            // Ignore
        }
        return proxyResponse;
    }

    Can anyone spot what I'm doing wrong?
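    For comparison, here is a minimal sketch of the same exchange that terminates each command with CRLF and reads the reply line by line instead of with ReadToEnd (which only returns once the remote side closes the connection). It assumes a default setup where the control port is 9051 and a password-less AUTHENTICATE is accepted:

    using System;
    using System.IO;
    using System.Net.Sockets;

    class TorControlSketch
    {
        static void Main()
        {
            using (var client = new TcpClient("localhost", 9051))
            using (var stream = client.GetStream())
            {
                var writer = new StreamWriter(stream) { AutoFlush = true };
                var reader = new StreamReader(stream);

                // Control-port commands are line-based and end with CRLF.
                writer.Write("AUTHENTICATE\r\n");
                string reply = reader.ReadLine();           // expected: "250 OK"
                Console.WriteLine(reply);

                if (reply != null && reply.StartsWith("250"))
                {
                    writer.Write("SIGNAL NEWNYM\r\n");
                    Console.WriteLine(reader.ReadLine());   // expected: "250 OK"
                }

                writer.Write("QUIT\r\n");
            }
        }
    }

    If the torrc sets HashedControlPassword or CookieAuthentication, the AUTHENTICATE command needs the corresponding credential instead of being sent bare.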

    Read the article

  • c# WebRequest to connect to wikipedia API

    - by NickJ
    Hey, this may be a pathetically simple problem, but I cannot seem to format the POST WebRequest/response to get data from the Wikipedia API. I have posted my code below if anyone can help me see my problem.

    string pgTitle = txtPageTitle.Text;
    Uri address = new Uri("http://en.wikipedia.org/w/api.php");
    HttpWebRequest request = WebRequest.Create(address) as HttpWebRequest;
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";

    string action = "query";
    string query = pgTitle;

    StringBuilder data = new StringBuilder();
    data.Append("action=" + HttpUtility.UrlEncode(action));
    data.Append("&query=" + HttpUtility.UrlEncode(query));

    byte[] byteData = UTF8Encoding.UTF8.GetBytes(data.ToString());
    request.ContentLength = byteData.Length;

    using (Stream postStream = request.GetRequestStream())
    {
        postStream.Write(byteData, 0, byteData.Length);
    }

    using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
    {
        // Get the response stream
        StreamReader reader = new StreamReader(response.GetResponseStream());
        divWikiData.InnerText = reader.ReadToEnd();
    }
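    For reference, here is a minimal console-app sketch of the same POST, but sending parameter names the MediaWiki API recognizes (action, format, titles) rather than a bare query parameter. The page title, the prop=info choice, and the User-Agent string are placeholders picked for the example:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class WikiApiSketch
    {
        static void Main()
        {
            string pgTitle = "Stack Overflow";   // placeholder page title

            var request = (HttpWebRequest)WebRequest.Create("http://en.wikipedia.org/w/api.php");
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.UserAgent = "MyWikiClient/0.1 (contact@example.com)";   // placeholder identification

            // The query module takes the page name in 'titles' and wants an explicit 'format'.
            // Uri.EscapeDataString stands in for HttpUtility.UrlEncode outside a web project.
            string data = "action=query&prop=info&format=xml&titles=" + Uri.EscapeDataString(pgTitle);
            byte[] byteData = Encoding.UTF8.GetBytes(data);
            request.ContentLength = byteData.Length;

            using (Stream postStream = request.GetRequestStream())
            {
                postStream.Write(byteData, 0, byteData.Length);
            }

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }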

    Read the article

  • Radio button formatting in IE8 (not displaying correctly)

    - by Kevin
    I'm having a problem getting my radio buttons (and checkboxes) laid out correctly in IE8. Firefox, Chrome, and Opera are all working, however. Here is a screenshot of the problem. The code is below:

    <label for="AdditionalResponses_0__Response" id="AdditionalResponses_0__Response_Label">Single answer</label> <div class="row " id="AdditionalResponses_0__Response"> <input id="AdditionalResponses_0__Response_one" name="AdditionalResponses[0].Response" type="radio" value="one" /> <label for="AdditionalResponses_0__Response_one" id="AdditionalResponses_0__Response_one_Label">one</label> <input id="AdditionalResponses_0__Response_two" name="AdditionalResponses[0].Response" type="radio" value="two" /> <label for="AdditionalResponses_0__Response_two" id="AdditionalResponses_0__Response_two_Label">two</label> <input id="AdditionalResponses_0__Response_three" name="AdditionalResponses[0].Response" type="radio" value="three" /> <label for="AdditionalResponses_0__Response_three" id="AdditionalResponses_0__Response_three_Label">three</label> <input id="AdditionalResponses_0__Response_four" name="AdditionalResponses[0].Response" type="radio" value="four" /> <label for="AdditionalResponses_0__Response_four" id="AdditionalResponses_0__Response_four_Label">four</label> </div>

    Sorry for the one long line, but that's how I got it through the source. Here is the CSS:

    .row input (line 471) { float: left; display: inline; width: 16px; height: 16px; margin-top: 0pt; margin-right: 5px; margin-bottom: 0pt; margin-left: 0pt; }

    .row label (line 479) { float: none; font-weight: normal; font-size: 12px; line-height: 16px; }

    div.panes label (line 70) { font-size: 95%; font-weight: bold; color: #222222; line-height: 150%; padding-bottom: 3px; display: block; }

    Read the article
