Search Results

Search found 49170 results on 1967 pages for 'running objects'.

Page 188/1967

  • Find out which object being added to NSMutableArray is nil

    - by Raphael Caixeta
    I started a project using ARC, and I'm inserting a few objects into an NSMutableArray. The objects have all started out as NSStrings, and when attempting to add these objects into the array, I get the following error: Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: ' -[__NSArrayM insertObject:atIndex:]: object cannot be nil This array is holding several objects. Is there a quick way for me to find which of the objects I'm attempting to put into the array is nil?
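    A minimal sketch of one way to pinpoint the nil value (not from the original question; the variable names below are placeholders): substitute NSNull for nil candidates and log the offending key before the array ever sees it.

        // Hypothetical string variables standing in for the poster's values.
        NSMutableArray *items = [NSMutableArray array];
        NSDictionary *candidates = @{ @"firstName" : firstName ?: [NSNull null],
                                      @"lastName"  : lastName  ?: [NSNull null],
                                      @"email"     : email     ?: [NSNull null] };
        [candidates enumerateKeysAndObjectsUsingBlock:^(NSString *key, id value, BOOL *stop) {
            if (value == [NSNull null]) {
                // This is the value that would have triggered insertObject:atIndex: to throw.
                NSLog(@"Value for %@ is nil; skipping it instead of crashing", key);
            } else {
                [items addObject:value];
            }
        }];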

    Read the article

  • HP ProLiant DL380 G3 Running Windows Server 2000 has crashed between 6-7:30am for the past 5 days

    - by user109717
    I have an HP ProLiant DL380 G3 running Windows Server 2000 that has been crashing every day between 6:00 and 7:30 am. This started when I changed out a failing hard drive 6 days ago. I have looked at the scheduled tasks, which do not have anything pertaining to this issue. Below are the only things I see in the system log and some of the dump files. Can this be a hardware issue if it happens in the same time frame every day? Any help is greatly appreciated. Thanks.

    From the system log:

        The previous system shutdown at 6:07:55 AM on 2/7/2012 was unexpected. System Information Agent: Health: The server is operational again. The server has previously been shutdown by the Automatic Server Recovery (ASR) feature and has just become operational again. [SNMP TRAP: 6025 in CPQHLTH.MIB]

    First dump file:

        BugCheck 7A, {3, c0000005, 3400028, 0} Probably caused by : memory_corruption ( nt!MiMakeSystemAddressValidPfn+42 ) Followup: MachineOwner 0: kd !analyze -v * Bugcheck Analysis * * KERNEL_DATA_INPAGE_ERROR (7a) The requested page of kernel data could not be read in. Typically caused by a bad block in the paging file or disk controller error. Also see KERNEL_STACK_INPAGE_ERROR. If the error status is 0xC000000E, 0xC000009C, 0xC000009D or 0xC0000185, it means the disk subsystem has experienced a failure. If the error status is 0xC000009A, then it means the request failed because a filesystem failed to make forward progress. Arguments: Arg1: 00000003, lock type that was held (value 1,2,3, or PTE address) Arg2: c0000005, error status (normally i/o status code) Arg3: 03400028, current process (virtual address for lock type 3, or PTE) Arg4: 00000000, virtual address that could not be in-paged (or PTE contents if arg1 is a PTE address) MODULE_NAME: nt IMAGE_NAME: memory_corruption

    Second dump file:

        BugCheck A, {0, 2, 1, 804137d6} Probably caused by : ntkrnlmp.exe ( nt!CcGetVirtualAddress+ba ) * Bugcheck Analysis * * IRQL_NOT_LESS_OR_EQUAL (a) An attempt was made to access a pageable (or completely invalid) address at an interrupt request level (IRQL) that is too high. This is usually caused by drivers using improper addresses. If a kernel debugger is available get the stack backtrace. Arguments: Arg1: 00000000, memory referenced Arg2: 00000002, IRQL Arg3: 00000001, bitfield : bit 0 : value 0 = read operation, 1 = write operation bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status) Arg4: 804137d6, address which referenced memory MODULE_NAME: nt IMAGE_NAME: ntkrnlmp.exe

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
    We gave access to a contractor to install a firewall and somehow while he was doing it he fracked something up. Everything went off-line about 24 hours ago and we are effectively out of business until I solve this and the person who messed up the thing is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files and normally everything runs fine. All 'services' are running according to 1and1 server monitoring and mail is being delivered just fine. The whole thing was off-line until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning and I got everything back except the http access. All the domains (~200 of them) are returning a 403 error and nothing is recorded in the access log. On every restart I see this error in the messages log file: init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory and a little later these: kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted) kernel: Hardware name: X9SCL/X9SCM kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan] kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1 kernel: Call Trace: kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0 kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20 kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50 kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0 kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0 kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1] kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0 kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20 kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150 kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40 kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150 kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0 kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20 kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0 kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20 And something is wrong with the Named/BIND resulting in the same error for all domains: zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found zone DOMAINEXAMPLE.com/IN: not loaded due to errors. _default/DOMAINEXAMPLE.com/IN: file not found I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
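    As a side note on the BIND errors quoted above, "file not found" means named cannot open the zone file referenced in the zone stanza. A minimal, hypothetical stanza for comparison (the path is an assumption); the fix is making sure the path named in "file" actually exists on disk, or restoring zone files that were moved during the firewall work:

        zone "DOMAINEXAMPLE.com" IN {
            type master;
            file "/var/named/DOMAINEXAMPLE.com";   // assumed location; point this at the real zone file
        };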

    Read the article

  • puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • Why do the Cucumber features keep running even though they fail?

    - by Millisami
    Its a rails 2.3.5 app. I'm using the rspec and cucumber for testing. When I run autospec, it runs correctly with the warning (Not running features. To run features in autotest, set AUTOFEATURE=true.) as below: [~/rails_apps/automation (campaign)?] ? autospec (Not running features. To run features in autotest, set AUTOFEATURE=true.) (Not running features. To run features in autotest, set AUTOFEATURE=true.) loading autotest/rails_rspec /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/pathname.rb:263: warning: `*' interpreted as argument prefix /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/bin/ruby /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/rspec-1.3.0/bin/spec --autospec /home/millisami/rails_apps/automation/spec/controllers/campaigns_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/board_spec.rb /home/millisami/rails_apps/automation/spec/models/user_spec.rb /home/millisami/rails_apps/automation/spec/models/campaign_spec.rb /home/millisami/rails_apps/automation/spec/controllers/outlets_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/boards_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_type_spec.rb /home/millisami/rails_apps/automation/spec/models/vendor_spec.rb /home/millisami/rails_apps/automation/spec/controllers/brands_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/vendors_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/dashboard_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/brand_spec.rb /home/millisami/rails_apps/automation/spec/helpers/dashboard_helper_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_spec.rb /home/millisami/rails_apps/automation/spec/models/client_spec.rb /home/millisami/rails_apps/automation/spec/controllers/clients_controller_spec.rb -O spec/spec.opts Now, as it suggests, when I run AUTOFEATURE=true autospec, the specs runs and the cuke features as well. But the problem is that it won't stop. It runs the features and runs them again and again in a loop. It doesn't stop after it fails. Is this due to the warning Warning: $KCODE is NONE as shown below?? [~/rails_apps/automation (campaign)?] ? AUTOFEATURE=true autospec loading autotest/cucumber_rails_rspec Warning: $KCODE is NONE. /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/treetop-1.4.5/lib/treetop/ruby_extensions/string.rb:31: warning: method redefined; discarding old indent /home/millisami/.rvm/rubies/ree-1.8.7-2010.01/lib/ruby/1.8/pathname.rb:263: warning: `*' interpreted as argument prefix /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/activesupport-2.3.5/lib/active_support/core_ext/object/blank.rb:49: warning: method redefined; discarding old blank? 
/home/millisami/.rvm/rubies/ree-1.8.7-2010.01/bin/ruby /home/millisami/.rvm/gems/ree-1.8.7-2010.01/gems/rspec-1.3.0/bin/spec --autospec /home/millisami/rails_apps/automation/spec/controllers/campaigns_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/board_spec.rb /home/millisami/rails_apps/automation/spec/models/user_spec.rb /home/millisami/rails_apps/automation/spec/models/campaign_spec.rb /home/millisami/rails_apps/automation/spec/controllers/outlets_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/boards_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_type_spec.rb /home/millisami/rails_apps/automation/spec/models/vendor_spec.rb /home/millisami/rails_apps/automation/spec/controllers/brands_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/vendors_controller_spec.rb /home/millisami/rails_apps/automation/spec/controllers/dashboard_controller_spec.rb /home/millisami/rails_apps/automation/spec/models/brand_spec.rb /home/millisami/rails_apps/automation/spec/helpers/dashboard_helper_spec.rb /home/millisami/rails_apps/automation/spec/models/outlet_spec.rb /home/millisami/rails_apps/automation/spec/models/client_spec.rb /home/millisami/rails_apps/automation/spec/controllers/clients_controller_spec.rb -O spec/spec.opts
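    One hedged sketch of a common way to stop autotest/autospec from re-triggering itself in a loop (not from the original post): tell it to ignore files that the test run itself writes, such as logs and Cucumber's rerun file, so its own output is not treated as a source change. The paths below are assumptions; they go in a project-level .autotest file.

        # .autotest (Ruby) -- exclude files modified by the run itself so autotest
        # does not re-run the suite every time its own output changes.
        Autotest.add_hook :initialize do |autotest|
          %w[log/ tmp/ rerun.txt coverage/].each do |exception|
            autotest.add_exception(exception)
          end
          false
        end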

    Read the article

  • How to get objects to react to touches in Cocos2D?

    - by Wayfarer
    Alright, so I'm starting to learn more about Coco2D, but I'm kinda frusterated. A lot of the tutorials I have found are for outdated versions of the code, so when I look through and see how they do certain things, I can't translate it into my own program, because a lot has changed. With that being said, I am working in the latest version of Coco2d, version 0.99. What I want to do is create a sprite on the screen (Done) and then when I touch that sprite, I can have "something" happen. For now, let's just make an alert go off. Now, I got this code working with the help of a friend. Here is the header file: // When you import this file, you import all the cocos2d classes #import "cocos2d.h" // HelloWorld Layer @interface HelloWorld : CCLayer { CGRect spRect; } // returns a Scene that contains the HelloWorld as the only child +(id) scene; @end And here is the implementation file: // // cocos2d Hello World example // http://www.cocos2d-iphone.org // // Import the interfaces #import "HelloWorldScene.h" #import "CustomCCNode.h" // HelloWorld implementation @implementation HelloWorld +(id) scene { // 'scene' is an autorelease object. CCScene *scene = [CCScene node]; // 'layer' is an autorelease object. HelloWorld *layer = [HelloWorld node]; // add layer as a child to scene [scene addChild: layer]; // return the scene return scene; } // on "init" you need to initialize your instance -(id) init { // always call "super" init // Apple recommends to re-assign "self" with the "super" return value if( (self=[super init] )) { // create and initialize a Label CCLabel* label = [CCLabel labelWithString:@"Hello World" fontName:@"Times New Roman" fontSize:64]; // ask director the the window size CGSize size = [[CCDirector sharedDirector] winSize]; // position the label on the center of the screen label.position = ccp( size.width /2 , size.height/2 ); // add the label as a child to this Layer [self addChild: label]; CCSprite *sp = [CCSprite spriteWithFile:@"test2.png"]; sp.position = ccp(300,200); [self addChild:sp]; float w = [sp contentSize].width; float h = [sp contentSize].height; CGPoint aPoint = CGPointMake([sp position].x - (w/2), [sp position].y - (h/2)); spRect = CGRectMake(aPoint.x, aPoint.y, w, h); CCSprite *sprite2 = [CCSprite spriteWithFile:@"test3.png"]; sprite2.position = ccp(100,100); [self addChild:sprite2]; //[self registerWithTouchDispatcher]; self.isTouchEnabled = YES; } return self; } // on "dealloc" you need to release all your retained objects - (void) dealloc { // in case you have something to dealloc, do it in this method // in this particular example nothing needs to be released. // cocos2d will automatically release all the children (Label) // don't forget to call "super dealloc" [super dealloc]; } - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; //CGPoint location = [[CCDirector sharedDirector] convertCoordinate:[touch locationInView:touch.view]]; CGPoint location = [touch locationInView:[touch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; if (CGRectContainsPoint(spRect, location)) { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Win" message:@"testing" delegate:nil cancelButtonTitle:@"okay" otherButtonTitles:nil]; [alert show]; [alert release]; NSLog(@"TOUCHES"); } NSLog(@"Touch got"); } However, this only works for 1 object, the sprite which I create the CGRect for. I can't do it for 2 sprites, which I was testing. 
So my question is this: how can I have all sprites on the screen react to the same event when touched? For my program, the same event needs to run for all objects of the same type, so that should make it a tad easier. I tried making a subclass of CCNode and overriding the method, but that just didn't work at all... so I'm doing something wrong. Help would be appreciated!
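    A minimal sketch of one way to do this (not the poster's code, and the details are assumptions): instead of caching a single CGRect, hit-test every sprite child with its boundingBox inside ccTouchesEnded, so any sprite of the same type reacts to the touch.

        - (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [[CCDirector sharedDirector] convertToGL:[touch locationInView:[touch view]]];
            // Walk the layer's children and react for every sprite whose bounds contain the touch.
            for (CCNode *child in [self children]) {
                if ([child isKindOfClass:[CCSprite class]] &&
                    CGRectContainsPoint([child boundingBox], location)) {
                    NSLog(@"Sprite %@ was touched", child);
                    // react here: run an action, show the alert, etc.
                }
            }
        }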

    Read the article

  • RAILS :"session contains objects whose class definition isn\'t available."

    - by Surya
    "Session contains objects whose class definition isn't available. Remember to require the classes for all objects kept in the session."

    I am trying to integrate http://github.com/binarylogic/authlogic for authentication into my Rails application. I followed all the steps mentioned in the documentation. Now I seem to be getting this error when I hit a controller. It looks like I am missing something obvious.

    Stacktrace:

    /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:77:in `stale_session_check!' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:61:in `load!' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/session/abstract_store.rb:28:in `[]' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/session.rb:48:in `session_credentials' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/session.rb:33:in `persist_by_session' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `send' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `evaluate_method' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:166:in `call' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:93:in `run' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:92:in `each' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:92:in `send' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:92:in `run' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:276:in `run_callbacks' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/callbacks.rb:79:in `persist' /Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/persistence.rb:55:in `persisting?' 
/Library/Ruby/Gems/1.8/gems/authlogic-2.1.3/lib/authlogic/session/persistence.rb:39:in `find' /Users/suryagaddipati/myprojects/groceryplanner/app/controllers/application_controller.rb:12:in `current_user_session' /Users/suryagaddipati/myprojects/groceryplanner/app/controllers/application_controller.rb:17:in `current_user' /Users/suryagaddipati/myprojects/groceryplanner/app/controllers/application_controller.rb:30:in `require_no_user' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `send' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:178:in `evaluate_method' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/callbacks.rb:166:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:225:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:629:in `run_before_filters' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:615:in `call_filters' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:610:in `perform_action_without_benchmark' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:10:in `realtime' /Library/Ruby/Gems/1.8/gems/activesupport-2.3.5/lib/active_support/core_ext/benchmark.rb:17:in `ms' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/rescue.rb:160:in `perform_action_without_flash' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/flash.rb:146:in `perform_action' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `send' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:532:in `process_without_filters' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/filters.rb:606:in `process' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:391:in `process' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/base.rb:386:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/routing/route_set.rb:437:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:87:in `dispatch' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:121:in `_call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/dispatcher.rb:130:in `build_middleware_stack' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:29:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/query_cache.rb:34:in `cache' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:9:in `cache' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/query_cache.rb:28:in `call' /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:361:in `call' /Library/Ruby/Gems/1.8/gems/actionpack-2.3.5/lib/action_controller/string_coercion.rb:25:in `call' 
/Users/suryagaddipati/.gem/ruby/1.8/gems/rack-1.0.1/lib/rack/head.rb:9:in `call'
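    A small sketch of the approach the error message itself suggests (whether User and UserSession are the classes actually sitting in this session is an assumption): require the model classes up front so they are defined before the session is deserialized, for example at the top of app/controllers/application_controller.rb.

        # application_controller.rb (Rails 2.3.x) -- load the classes that may be stored
        # in the session before the session store tries to unmarshal them.
        require_dependency 'user'
        require_dependency 'user_session'

        class ApplicationController < ActionController::Base
          helper :all
          protect_from_forgery
        end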

    Read the article

  • How to use objects as modules/functors in Scala?

    - by Jeff
    Hi. I want to use object instances as modules/functors, more or less as shown below: abstract class Lattice[E] extends Set[E] { val minimum: E val maximum: E def meet(x: E, y: E): E def join(x: E, y: E): E def neg(x: E): E } class Calculus[E](val lat: Lattice[E]) { abstract class Expr case class Var(name: String) extends Expr {...} case class Val(value: E) extends Expr {...} case class Neg(e1: Expr) extends Expr {...} case class Cnj(e1: Expr, e2: Expr) extends Expr {...} case class Dsj(e1: Expr, e2: Expr) extends Expr {...} } So that I can create a different calculus instance for each lattice (the operations I will perform need the information of which are the maximum and minimum values of the lattice). I want to be able to mix expressions of the same calculus but not be allowed to mix expressions of different ones. So far, so good. I can create my calculus instances, but problem is that I can not write functions in other classes that manipulate them. For example, I am trying to create a parser to read expressions from a file and return them; I also was trying to write an random expression generator to use in my tests with ScalaCheck. Turns out that every time a function generates an Expr object I can't use it outside the function. Even if I create the Calculus instance and pass it as an argument to the function that will in turn generate the Expr objects, the return of the function is not recognized as being of the same type of the objects created outside the function. Maybe my english is not clear enough, let me try a toy example of what I would like to do (not the real ScalaCheck generator, but close enough). def genRndExpr[E](c: Calculus[E], level: Int): Calculus[E]#Expr = { if (level > MAX_LEVEL) { val select = util.Random.nextInt(2) select match { case 0 => genRndVar(c) case 1 => genRndVal(c) } } else { val select = util.Random.nextInt(3) select match { case 0 => new c.Neg(genRndExpr(c, level+1)) case 1 => new c.Dsj(genRndExpr(c, level+1), genRndExpr(c, level+1)) case 2 => new c.Cnj(genRndExpr(c, level+1), genRndExpr(c, level+1)) } } } Now, if I try to compile the above code I get lots of error: type mismatch; found : plg.mvfml.Calculus[E]#Expr required: c.Expr case 0 = new c.Neg(genRndExpr(c, level+1)) And the same happens if I try to do something like: val boolCalc = new Calculus(Bool) val e1: boolCalc.Expr = genRndExpr(boolCalc) Please note that the generator itself is not of concern, but I will need to do similar things (i.e. create and manipulate calculus instance expressions) a lot on the rest of the system. Am I doing something wrong? Is it possible to do what I want to do? Help on this matter is highly needed and appreciated. Thanks a lot in advance. After receiving an answer from Apocalisp and trying it. Thanks a lot for the answer, but there are still some issues. The proposed solution was to change the signature of the function to: def genRndExpr[E, C <: Calculus[E]](c: C, level: Int): C#Expr I changed the signature for all the functions involved: getRndExpr, getRndVal and getRndVar. 
Everywhere I call these functions I now get the following error message: error: inferred type arguments [Nothing,C] do not conform to method genRndVar's type parameter bounds [E,C <: Calculus[E]] case 0 => genRndVar(c) Since the compiler seemed unable to figure out the right types, I changed all function calls to look like below: case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) After this, on the first 2 function calls (genRndVal and genRndVar) there were no compile errors, but on the following 3 calls (recursive calls to genRndExpr), where the return of the function is used to build a new Expr object, I got the following error: error: type mismatch; found : C#Expr required: c.Expr case 0 => new c.Neg(genRndExpr[E,C](c, level+1)) So, again, I'm stuck. Any help will be appreciated.
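    A minimal sketch of one alternative (not from the original thread): moving the generator inside Calculus sidesteps the type-parameter problem entirely, because every expression it builds is this.Expr of the same instance and mixing instances cannot type-check. The MAX_LEVEL constant and the simplified leaf case below are assumptions.

        class Calculus[E](val lat: Lattice[E]) {
          abstract class Expr
          case class Var(name: String) extends Expr
          case class Val(value: E) extends Expr
          case class Neg(e1: Expr) extends Expr
          case class Cnj(e1: Expr, e2: Expr) extends Expr
          case class Dsj(e1: Expr, e2: Expr) extends Expr

          val MAX_LEVEL = 10  // assumed depth bound

          // Random expressions are now tied to this calculus instance.
          def genRndExpr(level: Int): Expr =
            if (level > MAX_LEVEL) Val(lat.minimum)   // simplified leaf: a constant from the lattice
            else util.Random.nextInt(3) match {
              case 0 => Neg(genRndExpr(level + 1))
              case 1 => Dsj(genRndExpr(level + 1), genRndExpr(level + 1))
              case 2 => Cnj(genRndExpr(level + 1), genRndExpr(level + 1))
            }
        }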

    Read the article

  • How to scale rotated objects properly in Actionscript 3?

    - by Tom
    This is unfortunately a quite complex issue to explain, so please don't get discouraged by the wall of text - it's there for a reason. ;) I'm working on a transformation manager for flash, written with Actionscript 3. Users can place objects on the screen, for example a rectangle. This rectangle can then be selected and transformed: move, scale or rotate. Because flash by default rotates around the top left point of the object, and I want it to rotate around the center, I created a wrapper setup for each display object (eg. a rectangle). This is how the wrappers are setup: //the position wrapper makes sure that we do get the top left position when we access x and y var positionWrapper:Sprite = new Sprite(); positionWrapper.x = renderObject.x; positionWrapper.y = renderObject.y; //set the render objects location to center at the rotation wrappers top left renderObject.x = 0 - renderObject.width / 2; renderObject.y = 0 - renderObject.height / 2; //now create a rotation wrapper, at the center of the display object var rotationWrapper:Sprite = new Sprite(); rotationWrapper.x = renderObject.width / 2; rotationWrapper.y = renderObject.height / 2; //put the rotation wrapper inside the position wrapper and the render object inside the rotation wrapper positionWrapper.addChild(rotationWrapper); rotationWrapper.addChild(renderObject); Now, the x and y of the object can be accessed and set directly: mainWrapper.x or mainWrapper.y. The rotation can be set and accessed from the child of this main wrapper: mainWrapper.getChildAt(0).rotation. Finally, the width and height of the display object can be retreived and set by getting the child of the rotation wrapper and accessing the display object directly. An example on how I access them: //get wrappers and render object var positionWrapper:Sprite = currentSelection["render"]; var rotationWrapper:Sprite = positionWrapper.getChildAt(0) as Sprite; var renderObject:DisplayObject = rotationWrapper.getChildAt(0); This works perfectly for all initial transformations: moving, scaling and rotating. However, the problem arises when you first rotate an object (eg. 45 degrees) and then scale it. The scaled object is getting out of shape and doesn't scale as it should. This for example happens when you scale to the left. Scaling left is basically adding n width to the object and then reduce the x coord of the position wrapper by n too: renderObject.width -= diffX; positionWrapper.x += diffX; This works when the object is not rotated. However, when it is, the position wrapper won't be rotated as it is a parent of the rotation wrapper. This will make the position wrapper move left horizontally while the width of the object is increased diagonally. I hope this makes any sense, if not, please tell me and I'll try to elaborate more. Now, to the question: should I use a different kind of setup, system or structure? Should I maybe use matrixes, if so, how would you keep a static width/height after rotation? Or how do I fix my current wrapper system for scaling after rotation? Any help is appreciated.

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • Heartbeat won't successfully start up resources from a cold boot when a failed node is present

    - by Matthew
    I currently have two ubuntu servers running Heartbeat and DRBD. The servers are directory connected with a 1000Mbps crossover cable on eth1 and have access to an IP camera LAN on eth0. Now, let's say that one node is down and the remaining functional node is booting after having been shut down. The node that is still functioning won't start up heartbeat and provide access to the drbd resource from a cold boot. I have to manually restart heartbeat by sudo service heartbeat restart to get everything up and running. How can I get it to start fine from a cold start, when only one server is present? Here is the ha.cf: debug /var/log/ha-debug logfile /var/log/ha-log logfacility none keepalive 2 deadtime 10 warntime 7 initdead 60 ucast eth1 192.168.2.2 ucast eth0 10.1.10.201 node EMserver1 node EMserver2 respawn hacluster /usr/lib/heartbeat/ipfail ping 10.1.10.22 10.1.10.21 10.1.10.11 auto_failback off Some material from the syslog: harc[4604]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/status status mach_down[4632]: 2012/11/27_13:54:49 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired mach_down[4632]: 2012/11/27_13:54:49 info: mach_down takeover complete for node emserver2. Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: Initial resource acquisition complete (T_RESOURCES(us)) Nov 27 13:54:49 EMserver1 heartbeat: [4586]: info: mach_down takeover complete. IPaddr[4679]: 2012/11/27_13:54:49 INFO: Resource is stopped Nov 27 13:54:49 EMserver1 heartbeat: [4605]: info: Local Resource acquisition completed. harc[4713]: 2012/11/27_13:54:49 info: Running /etc/ha.d//rc.d/ip-request-resp ip-request-resp ip-request-resp[4713]: 2012/11/27_13:54:49 received ip-request-resp IPaddr::10.1.10.254 OK yes ResourceManager[4732]: 2012/11/27_13:54:50 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[4759]: 2012/11/27_13:54:50 INFO: Resource is stopped ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 start IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated nic for 10.1.10.254: eth0 IPaddr[4816]: 2012/11/27_13:54:50 INFO: Using calculated netmask for 10.1.10.254: 255.255.255.0 IPaddr[4816]: 2012/11/27_13:54:50 INFO: eval ifconfig eth0:0 10.1.10.254 netmask 255.255.255.0 broadcast 10.1.10.255 IPaddr[4804]: 2012/11/27_13:54:50 INFO: Success ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/drbddisk r0 start Filesystem[4965]: 2012/11/27_13:54:50 INFO: Resource is stopped ResourceManager[4732]: 2012/11/27_13:54:50 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 start Filesystem[5039]: 2012/11/27_13:54:50 INFO: Running start for /dev/drbd1 on /shr Filesystem[5033]: 2012/11/27_13:54:51 INFO: Success ResourceManager[4732]: 2012/11/27_13:54:51 info: Running /etc/init.d/nfs-kernel-server start Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: Local Resource acquisition completed. (none) Nov 27 13:55:00 EMserver1 heartbeat: [4586]: info: local resource transition completed. Nov 27 13:57:46 EMserver1 heartbeat: [4586]: info: Heartbeat shutdown in progress. (4586) Nov 27 13:57:46 EMserver1 heartbeat: [5286]: info: Giving up all HA resources. 
ResourceManager[5301]: 2012/11/27_13:57:46 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[5301]: 2012/11/27_13:57:46 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[5372]: 2012/11/27_13:57:46 INFO: Running stop for /dev/drbd1 on /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: Trying to unmount /shr Filesystem[5372]: 2012/11/27_13:57:47 INFO: unmounted /shr successfully Filesystem[5366]: 2012/11/27_13:57:47 INFO: Success ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[5301]: 2012/11/27_13:57:47 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[5509]: 2012/11/27_13:57:47 INFO: ifconfig eth0:0 down IPaddr[5497]: 2012/11/27_13:57:47 INFO: Success Nov 27 13:57:47 EMserver1 heartbeat: [5286]: info: All HA resources relinquished. Nov 27 13:57:48 EMserver1 heartbeat: [4586]: info: killing /usr/lib/heartbeat/ipfail process group 4603 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBFIFO process 4589 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4590 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4591 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4592 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4593 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4594 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4595 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4596 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4597 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBWRITE process 4598 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: killing HBREAD process 4599 with signal 15 Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4589 exited. 11 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4596 exited. 10 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4598 exited. 9 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4590 exited. 8 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4595 exited. 7 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4591 exited. 6 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4592 exited. 5 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4593 exited. 4 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4597 exited. 3 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4594 exited. 2 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: Core process 4599 exited. 1 remaining Nov 27 13:57:49 EMserver1 heartbeat: [4586]: info: emserver1 Heartbeat shutdown complete. 
Here is some more from the log ResourceManager[2576]: 2012/11/28_16:32:42 info: Acquiring resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server IPaddr[2602]: 2012/11/28_16:32:42 INFO: Running OK Filesystem[2653]: 2012/11/28_16:32:43 INFO: Running OK Nov 28 16:32:52 EMserver1 heartbeat: [1695]: WARN: node emserver2: is dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Dead node emserver2 gave up resources. Nov 28 16:32:52 EMserver1 ipfail: [1807]: info: Status update: Node emserver2 now has status dead Nov 28 16:32:52 EMserver1 heartbeat: [1695]: info: Link emserver2:eth1 dead. Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: NS: We are still alive! Nov 28 16:32:53 EMserver1 ipfail: [1807]: info: Link Status update: Link emserver2/eth1 now has status dead Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Asking other side for ping node count. Nov 28 16:32:55 EMserver1 ipfail: [1807]: info: Checking remote count of ping nodes. Nov 28 16:32:57 EMserver1 heartbeat: [1695]: info: Heartbeat shutdown in progress. (1695) Nov 28 16:32:57 EMserver1 heartbeat: [2734]: info: Giving up all HA resources. ResourceManager[2751]: 2012/11/28_16:32:57 info: Releasing resource group: emserver1 IPaddr::10.1.10.254 drbddisk::r0 Filesystem::/dev/drbd1::/shr::ext4 nfs-kernel-server ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/init.d/nfs-kernel-server stop ResourceManager[2751]: 2012/11/28_16:32:57 info: Running /etc/ha.d/resource.d/Filesystem /dev/drbd1 /shr ext4 stop Filesystem[2829]: 2012/11/28_16:32:57 INFO: Running stop for /dev/drbd1 on /shr Filesystem[2829]: 2012/11/28_16:32:57 INFO: Trying to unmount /shr Filesystem[2829]: 2012/11/28_16:32:58 INFO: unmounted /shr successfully Filesystem[2823]: 2012/11/28_16:32:58 INFO: Success ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/drbddisk r0 stop ResourceManager[2751]: 2012/11/28_16:32:58 info: Running /etc/ha.d/resource.d/IPaddr 10.1.10.254 stop IPaddr[2971]: 2012/11/28_16:32:58 INFO: ifconfig eth0:0 down IPaddr[2958]: 2012/11/28_16:32:58 INFO: Success Nov 28 16:32:58 EMserver1 heartbeat: [2734]: info: All HA resources relinquished. Nov 28 16:32:59 EMserver1 heartbeat: [1695]: info: killing /usr/lib/heartbeat/ipfail process group 1807 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBFIFO process 1777 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1778 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1779 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1780 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1781 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1782 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1783 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1784 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1785 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBWRITE process 1786 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: killing HBREAD process 1787 with signal 15 Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1778 exited. 11 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1779 exited. 
10 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1780 exited. 9 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1781 exited. 8 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1782 exited. 7 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1783 exited. 6 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1784 exited. 5 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1785 exited. 4 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1786 exited. 3 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1787 exited. 2 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: Core process 1777 exited. 1 remaining Nov 28 16:33:01 EMserver1 heartbeat: [1695]: info: emserver1 Heartbeat shutdown complete. If I restarted heartbeat at this point... the resources heartbeat controls would start up fine.... please help!

    Read the article

  • Why this strange behavior of SqlBulkCopy in an ASP.NET website running under IIS?

    - by Pandiya Chendur
    I'm using SqlClient.SqlBulkCopy to try and bulk copy a csv file into a database. I am getting the following error after calling the .WriteToServer() method: "The given value of type String from the data source cannot be converted to type bit of the specified target column."

    Here is my code:

        dt.Columns.Add("IsDeleted", typeof(byte));
        dt.Columns.Add(new DataColumn("CreatedDate", typeof(DateTime)));
        foreach (DataRow dr in dt.Rows)
        {
            if (dr["MobileNo2"] == "" && dr["DriverName2"] == "")
            {
                dr["MobileNo2"] = null;
                dr["DriverName2"] = "";
            }
            dr["IsDeleted"] = Convert.ToByte(0);
            dr["CreatedDate"] = Convert.ToDateTime(System.DateTime.Now.ToString());
        }
        string connectionString = System.Configuration.ConfigurationManager.ConnectionStrings["connectionString"].ConnectionString;
        SqlBulkCopy sbc = new SqlBulkCopy(connectionString);
        sbc.DestinationTableName = "DailySchedule";
        sbc.ColumnMappings.Add("WirelessId", "WirelessId");
        sbc.ColumnMappings.Add("RegNo", "RegNo");
        sbc.ColumnMappings.Add("DriverName1", "DriverName1");
        sbc.ColumnMappings.Add("MobileNo1", "MobileNo1");
        sbc.ColumnMappings.Add("DriverName2", "DriverName2");
        sbc.ColumnMappings.Add("MobileNo2", "MobileNo2");
        sbc.ColumnMappings.Add("IsDeleted", "IsDeleted");
        sbc.ColumnMappings.Add("CreatedDate", "CreatedDate");
        sbc.WriteToServer(dt);
        sbc.Close();

    There is no error when running under the Visual Studio development server, but it gives me an error when running under IIS. Here are my SQL Server table details:

        [Id] [int] IDENTITY(1,1) NOT NULL,
        [WirelessId] [int] NULL,
        [RegNo] [nvarchar](50) NULL,
        [DriverName1] [nvarchar](50) NULL,
        [MobileNo1] [numeric](18, 0) NULL,
        [DriverName2] [nvarchar](50) NULL,
        [MobileNo2] [numeric](18, 0) NULL,
        [IsDeleted] [tinyint] NULL,
        [CreatedDate] [datetime] NULL,
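    As a hedged side note (not part of the question): this particular exception is typically raised when a source DataColumn is still typed as String while the target column is not, so one thing worth checking is that every column in the DataTable is declared with an explicit .NET type before the rows are filled, and that the database IIS actually connects to has the same schema, since the error mentions bit while the table above declares IsDeleted as tinyint. A sketch of explicitly typed columns (names follow the question; the csv-loading code is omitted):

        var dt = new DataTable();
        dt.Columns.Add("WirelessId", typeof(int));
        dt.Columns.Add("RegNo", typeof(string));
        dt.Columns.Add("DriverName1", typeof(string));
        dt.Columns.Add("MobileNo1", typeof(decimal));
        dt.Columns.Add("DriverName2", typeof(string));
        dt.Columns.Add("MobileNo2", typeof(decimal));
        dt.Columns.Add("IsDeleted", typeof(byte));
        dt.Columns.Add("CreatedDate", typeof(DateTime));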

    Read the article

  • Classes, methods, and polymorphism in Python

    - by Morlock
    I made a module prototype for building complex timer schedules in Python. The class prototypes provide Timer objects, each with its own waiting time, Repeat objects that group Timer and other Repeat objects, and a Schedule class, just for holding a whole construction of Timers and Repeat instances. The construction can be as complex as needed and needs to be flexible. Each of these three classes has a .run() method, which permits going through the whole schedule. Whatever the class, the .run() method either runs a timer, runs a repeat group for a certain number of iterations, or runs a schedule. Is this polymorphism-oriented approach sound or silly? What are other appropriate approaches I should consider to build such a versatile utility that permits putting all the building blocks together in as complex a way as desired, with simplicity? Thanks! Here is the module code:

        #####################
        ## Importing modules
        from time import time, sleep

        #####################
        ## Class definitions
        class Timer:
            """ Timer object with duration. """
            def __init__(self, duration):
                self.duration = duration
            def run(self):
                print "Waiting for %i seconds" % self.duration
                wait(self.duration)
                chime()

        class Repeat:
            """ Repeat grouped objects for a certain number of repetitions. """
            def __init__(self, objects=[], rep=1):
                self.rep = rep
                self.objects = objects
            def run(self):
                print "Repeating group for %i times" % self.rep
                for i in xrange(self.rep):
                    for group in self.objects:
                        group.run()

        class Schedule:
            """ Groups of timers and repetitions. Maybe redundant with class Repeat. """
            def __init__(self, schedule=[]):
                self.schedule = schedule
            def run(self):
                for group in self.schedule:
                    group.run()

        ########################
        ## Function definitions
        def wait(duration):
            """ Wait a certain number of seconds. """
            time_end = time() + float(duration) # uncomment for minutes: * 60
            time_diff = time_end - time()
            while time_diff > 0:
                sleep(1)
                time_diff = time_end - time()

        def chime():
            print "Ding!"
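    A short usage sketch (not in the original post) of how the three classes are meant to compose, relying only on the .run() polymorphism described above; the durations are arbitrary:

        work = Timer(25 * 60)                          # 25-minute work block
        rest = Timer(5 * 60)                           # 5-minute break
        cycle = Repeat([work, rest], rep=3)            # three work/rest cycles
        schedule = Schedule([cycle, Timer(15 * 60)])   # ...followed by a longer 15-minute timer
        schedule.run()                                 # each element only needs to expose .run()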

    Read the article

  • How do I use signtool.exe correctly in Hudson running as a service?

    - by Tim
    I just purchased a code signing cert (MS Authenticode) from Thawte and have apparently installed it on my build machine. I am logged in as a user, and when I open a cmd prompt I can sign EXEs using the cert with signtool.exe. Unfortunately this same command line does not work in the Hudson process that is running on the machine. The error message I get is: SignTool Error: No certificates were found that met all the given criteria. I presume this is because the Hudson service is running under a different account than the account I ran signtool.exe from and the account I used to get the cert from Thawte. So, my question is: how do I fix this problem? I thought I was going to download a file from Thawte, but instead it just used IE somehow to install the cert in the user's certificate store magically. I probably want to export (or whatever the correct term is) the cert to a file that I can store/save or use on any other machine. How do I do that, and how do I call signtool correctly with either the file or the cert from another user in the system/services account?
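
    A hedged sketch of the usual route (assuming the certificate was imported into your user's store with an exportable private key): export it to a .pfx file and point signtool at that file, so signing no longer depends on whichever account's certificate store Hudson happens to search. The file name and password below are placeholders.

        rem 1) In certmgr.msc: Personal > Certificates > (your cert) > All Tasks > Export,
        rem    including the private key, to e.g. codesign.pfx.
        rem 2) In the Hudson build step, sign with the file instead of the store:
        signtool sign /f C:\certs\codesign.pfx /p YOUR_PFX_PASSWORD path\to\myapp.exe
        rem Optionally add /t <timestamp server URL> so the signature outlives the cert.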

    Read the article

  • Running Magento for multiple clients - single Installaton vs. multiple installations

    - by Chris Hopkins
    Hi there, I am looking to set up a Magento (Community Edition) installation for multiple clients and have researched the matter for a few days now. I can see that the Enterprise Edition has what I need in it, but unsurprisingly I am not willing to shell out the $12,000-odd yearly subscription. It seems there are a few options available to me, but I am worried about the performance I will get out of each of them. Option 1) Single install using the AITOC advanced permissions module. This is really what I am after: one installation, so that I can update my core files all at the same time and manage all my store users from one place. The problems here are that I don't know anything about the reliability of this extra product, and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation it might slow down so much that it keels over, as I have heard a lot about Magento's slowness. Module Link: http://www.aitoc.com/en/magentomods_advanced_permissions.html Option 2) Multiple installations of Magento on one server, one per shop. Here I have 10 Magento installations on one server, all running happily away without costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't been able to find many other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even harder on my server, since there will be more going on, but if the server can take it, each installation stays simpler and avoids the slowdown of a single installation having to run 10 shops on its own? Option 3) Use lots of servers and lots of Magento installations. I just do not want to do this. Option 4) Buy Magento Enterprise. I do not have the money to do this. So which route is less likely to blow up my server? And does anyone have experience with this holy grail of a module? Thanks for reading and thanks in advance for any help - Chris Hopkins

    Read the article

  • Silverlight unit testing. Error while running tests.

    - by 1gn1ter
    I'm using VS2010, Silverlight 4, NUnit 2.5.5, and TypeMock (TypemockIsolatorSetup6.0.3.619.msi). The test project implements MVVM; PeopleViewModel is the ViewModel I want to test. Please advise if you use other products for unit testing Silverlight MVVM, or please help me get this working with TypeMock. TIA. This is the code of the test: [Test] [SilverlightUnitTest] public void SomeTestAgainstSilverlight() { PeopleViewModel o = new PeopleViewModel(); var res = o.People; Assert.AreEqual(15, res.Count()); } While running the test in ReSharper I get the following error: TestA.SomeTestAgainstSilverlight : Failed****************************************** *Loading Silverlight Isolation Aspects...* ****************************************** TEST RESULTS: --------------------------------------------- System.MissingMethodException : Method not found: 'hv TypeMock.ArrangeActAssert.Isolate.a(System.Delegate)'. at a4.a(ref Delegate A_0) at a4.a(Boolean A_0) at il.b() at CThru.Silverlight.SilverlightUnitTestAttribute.Init() at CThru.Silverlight.SilverlightUnitTestAttribute.Execute() at TypeMock.MockManager.a(String A_0, String A_1, Object A_2, Object A_3, Boolean A_4, Object[] A_5) at TypeMock.InternalMockManager.getReturn(Object that, String typeName, String methodName, Object methodParameters, Boolean isInjected) at Tests.TestA.SomeTestAgainstSilverlight() in TestA.cs: line 21 While running the test in NUnit I get: Tests.TestA.SomeTestAgainstSilverlight: System.DllNotFoundException : Unable to load DLL 'agcore': The specified module could not be found. (Exception from HRESULT: 0x8007007E) at MS.Internal.XcpImports.Application_GetCurrentNative(IntPtr context, IntPtr& obj) at MS.Internal.XcpImports.Application_GetCurrent(IntPtr& pApp) at System.Windows.Application.get_Current() at ViewModelExample.ViewModel.ViewModelBase.get_IsDesignTime() in C:\Documents and Settings\USER\Desktop\ViewModelExample\ViewModelExample\ViewModel\ViewModelBase.cs:line 20 at ViewModelExample.ViewModel.PeopleViewModel..ctor(IServiceAgent serviceAgent) in C:\Documents and Settings\USER\Desktop\ViewModelExample\ViewModelExample\ViewModel\PeopleViewModel.cs:line 28 at ViewModelExample.ViewModel.PeopleViewModel..ctor() in C:\Documents and Settings\USER\Desktop\ViewModelExample\ViewModelExample\ViewModel\PeopleViewModel.cs:line 24 at Tests.TestA.SomeTestAgainstSilverlight() in C:\Documents and Settings\USER\Desktop\ViewModelExample\Tests\TestA.cs:line 22
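
    A hedged observation from the NUnit stack trace (the full ViewModelBase isn't shown, so this is an assumption): the failure happens because ViewModelBase.IsDesignTime touches Application.Current, which needs the native Silverlight runtime (agcore.dll) that a plain .NET test host never loads. One sketch of a workaround is to make that check defensive so the ViewModel can be constructed in an ordinary unit test; the property and class names here just mirror the stack trace.

        // Sketch of a test-friendly guard in ViewModelBase (hypothetical implementation):
        protected bool IsDesignTime
        {
            get
            {
                try
                {
                    return DesignerProperties.IsInDesignTool;  // System.ComponentModel; avoids Application.Current
                }
                catch
                {
                    return false;  // no Silverlight runtime present, e.g. a plain NUnit host
                }
            }
        }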

    Read the article

  • Problem running python/matplotlib in background after ending ssh session.

    - by Jamie
    Hi there, I have to VPN and then ssh from home to my work server and want to run a python script in the background, then log out of the ssh session. My script makes several histogram plots using matplotlib, and as long as I keep the connection open everything is fine, but if I log out I keep getting an error message in the log file I created for the script. File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 2058, in loglog ax = gca() File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 582, in gca ax = gcf().gca(**kwargs) File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 276, in gcf return figure() File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/pyplot.py", line 254, in figure **kwargs) File "/Home/eud/jmcohen/.local/lib/python2.5/site-packages/matplotlib/backends/backend_tkagg.py", line 90, in new_figure_manager window = Tk.Tk() File "/Home/eud/jmcohen/.local/lib/python2.5/lib-tk/Tkinter.py", line 1647, in __init__ self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use) _tkinter.TclError: couldn't connect to display "localhost:10.0" I'm assuming that it doesn't know where to create the figures I want since I close my X11 ssh session. If I'm logged in while the script is running I don't see any figures popping up (although that's because I don't have the show() command in my script), and I thought that python uses tkinter to display figures. The way that I'm creating the figures is, loglog() hist(list,x) ylabel('y') xlabel('x') savefig('%s_hist.ps' %source.name) close() The script requires some initial input, so the way I'm running it in the background is python scriptToRun.py << start>& logfile.log& Is there a way around this, or do I just have to stay ssh'd into my machine? Thanks.
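
    A sketch of the usual fix (assuming the script only ever saves figures and never needs to display them): select a non-interactive backend before pyplot is imported, so matplotlib never tries to open a Tk window on the now-closed X11 display, and launch the job with nohup so it survives logout. The names below (data, bins, input.txt) are placeholders.

        # At the very top of scriptToRun.py, before anything imports pyplot:
        import matplotlib
        matplotlib.use('Agg')              # file-only backend, no DISPLAY needed
        import matplotlib.pyplot as plt

        data, bins = [1, 2, 2, 3, 5, 8, 13], 5   # placeholder data
        plt.loglog()
        plt.hist(data, bins)               # stand-ins for the question's list, x
        plt.ylabel('y')
        plt.xlabel('x')
        plt.savefig('example_hist.ps')     # savefig works fine without X11
        plt.close()

        # Launched as, for example:
        #   nohup python scriptToRun.py < input.txt >& logfile.log &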

    Read the article

  • How to keep the CPU usage down while running an SDL program?

    - by budwiser
    I've made a very basic window with SDL and want to keep it running until I press the X on the window. #include "SDL.h" const int SCREEN_WIDTH = 640; const int SCREEN_HEIGHT = 480; int main(int argc, char **argv) { SDL_Init( SDL_INIT_VIDEO ); SDL_Surface* screen = SDL_SetVideoMode( SCREEN_WIDTH, SCREEN_HEIGHT, 0, SDL_HWSURFACE | SDL_DOUBLEBUF ); SDL_WM_SetCaption( "SDL Test", 0 ); SDL_Event event; bool quit = false; while (quit == false) { if (SDL_PollEvent(&event)) { if (event.type == SDL_QUIT) { quit = true; } } SDL_Delay(80); } SDL_Quit(); return 0; } I tried adding SDL_Delay() at the end of the while loop and it worked quite well. However, 80 ms seemed to be the highest value I could use to keep the program running smoothly, and even then the CPU usage is about 15-20%. Is this the best way to do this, and do I just have to live with the fact that it eats this much CPU at this point?
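
    A sketch of one common alternative (for a window that only reacts to input, which the snippet above appears to be): block in SDL_WaitEvent instead of polling plus a fixed delay, so the process sleeps until an event arrives and CPU usage drops to near zero.

        // Event-driven main loop: SDL_WaitEvent blocks, so no busy polling and no SDL_Delay needed.
        SDL_Event event;
        bool quit = false;
        while (!quit) {
            if (SDL_WaitEvent(&event)) {      // sleeps until the next event arrives
                if (event.type == SDL_QUIT) {
                    quit = true;
                }
                // handle other events / redraw here
            }
        }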

    Read the article

  • java - question about thread abortion and deadlock - volatile keyword

    - by Tiyoal
    Hello all, I am having some trouble understanding how to stop a running thread. I'll try to explain it by example. Assume the following class: public class MyThread extends Thread { protected volatile boolean running = true; public void run() { while (running) { synchronized (someObject) { while (someObject.someCondition() == false && running) { try { someObject.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } // do something useful with someObject } } } public void halt() { running = false; interrupt(); } } Assume the thread is running and the following statement evaluates to true: while (someObject.someCondition() == false && running) Then another thread calls MyThread.halt(). Even though this function sets 'running' to false (which is a volatile boolean) and interrupts the thread, the following statement is still executed: someObject.wait(); We have a deadlock. The thread will never be halted. Then I came up with this, but I am not sure if it is correct: public class MyThread extends Thread { protected volatile boolean running = true; public void run() { while (running) { synchronized (someObject) { while (someObject.someCondition() == false && running) { try { someObject.wait(); } catch (InterruptedException e) { e.printStackTrace(); } } // do something useful with someObject } } } public void halt() { running = false; synchronized(someObject) { interrupt(); } } } Is this correct? Is this the most common way to do this? This seems like an obvious question, but I fail to come up with a solution. Thanks a lot for your help.
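
    A hedged sketch of the more conventional pattern (assuming halt() may be called from any thread): instead of relying on interrupt() alone, take the same lock the worker waits on and notify it, so the wait/recheck loop wakes up, sees running == false, and exits. Restoring the interrupt flag in the catch block is also the usual idiom.

        // Sketch: wake the waiter under the same monitor it waits on.
        public void halt() {
            running = false;
            synchronized (someObject) {
                someObject.notifyAll();   // wakes the wait(); the loop recheck sees running == false
            }
        }

        // Inside run(), the wait loop stays the same, but the catch usually re-asserts the flag:
        try {
            someObject.wait();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // preserve interrupt status and let the loop recheck
        }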

    Read the article

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL 5.0.75 server on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high, while the memory usage stays low, and the application server is no longer responsive since it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is, where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling suggests either tuning or upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks! EDIT: I just discovered this in my slow queries log: # Time: 101116 11:17:00 # User@Host: user[pass] @ [host] # Query_time: 4063 Lock_time: 1035 Rows_sent: 0 Rows_examined: 19960174 SELECT * FROM contacts WHERE contacts.contact_id IN (SELECT external_id FROM contact_relations WHERE external_table = 'contacts' AND contact_id IN (SELECT contact_id FROM contacts WHERE (company_name like '%%butan%%%' OR country like '%%butan%%%' OR city like '%%butan%%%' OR email1 like '%%butan%%%') AND (company_name is not null and company_name != ''))); Which actually brings up a different but related question: If I have a contact table containing: John Smith,The Fun Factory,555-1212,[email protected] What's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example, "actor" should bring up "Factory".
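
    A hedged sketch of two things that often help with a query like the one in the slow log (assuming MyISAM, which supports FULLTEXT indexes): flatten the nested IN (SELECT ...) subqueries — notoriously slow on MySQL 5.0 — into joins, and replace the leading-wildcard LIKE '%...%' scans with a FULLTEXT search, which finds the whole word "factory" in "The Fun Factory" regardless of case (though not substrings like "actor"). Index and alias names below are placeholders.

        -- Assumed index; the column list mirrors the LIKE clauses in the slow query.
        ALTER TABLE contacts ADD FULLTEXT ft_contacts (company_name, country, city, email1);

        -- Joins instead of nested IN subqueries, FULLTEXT instead of LIKE '%...%':
        SELECT DISTINCT c.*
        FROM contacts c
        JOIN contact_relations r ON r.external_table = 'contacts' AND r.external_id = c.contact_id
        JOIN contacts c2         ON c2.contact_id = r.contact_id
        WHERE MATCH(c2.company_name, c2.country, c2.city, c2.email1) AGAINST ('factory')
          AND c2.company_name IS NOT NULL AND c2.company_name != '';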

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >