Search Results

Search found 29101 results on 1165 pages for 'thread local storage'.

Page 113/1165 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server. $ cap deploy:setup "no seed data" triggering start callbacks for `deploy:setup' * 13:42:18 == Currently executing `multistage:ensure' *** Defaulting to `development' * 13:42:18 == Currently executing `development' * 13:42:18 == Currently executing `deploy:setup' triggering before callbacks for `deploy:setup' * 13:42:18 == Currently executing `db:configure_mongoid' * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config" servers: ["dev1.noob.com", "176.9.24.217"] Password: Cap script: # gem install capistrano capistrano-ext capistrano_colors begin; require 'capistrano_colors'; rescue LoadError; end require "bundler/capistrano" # RVM bootstrap # $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require 'rvm/capistrano' set :rvm_ruby_string, 'ruby-1.9.2-p290' set :rvm_type, :user # or :user # Application setup default_run_options[:pty] = true # allow pseudo-terminals ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository) ssh_options[:port] = 22 set :ip, "dev1.noob.com" set :application, "flyingbird" set :repository, "repo-path" set :scm, :git set :branch, fetch(:branch, "master") set :deploy_via, :remote_cache set :rails_env, "production" set :use_sudo, false set :scm_username, "user" set :user, "user1" set(:database_username) { application } set(:production_database) { application + "_production" } set(:staging_database) { application + "_staging" } set(:development_database) { application + "_development" } role :web, ip # Your HTTP server, Apache/etc role :app, ip # This may be the same as your `Web` server role :db, ip, :primary => true # This is where Rails migrations will run # Use multi-staging require "capistrano/ext/multistage" set :stages, ["development", "staging", "production"] set :default_stage, rails_env before "deploy:setup", "db:configure_mongoid" # Uncomment if you use any of these databases after "deploy:update_code", "db:symlink_mongoid" after "deploy:update_code", "uploads:configure_shared" after "uploads:configure_shared", "uploads:symlink" after 'deploy:update_code', 'bundler:symlink_bundled_gems' after 'deploy:update_code', 'bundler:install' after "deploy:update_code", "rvm:trust_rvmrc" # Use this to update crontab if you use 'whenever' gem # after "deploy:symlink", "deploy:update_crontab" if ARGV.include?("seed_data") after "deploy", "db:seed" else p "no seed data" end #Custom tasks to handle resque and redis restart before "deploy", "deploy:stop_workers" after "deploy", "deploy:restart_redis" after "deploy", "deploy:start_workers" after "deploy", "deploy:cleanup" 'Create symlink for public uploads' namespace :uploads do task :symlink do run <<-CMD rm -rf #{release_path}/public/uploads && mkdir -p #{release_path}/public && ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads CMD end task :configure_shared do run "mkdir -p #{shared_path}/public" run "mkdir -p #{shared_path}/public/uploads" end end namespace :rvm do desc 'Trust rvmrc file' task :trust_rvmrc do run "rvm rvmrc trust #{current_release}" end end namespace :db do desc "Create mongoid.yml in shared path" task :configure_mongoid do db_config = <<-EOF 
defaults: &defaults host: localhost production: <<: *defaults database: #{production_database} staging: <<: *defaults database: #{staging_database} EOF run "mkdir -p #{shared_path}/config" put db_config, "#{shared_path}/config/mongoid.yml" end desc "Make symlink for mongoid.yml" task :symlink_mongoid do run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml" end desc "Fill the database with seed data" task :seed do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed" end end namespace :bundler do desc "Symlink bundled gems on each release" task :symlink_bundled_gems, :roles => :app do run "mkdir -p #{shared_path}/bundled_gems" run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle" end desc "Install bundled gems " task :install, :roles => :app do run "cd #{release_path} && bundle install --deployment" end end namespace :deploy do task :start, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Restart the app" task :restart, :roles => :app do run "touch #{current_path}/tmp/restart.txt" end desc "Start the workers" task :stop_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers" end desc "Restart Redis server" task :restart_redis do "/etc/init.d/redis-server restart" end desc "Start the workers" task :start_workers do run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers" end end

    Read the article

  • Local DNS server (bind) and the router DHCP

    - by Luca
    I just set up an internal HTTP server (I set up Redmine) in a small network (30 or so PCs). I set up the HTTP server on a VirtualBox Ubuntu VM, which also runs the DNS server (bind). In the DNS lookup I added the Redmine server name (redmine.engserver <- 192.168.1.14) and, as forwarders, the outside ISP DNS IP addresses. I am using a small Wi-Fi router (ASUS RT-N66U) as DHCP server (and as gateway). In the DHCP config page I set the DNS to the Ubuntu server IP (it is fixed: 192.168.1.14). Now when I connect a new PC to the network, the DHCP router issues it a new IP and issues as DNS servers: primary: 192.168.1.14 (Ubuntu machine) and secondary 192.168.1.1 (the router itself). ipconfig /all Default Gateway . . . . . . . . . : 192.168.1.1 DHCP Server . . . . . . . . . . . : 192.168.1.1 DHCPv6 IAID . . . . . . . . . . . : 248539109 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-15-AA-3F-D0-67-E5-49-A7-EF DNS Servers . . . . . . . . . . . : 192.168.1.14 192.168.1.1 NetBIOS over Tcpip. . . . . . . . : Enabled Before changing the DHCP setting on the router, I would always get only one DNS server: 192.168.1.1 (which probably uses DNS forwarding to external public DNS services). The problem is this: if in my browser I type www.google.com, it works all the time. If in the browser I type http://redmine.engserver/ it works most of the time, but sometimes it ends up on a Yahoo search page or something else. In the DNS cache it shows as (Server not found). ipconfig /displaydns I looked with Wireshark and it seems like sometimes the client PC interrogates the secondary DNS (192.168.1.1) instead of the first, 192.168.1.14. Obviously that one forwards to public DNS and it does not have the redmine.engserver entry. What is wrong in this configuration? Is it even legitimate to have 2 DNS servers (one internal and one forwarded by the router) which are inconsistent? Is there another way to have a local name service in a small office network? Why is the router's DHCP issuing itself as a DNS server?

    Read the article

  • ASA 5505 stops local internet when connected to VPN

    - by g18c
    Hi I have a Cisco ASA router running firmware 8.2(5) which hosts an internal LAN on 192.168.30.0/24. I have used the VPN Wizard to setup L2TP access and I can connect in fine from a Windows box and can ping hosts behind the VPN router. However, when connected to the VPN I can no longer ping out to my internet or browse web pages. I would like to be able to access the VPN, and also browse the internet at the same time - I understand this is called split tunneling (have ticked the setting in the wizard but to no effect) and if so how do I do this? Alternatively, if split tunneling is a pain to setup, then making the connected VPN client have internet access from the ASA WAN IP would be OK. Thanks, Chris names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Vlan1 nameif inside security-level 100 ip address 192.168.30.1 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address 208.74.158.58 255.255.255.252 ! ftp mode passive access-list inside_nat0_outbound extended permit ip any 10.10.10.0 255.255.255.128 access-list inside_nat0_outbound extended permit ip 192.168.30.0 255.255.255.0 192.168.30.192 255.255.255.192 access-list DefaultRAGroup_splitTunnelAcl standard permit 192.168.30.0 255.255.255.0 access-list DefaultRAGroup_splitTunnelAcl_1 standard permit 192.168.30.0 255.255.255.0 pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool LANVPNPOOL 192.168.30.220-192.168.30.249 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 192.168.30.0 255.255.255.0 route outside 0.0.0.0 0.0.0.0 208.74.158.57 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.30.0 255.255.255.0 inside snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA esp-3des esp-sha-hmac crypto ipsec transform-set TRANS_ESP_3DES_SHA mode transport crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 TRANS_ESP_3DES_SHA crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto 
isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 telnet timeout 5 ssh timeout 5 console timeout 0 dhcpd auto_config outside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept webvpn group-policy DefaultRAGroup internal group-policy DefaultRAGroup attributes dns-server value 192.168.30.3 vpn-tunnel-protocol l2tp-ipsec split-tunnel-policy tunnelspecified split-tunnel-network-list value DefaultRAGroup_splitTunnelAcl_1 username user password Cj7W5X7wERleAewO8ENYtg== nt-encrypted privilege 0 tunnel-group DefaultRAGroup general-attributes address-pool LANVPNPOOL default-group-policy DefaultRAGroup tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key ***** tunnel-group DefaultRAGroup ppp-attributes no authentication chap authentication ms-chap-v2 ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect rsh inspect rtsp inspect esmtp inspect sqlnet inspect skinny inspect sunrpc inspect xdmcp inspect sip inspect netbios inspect tftp inspect ip-options ! service-policy global_policy global prompt hostname context : end

    Read the article

  • Mac OS X roaming profile from Samba with OpenLDAP backend on Ubuntu 11.10

    - by Sam Hammamy
    I have been battling for a week now to get my Mac (Mountain Lion) to authenticate on my home network's OpenLDAP and Samba. From several sources, like the Ubuntu community docs, and other blogs, and after a hell of a lot of trial and error and piecing things together, I have created a samba.ldif that will pass the smbldap-populate when combined with apple.ldif and I have a fully functional OpenLDAP server and a Samba PDC that uses LDAP to authenticate the OS X Machine. The problem is that when I login, the home directory is not created or pulled from the server. I get the following in system.log Sep 21 06:09:15 Sams-MacBook-Pro.local SecurityAgent[265]: User info context values set for sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got user: sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got ruser: (null) Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Got service: authorization Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): no authauth availale for user. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_principal_for_user(): failed: 7 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Failed to determine Kerberos principal name. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Done cleanup3 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): Kerberos 5 refuses you Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_authenticate(): pam_sm_authenticate: ntlm Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_acct_mgmt(): OpenDirectory - Membership cache TTL set to 1800. Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in od_record_check_pwpolicy(): retval: 0 Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Establishing credentials Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Got user: sam Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): Context initialised Sep 21 06:09:15 Sams-MacBook-Pro.local authorizationhost[270]: in pam_sm_setcred(): pam_sm_setcred: ntlm user sam doesn't have auth authority All that's great and good and I authenticate. Then I get CFPreferences: user home directory for user kCFPreferencesCurrentUser at /Network/Servers/172.17.148.186/home/sam is unavailable. User domains will be volatile. Failed looking up user domain root; url='file://localhost/Network/Servers/172.17.148.186/home/sam/' path=/Network/Servers/172.17.148.186/home/sam/ err=-43 uid=9000 euid=9000 If you're wondering where /Network/Servers/IP/home/sam comes from, it's from a couple of blogs that said the OpenLDAP attribute apple-user-homeDirectory should have that value and the NFSHomeDirectory on the mac should point to apple-user-homeDirectory I also set the attr apple-user-homeurl to <home_dir><url>smb://172.17.148.186/sam/</url><path></path></home_dir> which I found on this forum. Any help is appreciated, because I'm banging my head against the wall at this point. 
By the way, I intend to create a blog on my vps just for this, and create an install script in python that people can download so no one has to go through what I've had to go through this week :) After some sleep I am going to try to login from a windows machine and report back here. Thanks Sam

    Read the article

  • MySQL Cluster 7.2: Over 8x Higher Performance than Cluster 7.1

    - by Mat Keep
    Summary: The scalability enhancements delivered by extensions to multi-threaded data nodes enable MySQL Cluster 7.2 to deliver over 8x higher performance than the previous MySQL Cluster 7.1 release on a recent benchmark. What's New in MySQL Cluster 7.2: MySQL Cluster 7.2 was released as GA (Generally Available) in February 2012, delivering many enhancements to performance on complex queries, a new NoSQL Key / Value API, cross-data center replication and ease-of-use. These enhancements are summarized in the Figure below, and detailed in the MySQL Cluster New Features whitepaper. Figure 1: Next Generation Web Services, Cross Data Center Replication and Ease-of-Use. One of the key enhancements delivered in MySQL Cluster 7.2 is the set of extensions made to the multi-threading processes of the data nodes. Multi-Threaded Data Node Extensions: The MySQL Cluster 7.2 data node is now functionally divided into seven thread types: 1) Local Data Manager threads (ldm). Note – these are sometimes also called LQH threads. 2) Transaction Coordinator threads (tc) 3) Asynchronous Replication threads (rep) 4) Schema Management threads (main) 5) Network receiver threads (recv) 6) Network send threads (send) 7) IO threads. Each of these thread types is discussed in more detail below. MySQL Cluster 7.2 increases the maximum number of LDM threads from 4 to 16. The LDM contains the actual data, which means that when using 16 threads the data is more heavily partitioned (this is automatic in MySQL Cluster). Each LDM thread maintains its own set of data partitions, index partitions and REDO log. The number of LDM partitions per data node is not dynamically configurable, but it is possible to map more than one partition onto each LDM thread, providing flexibility in modifying the number of LDM threads. The TC domain stores the state of in-flight transactions. This means that every new transaction can easily be assigned to a new TC thread. Testing has shown that in most cases 1 TC thread per 2 LDM threads is sufficient, and in many cases even 1 TC thread per 4 LDM threads is also acceptable. Testing also demonstrated that in some instances where the workload needed to sustain very high update loads it is necessary to configure 3 to 4 TC threads per 4 LDM threads. In the previous MySQL Cluster 7.1 release, only one TC thread was available. This limit has been increased to 16 TC threads in MySQL Cluster 7.2. The TC domain also manages the Adaptive Query Localization functionality introduced in MySQL Cluster 7.2 that significantly enhanced complex query performance by pushing JOIN operations down to the data nodes. Asynchronous Replication was separated into its own thread with the release of MySQL Cluster 7.1, and has not been modified in the latest 7.2 release. To scale the number of TC threads, it was necessary to separate the Schema Management domain from the TC domain.
The schema management thread has little load, so it is implemented with a single thread. The Network receiver domain was bound to 1 thread in MySQL Cluster 7.1. With the increase of threads in MySQL Cluster 7.2 it is also necessary to increase the number of recv threads to 8. This enables each receive thread to service one or more sockets used to communicate with other nodes in the Cluster. The Network send thread is a new thread type introduced in MySQL Cluster 7.2. Previously other threads handled the sending operations themselves, which can provide for lower latency. To achieve the highest throughput, however, it has been necessary to create dedicated send threads, of which 8 can be configured. It is still possible to configure MySQL Cluster 7.2 in a legacy mode that does not use any of the send threads – useful for those workloads that are most sensitive to latency. The IO Thread is the final thread type and there have been no changes to this domain in MySQL Cluster 7.2. Multiple IO threads were already available, which could be configured to either one thread per open file, or to a fixed number of IO threads that handle the IO traffic. Except when using compression on disk, the IO threads typically have a very light load. Benchmarking the Scalability Enhancements: The scalability enhancements discussed above have made it possible to scale CPU usage of each data node to more than 5x of that possible in MySQL Cluster 7.1. In addition, a number of bottlenecks have been removed, making it possible to scale data node performance by even more than 5x. Figure 2: MySQL Cluster 7.2 Delivers 8.4x Higher Performance than 7.1. The flexAsynch benchmark was used to compare MySQL Cluster 7.2 performance to 7.1 across an 8-node Intel Xeon x5670-based cluster of dual socket commodity servers (6 cores each). As the results demonstrate, MySQL Cluster 7.2 delivers over 8x higher performance per data node than MySQL Cluster 7.1. More details of this and other benchmarks will be published in a new whitepaper – coming soon, so stay tuned! In a following blog post, I'll provide recommendations on optimum thread configurations for different types of server processor. You can also learn more from the Best Practices Guide to Optimizing Performance of MySQL Cluster. Conclusion: MySQL Cluster has achieved a range of impressive benchmark results and, set in context with the previous 7.1 release, is able to deliver over 8x higher performance per node. As a result, the multi-threaded data node extensions not only serve to increase performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs.

    Read the article

  • Google Search API for Android systems

    - by jrharshath
    Hi, I'm trying to build an Android app that would do a local search on Google. I know there is a Google Search API for Java, and I am able to use it for a desktop application. However, when I use the same jar file (gsearch.jar) in my Android project, some problems arise. When I call the .localSearch() method of my gsearch.Client object, a runtime error occurs. The error message is: "java.lang.VerifyError: gsearch.Client". This message appears in the Dalvik Debug Monitor log. So what is the problem here? Can I not use the search API on Android? More importantly, how do I do a local search from an Android app? Does the Android SDK have built-in search APIs? I could only find the Maps API, and map search is not what I'm looking for. Thanks for the help, jrh
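
    A java.lang.VerifyError usually means the jar was compiled against classes or bytecode that the Dalvik VM cannot resolve, so bundling gsearch.jar on Android is unlikely to work as-is. One workaround is to skip the client library and call a search web service directly over HTTP from the app. Below is a minimal sketch of that approach; the endpoint URL is a placeholder (not a real Google API), and response parsing is left out:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.net.URLEncoder;

        public class LocalSearchClient {
            // Hypothetical endpoint: substitute the real search service URL and API key.
            private static final String ENDPOINT = "https://example.com/search/local?q=";

            public static String search(String query) throws Exception {
                URL url = new URL(ENDPOINT + URLEncoder.encode(query, "UTF-8"));
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                StringBuilder body = new StringBuilder();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                } finally {
                    in.close();
                    conn.disconnect();
                }
                return body.toString(); // raw JSON/text; parse as needed
            }
        }

    On Android the request should run off the UI thread, for example in an AsyncTask or a plain background thread.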

    Read the article

  • express+jade: provided local variable is undefined in view (node.js + express + jade)

    - by Jake
    Hello. I'm implementing a webapp using node.js and express, using the jade template engine. Templates render fine, and can access helpers and dynamic helpers, but not local variables other than the "body" local variable, which is provided by express and is available and defined in my layout.jade. This is some of the code: app.set ('view engine', 'jade'); app.get ("/test", function (req, res) { res.render ('test', { locals: { name: "jake" } }); }); and this is test.jade: p hello =name when I remove the second line (referencing name), the template renders correctly, showing the word "hello" in the web page. When I include the =name, it throws a ReferenceError: 500 ReferenceError: Jade:2 NaN. 'p hello' NaN. '=name' name is not defined NaN. 'p hello' NaN. '=name' I believe I'm following the jade and express examples exactly with respect to local variables. Am I doing something wrong, or could this be a bug in express or jade?

    Read the article

  • Excluding child processes from ps

    - by stefpet
    Background: To reload app configuration I need to kill -HUP the parent processes' PIDs. To find PIDs I currently use ps auxf | grep gunicorn with the following example output: $ ps auxf | grep gunicorn stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4225 0.0 0.4 76920 16332 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4226 0.0 0.4 76932 16340 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4227 0.0 0.4 76940 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4228 0.0 0.4 76948 16344 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4229 0.0 0.4 76960 16356 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4230 0.0 0.4 76972 16368 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4231 0.0 0.4 78856 18644 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 4232 0.0 0.4 76992 16376 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5685 0.0 0.0 22076 908 pts/1 S+ 11:50 0:00 | \_ grep --color=auto gunicorn stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5021 0.0 0.4 77656 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5022 0.0 0.4 77664 17156 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5023 0.0 0.4 77672 17164 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5024 0.0 0.4 77684 17196 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5025 0.0 0.4 77692 17200 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5026 0.0 0.4 77700 17208 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5027 0.0 0.4 77712 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py stpe 5028 0.0 0.4 77720 17220 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py Based on the above I see that it is 4222 and 5012 I need to HUP. Question: How can I exclude the child processes and only get the parent process (please note however that the processes I want do also have a parent (e.g. bash) that I'm uninterested with)? Using a regexp with grep on how much indentation there is in the ascii tree feels dirty. Is there a better way? Example: The desired output would be something like this. stpe 4222 0.0 0.2 64524 11668 pts/2 S+ 11:01 0:00 | \_ /usr/bin/python /usr/local/bin/gunicorn app_api:app -c app_api.ini.py stpe 5012 0.0 0.2 64512 11656 pts/3 S+ 11:22 0:00 \_ /usr/bin/python /usr/local/bin/gunicorn app_game_api:app -c app_game_api.ini.py This would be easily parseable to be able to automatically find the PIDs in a script that does the HUPing which is the goal.

    Read the article

  • How can I display locally stored images on an internet website?

    - by ropstah
    Hi, I would like to display images on my website that are stored on a visitor's local filesystem. Assuming I have the location of the image on the visitor's drive (e.g. c:\Documents And Settings\Ropstah\image.png), is it then possible for me to display this image in my internet website (e.g. www.website.com)? The images don't seem to load when I use the following syntax (Internet Explorer 7, Firefox 3 etc.): <img src="file://c:\Documents and Settings\Ropstah\image.png" /> The images DO display if the .html file (which I use on website.com/index.html) is located on my local PC...

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself? I only have one file that seems to be out-of-sync right now, but I really need to know how this happened to prevent it from happening in the future. Scenario: 1) created Mercurial repository on server using an existing project directory. The directory contained the file 'mypage.aspx'. 2) On my workstation, I cloned the central repository 3) I made an edit to mypage.aspx 4) hg commit, then hg push from my workstation to the central server 5) now if I look at mypage.aspx on the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows the current version on the server's disk is the original version, not the edited version! I have not experimented with branching at all yet, so I'm sure I'm not getting a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository to a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap: Central repository = original file, but shows change in revision history (bad) Local repository 'A' = updated file, shows change in revision history (good) Local repository 'B' = original file, but shows change in revision history (bad) Help please! Thanks, David

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a stl::pair<string,string> to act as a key value pair structure, and have a stl::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have a consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use List<List<KeyValuePair<String,String>>, and the another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key value pairs, with a foreign key that maps the key value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data), then a background thread will dump the information to the database. My specific questions regarding this are: Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice? Is there a more .NETish way to implement the key value pairs, other than List<List<KeyValuePair<String,String>>>? Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing? Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low level information. Perhaps there should be a DB with a fixed schema only for the low level stuff, and then another DB that's more flexible for reporting information back to users.

    Read the article

  • Unattended Install of SQL Server 2005 Express with LOCAL Server InstanceName

    - by Jeff
    I'm creating an install package using InnoSetup and installing SQL Server 2005 Express. Here's the code that appears in my RUN section: Filename: "{app}\SQL Server 2005 Express\SQLEXPR.exe" ; Parameters: "-q /norebootchk /qn reboot=ReallySuppress addlocal=all INSTANCENAME=(LOCAL) SCCCHECKLEVEL=IncompatibleComponents:1;MDAC25Version:0 ERRORREPORTING=2 SQLAUTOSTART=1 SAPWD=passwordhere SECURITYMODE=SQL"; WorkingDir: {app}\SQL Server 2005 Express; StatusMsg: Installing Microsoft SQL Server 2005 Express... Please Wait...;Check:SQLVerifyInstall What I'm trying to accomplish is to have the SQL Server package install with the instance name referencing only the machine name and nothing more. What I'm getting instead is a named instance, such as MachineName\SQLEXPRESS, rather than a local (default) instance, which is not what I want. I need a local instance instead of a named instance due to the way my code is written to be able to install and talk with the databases in question. I would change it, trust me, were it not for the fact that this install package is a replacement for a previous package that used the MSDE installer. I have to be able to support both through code. Any suggestions are welcome, but a clear and concise method to get the installer to quietly install using only the machine name is my main goal. Thanks for the help and support!

    Read the article

  • SQL Server using too much memory

    - by Israel Pereira Valverde
    I have installed SQL Server 2008 R2 Express on my desktop machine (with Windows 7). I have only one local server running (./SQLEXPRESS), but the SQL Server process is taking all the RAM it can get. On a machine with 3GB of RAM things start to get slow, so I limited the maximum amount of RAM in the server, and now SQL Server constantly gives error messages that there is not enough memory. It's using 1GB of RAM with only one LOCAL server with 2 completely empty databases; how is 1GB of RAM not enough? When the process starts it uses a really acceptable amount of memory (around 80MB), but it keeps increasing until it reaches the maximum defined and starts to complain about not having enough memory available. At that point I have to restart the server to use it again. I have read about a hotfix to solve one of the errors I got from SQL Server ("There is insufficient system memory in resource pool 'internal' to run this query"), but it's already installed on my SQL Server. Why is it using so much memory?

    Read the article

  • Credit Card storage solution

    - by jtnire
    Hi everyone, I'm developing a solution that is designed to store membership details, as well as credit card details. I'm trying to comply with PCI DSS as much as I can. Here is my design so far: PAN = Primary Account Number == the long number on the credit card. Server A is a remote server. It stores all membership details (names, address etc.) and provides an individual Key A for each PAN stored. Server B is a local server, and actually holds the encrypted PANs, as well as Key B, and does the decryption. To get a PAN, the client has to authenticate with BOTH servers, ask Server A for the respective Key A, then give Key A to Server B, which will return the PAN to the client (provided authentication was successful). Server A will only ever encrypt Key A with Server B's public key, as it will have it beforehand. Server B will probably have to send a salt first, though I don't think that has to be encrypted. I haven't really thought about any implementation (i.e. coding) specifics yet regarding the above; however, the solution is using Java's Cajo framework (wrapper for RMI), so that is how the servers will communicate with each other (currently, membership details are transferred in this way). The reason why I want Server B to do the decryption, and not the client, is that I am afraid of decryption keys going into the client's RAM, even though it's probably just as bad on the server... Can anyone see anything wrong with the above design? It doesn't matter if the above has to be changed. Thanks jtnire
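
    For the encryption mechanics themselves, the standard javax.crypto API covers what Server B would need. The sketch below is only an illustration under a simplifying assumption: it generates a single in-memory AES key, whereas in the design above the key material would be split between Server A and Server B and managed under PCI DSS key-management rules.

        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;

        public class PanCrypto {
            public static void main(String[] args) throws Exception {
                // Illustrative only: key generated locally instead of split across servers.
                KeyGenerator keyGen = KeyGenerator.getInstance("AES");
                keyGen.init(128);
                SecretKey key = keyGen.generateKey();

                // Random IV, stored alongside the ciphertext.
                byte[] iv = new byte[16];
                new SecureRandom().nextBytes(iv);

                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                byte[] encryptedPan = cipher.doFinal("4111111111111111".getBytes("UTF-8"));

                cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
                String pan = new String(cipher.doFinal(encryptedPan), "UTF-8");
                System.out.println(pan); // prints the test PAN back
            }
        }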

    Read the article

  • Google AppEngine + Local JUnit Tests + Jersey framework + Embedded Jetty

    - by xamde
    I use Google App Engine for Java (GAE/J). On top, I use the Jersey REST framework. Now I want to run local JUnit tests. The test sets up the local GAE development environment ( http://code.google.com/appengine/docs/java/tools/localunittesting.html ), launches an embedded Jetty server, and then fires requests to the server via HTTP and checks responses. Unfortunately, the Jersey/Jetty combo spawns new threads. GAE expects only one thread to run. In the end, I end up having either no datastore inside the Jersey resources, or multiple threads each seeing a different datastore. As a workaround I initialise the GAE local env only once, put it in a static variable, and inside the GAE resource I add many checks (this thread has no dev env? Re-use the static one). And these checks should of course only run inside JUnit tests (which I asked about before: "How can I find out if code is running inside a JUnit test or not?" - I'm not allowed to post the link directly here :-|)
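
    For reference, this is roughly what the standard single-threaded setup from the local unit testing docs looks like; a minimal sketch, assuming the appengine-testing and appengine-api-stubs jars are on the test classpath. The helper binds the stub environment to the current thread only, which is exactly why threads spawned by Jetty/Jersey don't see it unless the environment is propagated to them explicitly.

        import static org.junit.Assert.assertNotNull;

        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        public class DatastoreSmokeTest {

            private final LocalServiceTestHelper helper =
                    new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

            @Before
            public void setUp() {
                helper.setUp(); // binds the local GAE environment to the current thread
            }

            @After
            public void tearDown() {
                helper.tearDown();
            }

            @Test
            public void datastoreIsAvailable() {
                assertNotNull(DatastoreServiceFactory.getDatastoreService());
            }
        }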

    Read the article

  • Problems with video conversions through the web (local host)

    - by ron-d
    Hello, I get the following errors when I attempt video format conversions called from the local host: “An invalid media type was specified” for M4V to WMV conversions. “One or more arguments are invalid” for MP4 to WMV conversions. Here are the details of the problems: I’ve written a DLL in C# that accepts videos in the formats AVI, WMV, M4V and MP4 and performs the following actions: Creates a copy of the input video in WMV format. Creates a WAV file of the input video's audio portion. Creates a JPG image from a frame of the input video. I attached the DLL to an ASP.NET web project that performs the DLL actions. When tested through the developer studio, the actions are performed as intended for all formats. When I deploy the web project so that it is served from the local host through the web browser, the following behavior takes place: WMV format: All actions performed as intended. AVI format: Creates WMV file – OK. Creates JPG image – OK. Creates empty WAV file – problem. M4V format: Creates empty WAV file – problem. Does not create WMV file – problem. Does not create JPG file – problem. Throws the error “An invalid media type was specified”. MP4 format: Creates empty WAV file – problem. Does not create WMV file – problem. Does not create JPG file – problem. Throws the error “One or more arguments are invalid”. When I check their security properties, all the files have the same access permissions. Can anyone guide me as to how to solve these problems when the web project is called from the local host? Thank you.

    Read the article

  • Maven. What to do with "homeless" jars?

    - by Jake
    I have some proprietary.jar that I need to include in my project, but I don't wish to install it to the local repository. What I did initially was to put the jar into version control in my project's lib/ folder, and then specify the Maven dependency as: <!-- LOCAL DEPENDENCY --> <dependency> <groupId>topsecret</groupId> <artifactId>proprietary</artifactId> <version>0.0.1</version> <scope>system</scope> <systemPath>${basedir}/lib/java/proprietary.jar</systemPath> </dependency> However, this becomes a big problem when my project becomes someone else's dependency. Maven will not be able to validate this POM because the path is not absolute. What is the best way to overcome this problem?

    Read the article

  • Thread feeding other MultiThreading

    - by alaamh
    I see it's easy to open pipe between two process using fork, but how we can passing open pipe to threads. Assume we need to pass out of PROGRAM A to PROGRAM B "may by more than one thread", PROGRAM B send his output to PROGRAM C #include <stdio.h> #include <stdlib.h> #include <pthread.h> struct targ_s { char* reader; }; void *thread1(void *arg) { struct targ_s *targ = (struct targ_s*) arg; int status, fd[2]; pid_t pid; pipe(fd); pid = fork(); if (pid == 0) { int fd = fileno( targ->fd_reader ); dup2(STDIN_FILENO, fd); close(fd[0]); dup2(fd[1], STDOUT_FILENO); close(fd[1]); execvp ("PROGRAM B", NULL); exit(1); } else { close(fd[1]); dup2(fd[0], STDIN_FILENO); close(fd[0]); execl("PROGRAM C", NULL); wait(&status); return NULL; } } int main(void) { FILE *fpipe; char *command = "PROGRAM A"; char buffer[1024]; if (!(fpipe = (FILE*) popen(command, "r"))) { perror("Problems with pipe"); exit(1); } char* outfile = "out.dat"; FILE* f = fopen (outfile, "wb"); int fd = fileno( f ); struct targ_s targ; targ.fd_reader = outfile; pthread_t thid; if (pthread_create(&thid, NULL, thread1, &targ) != 0) { perror("pthread_create() error"); exit(1); } int len; while (read(fpipe, buffer, sizeof (buffer)) != 0) { len = strlen(buffer); write(fd, buffer, len); } pclose(fpipe); return (0); }

    Read the article

  • No matter what, I can't get this stupid progress bar to update from a thread!

    - by Synthetix
    I have a Windows app written in C (using gcc/MinGW) that works pretty well except for a few UI problems. One, I simply cannot get the progress bar to update from a thread. In fact, I probably can't get ANY UI stuff to update. Basically, I have a spawned thread that does some processing, and from that thread I attempt to update the progress bar in the main thread. I tried this by using PostMessage() to the main hwnd, but no luck even though I can do other things like open message boxes. However, it's unclear whether the message box is getting called within the thread or on the main thread. Here's some code: //in header/globally accessible HWND wnd; //main application window HWND progress_bar; //progress bar typedef struct { //to pass to thread DWORD mainThreadId; HWND mainHwnd; char *filename; } THREADSTUFF; //callback function LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam){ switch(msg){ case WM_CREATE:{ //create progress bar progress_bar = CreateWindowEx( 0, PROGRESS_CLASS, (LPCTSTR)NULL, WS_CHILD | WS_VISIBLE, 79,164,455,15, hwnd, (HMENU)20, NULL, NULL); break; } case WM_COMMAND:{ if(LOWORD(wParam)==2){ //do some processing in a thread //struct of stuff I need to pass to thread THREADSTUFF *threadStuff; threadStuff = (THREADSTUFF*)malloc(sizeof(*threadStuff)); threadStuff->mainThreadId = GetCurrentThreadId(); threadStuff->mainHwnd = hwnd; threadStuff->filename = (void*)&filename; hThread1 = CreateThread(NULL,0,convertFile (LPVOID)threadStuff,0,NULL); }else if(LOWORD(wParam)==5){ //update progress bar MessageBox(hwnd,"I got a message!", "Message", MB_OK | MB_ICONINFORMATION); PostMessage(progress_bar,PBM_STEPIT,0,CLR_DEFAULT); } break; } } } This all seems to work okay. The problem is in the thread: DWORD WINAPI convertFile(LPVOID params){ //get passed params, this works perfectly fine THREADSTUFF *tData = (THREADSTUFF*)params; MessageBox(tData->mainHwnd,tData->filename,"File name",MB_OK | MB_ICONINFORMATION); //yep PostThreadMessage(tData->mainThreadId,WM_COMMAND,5,0); //only shows message PostMessage(tData->mainHwnd,WM_COMMAND,5,0); //only shows message } When I say, "only shows message," that means the MessageBox() function in the callback works, but not the PostMessage() to update the position of the progress bar. What am I missing?

    Read the article

  • Raspberry Pi broadcast serial port data to local network

    - by D051P0
    I didn't find anything to help me with this problem. What I want is: a serial device repeatedly sends some data to the serial port. The Raspberry Pi should get this data from RxD and stream it to the local network via port 10001 without filtering it, so I can find this device on my PC. This should also work in the other direction: the Raspberry Pi listens on port 10001 and forwards all data from the local network to TxD. I'm a newbie in the Linux world. How can I listen on some port on the Raspberry Pi and send a broadcast to the same port? I'm using Raspbian Wheezy with soft float. I have found the Pi4j library for Java, which I already use to read and write data from/to the serial port. final Serial serial = SerialFactory.createInstance(); serial.addListener(new SerialDataListener() { public void dataReceived(SerialDataEvent event) { forward(event.getData()); } }); event.getData() is a String, which I want to broadcast on my local network. Is it generally a good idea to use Java for that? I also need a String from port 10001, which I can forward to the serial port.
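
    One way to get each serial String onto the local network is a UDP broadcast sent from inside that listener, with a second socket listening on the same port for the reverse direction. A minimal sketch follows; the broadcast address 192.168.1.255 is an assumption to adjust for the actual subnet, and SerialWriter is a hypothetical callback standing in for whatever writes back to TxD via Pi4j.

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class UdpForwarder {
            private static final int PORT = 10001;

            // Called from the Pi4j SerialDataListener with event.getData()
            public static void forward(String data) throws Exception {
                DatagramSocket socket = new DatagramSocket();
                try {
                    socket.setBroadcast(true);
                    byte[] payload = data.getBytes("UTF-8");
                    InetAddress broadcast = InetAddress.getByName("192.168.1.255");
                    socket.send(new DatagramPacket(payload, payload.length, broadcast, PORT));
                } finally {
                    socket.close();
                }
            }

            // Listens on the same port and hands each received String to the serial side.
            public static void listen(SerialWriter writer) throws Exception {
                DatagramSocket socket = new DatagramSocket(PORT);
                byte[] buffer = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    writer.write(new String(packet.getData(), 0, packet.getLength(), "UTF-8"));
                }
            }

            // Hypothetical callback interface for the TxD side.
            public interface SerialWriter {
                void write(String line);
            }
        }

    If the PC-side tool expects a TCP connection instead (as some device-discovery tools on port 10001 do), the same structure applies with a ServerSocket in place of the DatagramSocket.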

    Read the article

  • File URI link to local folder in IE7 not working

    - by Kakmonstret
    No matter what I do I cannot get either of these local File URIs: <a href="file://C:/Folder/">...</a> <a href="file://C|/Folder/">...</a> <a href="C:\">...</a> ... ...to work in IE7 (on Vista). I've tried putting the site in different zones (Local Intranet, Trusted Sites), turning on/off Protected Mode and fiddling with the security settings for the active zone. I've also tried many variations of the URI. But when I click the links, nothing happens. No errors either. Well-formed file URIs to the local file system work in all other browsers I've tried, including IE6 and IE8. I've also gathered from what I've read on the web that this should work. Also, the following JavaScript results in an access-denied error, regardless of zone/settings: <a href="#" onclick="return window.open('C:\\Folder\\');">#</a> Did this stop working in the latest IE7 and/or the latest Vista, or something? And if so, why is it working again in IE8 on Windows 7? Is there a workaround of some kind? (I don't know how to test with IE7 on anything other than Vista.)

    Read the article

  • converting a UTC time to a local time zone in Java

    - by aloo
    I know this subject has been beaten to death, but after searching for a few hours on this problem I had to ask. My problem: do calculations on dates on a server based on the current time zone of a client app (iPhone). The client app tells the server, in seconds, how far its time zone is away from GMT. I would like to then use this information to do computation on dates on the server. The dates on the server are all stored as UTC time. So I would like to get the HOUR of a UTC Date object after it has been converted to this local time zone. My current attempt: int hours = (int) Math.floor(secondsFromGMT / (60.0 * 60.0)); int mins = (int) Math.floor((secondsFromGMT - (hours * 60.0 * 60.0)) / 60.0); String sign = hours > 0 ? "+" : "-"; Calendar now = Calendar.getInstance(); TimeZone t = TimeZone.getTimeZone("GMT" + sign + hours + ":" + mins); now.setTimeZone(t); now.setTime(someDateTimeObject); int hourOfDay = now.get(Calendar.HOUR_OF_DAY); The variables hours and mins represent the hours and minutes the local time zone is away from GMT. After debugging this code - the variables hours, mins and sign are correct. The problem is hourOfDay does not return the correct hour - it is returning the hour as of UTC time and not local time. Ideas?
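
    A simpler approach that avoids building the "GMT+h:m" string (and the sign handling it needs for negative offsets) is to construct the TimeZone directly from the raw offset in milliseconds. A short sketch, assuming secondsFromGMT and the stored UTC Date are available:

        import java.util.Calendar;
        import java.util.Date;
        import java.util.SimpleTimeZone;
        import java.util.TimeZone;

        public class ClientHour {
            // Hour of day of a UTC instant, as seen from the client's time zone.
            public static int hourOfDayFor(Date utcInstant, int secondsFromGMT) {
                TimeZone clientZone = new SimpleTimeZone(secondsFromGMT * 1000, "client");
                Calendar cal = Calendar.getInstance(clientZone);
                cal.setTime(utcInstant);
                return cal.get(Calendar.HOUR_OF_DAY);
            }

            public static void main(String[] args) {
                // Example: a client 5 hours behind GMT (offset of -18000 seconds).
                System.out.println(hourOfDayFor(new Date(), -5 * 3600));
            }
        }

    Note that a raw offset ignores daylight saving rules, which is all the client is reporting in this scheme anyway.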

    Read the article

  • In C# ,how do I terminate a thread that has had its call stack corrupted?

    - by Emil D
    I have a thread in my application that is running code that can potentially cause call stack corruption (my application is a testing tool for DLLs). Assuming that I have a method of detecting that the child thread is misbehaving, how would I terminate it? From what I read, calling Thread.Abort() on the misbehaving thread would be equivalent to raising an exception inside it. I fear that may not be a good idea, given that the call stack of the thread might be corrupted. Any suggestions?

    Read the article

  • Is `super` a local variable?

    - by Michael
    // A : Parent @implementation A -(id) init { // change self here then return it } @end A A *a = [[A alloc] init]; a. Just wondering, is self a local variable or a global? If it's local, then what is the point of self = [super init] in init? I can successfully define some local variable and use it like this; why would I need to assign it to self? -(id) init { id tmp = [super init]; if(tmp != nil) { //do stuff } return tmp; } b. If [super init] returns some other object instance and I have to overwrite self, then I will not be able to access A's methods any more, since it will be a completely new object? Am I right? c. super and self point to the same memory, and the major difference between them is method lookup order. Am I right? Sorry, I don't have a Mac to try this on; learning the theory for now...

    Read the article
