Search Results

Search found 17808 results on 713 pages for 'installation package'.


  • How to run node.js app on port 80? Are processes blocking my port?

    - by Lucas
    I believe the port 80 on my remote instance is blocked, and I am trying to run a node.js app using port 80. I have experimented with ports 3000 and 3002, and both ports are working fine, but I get an error when running on port 80. I suspect port 80 is blocked from my output of netstat -an below, but how can I find the process id's of the addresses that are blocking port 80 below? [lucas@ecoinstance]~/node/nodetest1$ netstat -an Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:3002 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:27017 127.0.0.1:51108 ESTABLISHED tcp 0 0 127.0.0.1:51106 127.0.0.1:27017 ESTABLISHED tcp 0 0 127.0.0.1:27017 127.0.0.1:51106 ESTABLISHED tcp 0 0 127.0.0.1:51107 127.0.0.1:27017 ESTABLISHED tcp 0 0 10.240.241.116:3002 174.61.171.61:36583 TIME_WAIT tcp 0 0 127.0.0.1:27017 127.0.0.1:51109 ESTABLISHED tcp 0 0 10.240.241.116:42423 169.254.169.254:80 ESTABLISHED tcp 0 0 127.0.0.1:51108 127.0.0.1:27017 ESTABLISHED tcp 0 532 10.240.241.116:22 174.61.171.61:56824 ESTABLISHED tcp 0 0 127.0.0.1:27017 127.0.0.1:51107 ESTABLISHED tcp 0 0 10.240.241.116:42412 169.254.169.254:80 ESTABLISHED tcp 0 0 127.0.0.1:51109 127.0.0.1:27017 ESTABLISHED tcp 0 0 127.0.0.1:51105 127.0.0.1:27017 ESTABLISHED tcp 0 0 10.240.241.116:42422 169.254.169.254:80 TIME_WAIT tcp 0 0 127.0.0.1:27017 127.0.0.1:51105 ESTABLISHED tcp6 0 0 :::22 :::* LISTEN udp 0 0 0.0.0.0:49948 0.0.0.0:* udp 0 0 0.0.0.0:68 0.0.0.0:* udp 0 0 10.240.241.116:123 0.0.0.0:* udp 0 0 127.0.0.1:123 0.0.0.0:* udp 0 0 0.0.0.0:123 0.0.0.0:* udp6 0 0 :::12151 :::* udp6 0 0 :::123 :::* Active UNIX domain sockets (servers and established) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 405680 /tmp/ssh-KdkxJfFLpKTC/agent.22 813 unix 2 [ ACC ] STREAM LISTENING 408230 /tmp/ssh-ofUeNNEwAqtP/agent.22 243 unix 2 [ ACC ] STREAM LISTENING 416227 /tmp/mongodb-27017.sock unix 2 [ ACC ] SEQPACKET LISTENING 3692 /run/udev/control unix 7 [ ] DGRAM 5286 /dev/log unix 2 [ ACC ] STREAM LISTENING 5318 /var/run/acpid.socket unix 2 [ ACC ] STREAM LISTENING 16170 /tmp//tmux-1000/default unix 2 [ ACC ] STREAM LISTENING 414450 /var/run/dbus/system_bus_socke And here is the log when trying to run on port 80 with node.js: [lucas@ecoinstance]~/node/nodetest1$ npm start > [email protected] start /home/lucas/node/nodetest1 > node ./bin/www events.js:72 throw er; // Unhandled 'error' event ^ Error: listen EACCES at errnoException (net.js:904:11) at Server._listen2 (net.js:1023:19) at listen (net.js:1064:10) at Server.listen (net.js:1138:5) at Function.app.listen (/home/lucas/node/nodetest1/node_modules/express/lib/applicati on.js:532:24) at Object.<anonymous> (/home/lucas/node/nodetest1/bin/www:7:18) at Module._compile (module.js:456:26) at Object.Module._extensions..js (module.js:474:10) at Module.load (module.js:356:32) at Function.Module._load (module.js:312:12) npm ERR! [email protected] start: `node ./bin/www` npm ERR! Exit status 8 npm ERR! npm ERR! Failed at the [email protected] start script. npm ERR! This is most likely a problem with the nodetest1 package, npm ERR! not with npm itself. npm ERR! Tell the author that this fails on your system: npm ERR! node ./bin/www npm ERR! You can get their info via: npm ERR! npm owner ls nodetest1 npm ERR! There is likely additional logging output above. npm ERR! System Linux 3.13-0.bpo.1-amd64 npm ERR! 
command "/usr/local/bin/node" "/usr/local/bin/npm" "start" npm ERR! cwd /home/lucas/node/nodetest1 npm ERR! node -v v0.10.28 npm ERR! npm -v 1.4.9 npm ERR! code ELIFECYCLE npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /home/lucas/node/nodetest1/npm-debug.log npm ERR! not ok code 0 And sudo netstat -lnp does not return any matching port 80's: [lucas@ecoinstance]~/node/nodetest1$ sudo netstat -lnp [48/648] Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Progr am name tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 29160/mon god tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1976/sshd tcp6 0 0 :::22 :::* LISTEN 1976/sshd udp 0 0 0.0.0.0:49948 0.0.0.0:* 1604/dhcl ient udp 0 0 0.0.0.0:68 0.0.0.0:* 1604/dhcl ient udp 0 0 10.240.241.116:123 0.0.0.0:* 2076/ntpd udp 0 0 127.0.0.1:123 0.0.0.0:* 2076/ntpd udp 0 0 0.0.0.0:123 0.0.0.0:* 2076/ntpd udp6 0 0 :::12151 :::* 1604/dhcl ient udp6 0 0 :::123 :::* 2076/ntpd Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node PID/Program name Path unix 2 [ ACC ] STREAM LISTENING 405680 22814/ssh-agent /tmp/ssh-K dkxJfFLpKTC/agent.22813 unix 2 [ ACC ] STREAM LISTENING 408230 24049/ssh-agent /tmp/ssh-o fUeNNEwAqtP/agent.22243 unix 2 [ ACC ] STREAM LISTENING 416227 29160/mongod /tmp/mongo db-27017.sock unix 2 [ ACC ] SEQPACKET LISTENING 3692 284/udevd /run/udev/ control unix 2 [ ACC ] STREAM LISTENING 5318 1798/acpid /var/run/a cpid.socket unix 2 [ ACC ] STREAM LISTENING 16170 5177/tmux /tmp//tmux -1000/default unix 2 [ ACC ] STREAM LISTENING 414450 28213/dbus-daemon /var/run/d bus/system_bus_socket unix 2 [ ACC ] STREAM LISTENING 404225 22324/1 /tmp/ssh-9 TlDmu4bjl/agent.22324

    Read the article

  • Cross-platform distributed fault-tolerant (disconnected operation/local cache) filesystem

    - by Adrian Frühwirth
    We are facing a design "challenge" where we are required to set up a storage solution with the following properties: What we need: HA; a scalable storage backend; offline/disconnected operation on the client to account for network outages; cross-platform access: client-side access from certainly Windows (probably XP upwards), possibly Linux; backend integrates with AD/LDAP (permission management (user/group management, ...)); should work reasonably well over slow WAN-links. Another problem is that we don't really know all possible use cases here, i.e. whether people need to be able to have concurrent access to shared files or if they will only be accessing their own files, so a possible solution needs to account for concurrent access and how conflict management would look in this case from a user's point of view. This two-year-old blog post sums up the impression that I have been getting during the last couple of days of research: that there are lots of current übercool projects implementing (non-Windows) clustered petabyte-capable blob-storage solutions, but that there is none that supports disconnected operation nicely and natively. I am hoping that we have missed an obvious solution. What we have tried: OpenAFS We figured that we want a distributed network filesystem with a local cache and tested OpenAFS (which, as the only currently "stable" DFS supporting disconnected operation, seemed the way to go) for a week, but there are several problems with it: it's a real pain to set up; there are no official RHEL/CentOS packages; the package of the current stable version 1.6.5.1 from elrepo randomly kernel panics on fresh installs, which is an absolute no-go; Windows support (including the required Kerberos packages) is mystical. The current client for the 1.6 branch does not run on Windows 8, the current client for the 1.7 branch does but it just randomly crashes. After that experience we didn't even bother testing on XP and Windows 7. Suffice to say, we couldn't get it working, and the whole setup has been so unstable and complicated to set up that it's just not an option for production. Samba + Unison Since OpenAFS was a complete disaster and no other DFS seems to support disconnected operation, we went for a simpler idea that would sync files against a Samba server using Unison. This has the following advantages: Samba integrates with ADs; it's a pain but can be done. Samba solves the problem of remotely accessing the storage from Windows but introduces another SPOF and does not address the actual storage problem. We could probably stick any clustered FS underneath Samba, but that means we need an HA Samba setup on top of that to maintain HA, which probably adds a lot of additional complexity. I vaguely remember trying to implement redundancy with Samba before and I could not fail over between servers silently. Even when online, you are working with local files, which will result in more conflicts than would be necessary if a local cache were only touched when disconnected. It's not automatic. We cannot expect users to manually sync their files using the (functional, but not-so-pretty) GTK GUI on a regular basis. I attempted to semi-automate the process using the Windows task scheduler, but you cannot really do it in a satisfactory way. On top of that, the way Unison works makes syncing against Samba a costly operation, so I am afraid that it just doesn't scale very well or even at all. Samba + "Offline Files" After that we became a little desperate and gave Windows "offline files" a chance. 
We figured that having something that is built into the OS would reduce administrative effort, help with blaming someone else when it's not working properly, and should just work since people have been using it for years. Right? Wrong. We really wanted it to work, but it just doesn't. 30 minutes of copying files around and unplugging network cables/disabling network interfaces left us with (silent! there is only a tiny notification in Windows explorer in the statusbar, which doesn't even open Sync Center if you click on it!) undeletable files on the server (!) and conflicts that should not even be conflicts. In the end, we had one successful sync of a tiny text file; everything else just exploded horribly. Beyond that, there are other problems: Microsoft admits that "offline files" in Windows XP cannot cope with "large files" and therefore does not cache/sync them at all, which would mean those files become unavailable if the connection drops. In Windows 7 the feature is only available in the Professional/Ultimate/Enterprise editions. Summary Unless there is another fault-tolerant DFS that supports Windows natively, I assume that stacking an HA Samba cluster on top of something like GlusterFS/Lustre/whatnot is the only option, but I hope that I am wrong here. How do other companies allow fault-tolerant network access to redundant storage in a heterogeneous environment with Windows?
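
    For what it's worth, the Samba + Unison idea above can be automated so that users never have to touch the GTK GUI; a rough sketch of an unattended sync pass on a Linux client follows, with the server, share, mount point, credentials file and paths all hypothetical (a Windows client would do the equivalent against a mapped drive from a scheduled task):

        # Mount the Samba share using stored AD credentials (hypothetical server/share/paths)
        sudo mount -t cifs //fileserver/projects /mnt/projects -o credentials=/etc/samba/unison.cred,iocharset=utf8

        # Two-way sync without prompting; conflicting updates are skipped and logged for manual review
        unison /home/alice/projects /mnt/projects/alice -batch -auto -times -logfile /var/log/unison-alice.log

    This only addresses the "it's not automatic" point; the SPOF, conflict-handling and scaling concerns raised above remain.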

    Read the article

  • Can't deploy rails 4 app on Bluehost with Passenger 4 and nginx

    - by user2205763
    I am at Bluehost (dedicated server) trying to run a rails 4 app. I asked to have my server re-imaged, specifying that I do not want rails, ruby, or passenger install automatically as I wanted to install the latest versions myself using a version manager (Bluehost by default offers rails 2.3, ruby 1.8, and passenger 3, which won't work with my app). I installed ruby 1.9.3p327, rails 4.0.0, and passenger 4.0.5. I can verify this by typing, "ruby -v", "rails -v", and "passenger -v" (also "gem -v"). I made sure to install these not as root, so that I don't get a 403 forbidden error when trying to deploy the app. I installed passenger by typing "gem install passenger", and then installed the nginx passenger module (into "/nginx") with "passenger-install-nginx-module". I am trying to run my rails app on a subdomain, http://development.thegraduate.hk (I am using the subdomain to show my client progress on the website). In bluehost I created that subdomain, and had it point to "public_html/thegraduate". I then created a symlink from "rails_apps/thegraduate/public" to "public_html/thegraduate" and verified that the symlink exists. The problem is: when I go to http://development.thegraduate.hk, I get a directory listing. There is nothing resembling a rails app. I have not added a .htaccess file to /rails_apps/thegraduate/public, as that was never specified in the installation of passenger. It was meant to be 'install and go'. When I type "passenger-memory-status", I get 3 things: - Apache processes (7) - Nginx processes (0) - Passenger processes (0) So it appears that nginx and passenger are not running, and I can't figure out how to get it to run (I'm not looking to have it run as a standalone server). Here is my nginx.conf file (/nginx/conf/nginx.conf): #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { passenger_root /home/thegrad4/.rbenv/versions/1.9.3-p327/lib/ruby/gems/1.9.1/gems/passenger-4.0.5; passenger_ruby /home/thegrad4/.rbenv/versions/1.9.3-p327/bin/ruby; include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name development.thegraduate.hk; root ~/rails_apps/thegraduate/public; passenger_enabled on; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / 
{ # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } I don't get any errors, just the directory listing. I've tried to be as detailed as possible. Any help on this issue would be greatly appreciated as I've been stumped for the past 3 days. Scouring the web has not helped as my issue seems to be specific to me. Thanks so much. If there are any potential details I forgot to specify, just ask. ** ADDITIONAL INFORMATION ** Going to development.thegraduate.hk/public/ will correctly display the index.html page in /rails_apps/thegraduate/public. However, changing root in the routes.rb file to "root = 'home#index'" does nothing.
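
    Two things stand out from the description above: passenger-memory-status shows zero Nginx/Passenger processes, which suggests the nginx built by passenger-install-nginx-module was never actually started (Bluehost's Apache is what is serving the directory listing on port 80), and the config both uses root ~/rails_apps/thegraduate/public (nginx does not expand ~, it needs an absolute path) and keeps a location / { root html; ... } block that overrides the Passenger root. A minimal sketch for checking and starting the custom nginx, assuming the /nginx install prefix mentioned above:

        sudo /nginx/sbin/nginx -t -c /nginx/conf/nginx.conf   # verify the config parses
        sudo /nginx/sbin/nginx -c /nginx/conf/nginx.conf      # start nginx; the installer does not create an init script
        passenger-memory-status                               # should now list nginx and Passenger processes

    Note that nginx cannot bind port 80 while Apache is listening on it, so Apache has to be stopped, or nginx moved to another port, or the app proxied through Apache instead.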

    Read the article

  • Reverse lookup SERVFAIL

    - by Quan Tran
    I just set up a DNS server and a web server using Virtualbox. The IP address of the DNS server is 192.168.56.101 and the web server 192.168.56.102. Here are my configuration files for the DNS server: named.conf: // // named.conf // // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS // server as a caching only nameserver (as a localhost DNS resolver only). // // See /usr/share/doc/bind*/sample/ for example named configuration files. // options { directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; //query-source address * port 53; //forward first; forwarders { 8.8.8.8; 8.8.4.4; }; listen-on port 53 { 127.0.0.1; 192.168.56.0/24; }; allow-query { localhost; 192.168.56.0/24; }; recursion yes; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; }; logging { channel default_debug { file "data/named.run"; severity debug 10; print-category yes; print-time yes; print-severity yes; }; }; zone "quantran.com" in { type master; file "named.quantran.com"; }; zone "56.168.192.in-addr.arpa" in { type master; file "named.192.168.56"; allow-update { none; }; }; include "/etc/named.rfc1912.zones"; include "/etc/named.root.key"; named.quantran.com: $TTL 86400 quantran.com. IN SOA dns1.quantran.com. root.quantran.com. ( 100 ; serial 3600 ; refresh 600 ; retry 604800 ; expire 86400 ) IN NS dns1.quantran.com. dns1.quantran.com. IN A 192.168.56.101 www.quantran.com. IN A 192.168.56.102 named.192.168.56: $TTL 86400 $ORIGIN 56.168.192.in-addr.arpa. @ IN SOA dns1.quantran.com. root.quantran.com. ( 100 ; serial 3600 ; refresh 600 ; retry 604800 ; expire 86400 ) ; minimum IN NS dns1.quantran.com. 101.56.168.192.in-addr.arpa. IN PTR dns1.quantran.com. 102 IN PTR www.quantran.com. When I try a normal lookup from the host (I configured so that the only nameserver the host uses is the DNS server 192.168.56.101): quan@quantran:~$ host www.quantran.com www.quantran.com has address 192.168.56.102 quan@quantran:~$ host dns1.quantran.com dns1.quantran.com has address 192.168.56.101 But when I try a reverse lookup: quan@quantran:~$ host -v 192.168.56.101 192.168.56.101 Trying "101.56.168.192.in-addr.arpa" Using domain server: Name: 192.168.56.101 Address: 192.168.56.101#53 Aliases: Host 101.56.168.192.in-addr.arpa not found: 2(SERVFAIL) Received 45 bytes from 192.168.56.101#53 in 0 ms quan@quantran:~$ host -v 192.168.56.102 192.168.56.101 Trying "102.56.168.192.in-addr.arpa" Using domain server: Name: 192.168.56.101 Address: 192.168.56.101#53 Aliases: Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL) Received 45 bytes from 192.168.56.101#53 in 0 ms So why can't I perform a reverse lookup? Anything wrong with the zone configuration files? 
Thanks in advance :) Oh, here is the output from the log file /var/named/data/named.run when I perform the reverse lookup: quan@quantran:~$ host 192.168.56.102 192.168.56.101 Using domain server: Name: 192.168.56.101 Address: 192.168.56.101#53 Aliases: Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL) /var/named/data/named.run: 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: UDP request 02-Jun-2014 15:18:11.950 client: debug 5: client 192.168.56.1#51786: using view '_default' 02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: request is not signed 02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: recursion available 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: query 02-Jun-2014 15:18:11.950 client: debug 10: client 192.168.56.1#51786: ns_client_attach: ref = 1 02-Jun-2014 15:18:11.950 query-errors: debug 1: client 192.168.56.1#51786: query failed (SERVFAIL) for 102.56.168.192.in-addr.arpa/IN/PTR at query.c:5428 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: error 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: send 02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: sendto 02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: senddone 02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: next 02-Jun-2014 15:18:11.951 client: debug 10: client 192.168.56.1#51786: ns_client_detach: ref = 0 02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: endrequest 02-Jun-2014 15:18:11.951 client: debug 3: client @0xb537e008: udprecv Also, I made some changes to the log section in named.conf.
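
    A SERVFAIL on the reverse zone while forward lookups work usually means named could not load 56.168.192.in-addr.arpa at startup (a zone file syntax problem or a file permission/ownership issue under /var/named), so it has nothing to answer from. A reasonable first step is to validate the config and both zone files with the checking tools that ship with BIND, using the file names from named.conf above:

        named-checkconf /etc/named.conf
        named-checkzone quantran.com /var/named/named.quantran.com
        named-checkzone 56.168.192.in-addr.arpa /var/named/named.192.168.56
        grep -i '56.168.192.in-addr.arpa' /var/named/data/named.run   # look for "loading ... failed" or permission errors

    If the reverse zone fails the check, fix the reported line, run rndc reload, and retry the host -v lookup.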

    Read the article

  • Portable scripting language for a multi-server admin?

    - by Aaron
    Please Note: Portable as in portableapps.com, not the traditional definition. Originally posted on stackoverflow.com, asking here at another user's suggestion. I'm a DBA and sysadmin, mostly for Windows machines running SQL Server. I'm looking for a programming/scripting language for Windows that doesn't require Admin access or an installer, needing no install process other than expanding it into a folder. My intent is to have a language for automation on which I can standardize. Up to this point, I've been using a combination of batch files and Unix shell, using sh.exe from UnxUtils, but it's far from a perfect solution. I've evaluated a handful of options; all of them have at least one serious shortcoming or another. I have a strong preference for something open source or dual license, but I'm more interested in finding the right tool than anything else. Not interested in anything that relies on Cygwin or Java, but at this point I'd be fine with something that needs .NET. Requirements: Manageable footprint (1-100 files, under 30 MB installed) Run on Windows XP and Server (2003+) No installer (exe, msi) Works with external pipes, processes, and files Support for MS SQL Server or ODBC connections Bonus Points: Open Source FFI for calling functions in native DLLs GUI support (native or gtk, wx, fltk, etc) Linux, AIX, and/or OS X support Dynamic, object oriented and/or functional, interpreted or bytecode compiled; interactive development Able to package or compile scripts into executables So far I've tried: Ruby: 148 MB on disk, 23000 files Portable Python: 54 MB on disk, 2800 files Strawberry Perl: 123 MB on disk, 3600 files REBOL: Great, except closed source and no MSSQL or ODBC in free version Squeak Smalltalk: Great, except poor support for scripting ---- cut: points of clarification ---- Why all the limitations? I realize some of my criteria seem arbitrarily confining. It's primarily a product of my environment. I work as a SQL Server DBA and backup Unix admin at a division of a large company. In addition to nearly a hundred boxes running some version or another of SQL Server on Windows, I also support the SQL Server Express Edition installs on over a thousand machines in the field. Because of our security policies, I don't have login rights on every machine. Often enough, an issue comes up and I'm given local Admin for some period of time. Often enough, it's some box I've never touched and don't have my own environment set up on yet. I may have temporary admin rights on the box, but I'm not the admin for the machine; I'm just the DBA. I've no interest in stepping on the toes of the Windows admins, nor do I want to take over any of their duties. If I bring up "installing" something, suddenly it becomes a matter of interest for Production Control and the Windows admins; if I'm copying up a script, no one minds. The distinction may not mean much to the readers, but if someone gets the wrong idea I've suddenly got a long wait and significant overhead before I can get the tool installed and get the problem solved. That's why I want something that can be copied and run in the manner of a portable app. What about the small footprint? My company has three divisions, each in a different geographical location, and one of them is a new acquisition. We have different production control/security policies in each division. I support our MSSQL databases in all three divisions. The field machines are spread around the US, sometimes connecting to the VPN over very slow links. 
Installing Ruby using psexec has taken a long time over these connections. In these instances, the bigger time waster seems to be archives with thousands and thousands of files rather than their sheer size. You could say I'm spoiled by Unix, where the admins usually have at least some modern scripting language installed; I'd use PowerShell, but I don't know it well and, more importantly, it isn't everywhere I need to work. It's a regular occurrence that I need to write, deploy and execute some script on short notice on some machine I've never logged in to before. Since having Ruby or something similar installed on every machine I'll ever need to touch is effectively impossible because of the approvals, time, and Windows admin labor needed, it makes more sense to find a solution that allows me to work on my own terms.

    Read the article

  • What do I do when I get a Linux kernel bug?

    - by raldi
    I just bought a tiny computer called a fit-pc2 which came with a somewhat customized Ubuntu 9.10 installation. uname -a reports: Linux 2.6.31-34-fitpc2 #7 SMP Thu Apr 22 17:43:26 IDT 2010 i686 GNU/Linux It seems that after several hours of running with heavy network load, all networking ceases and I get the following in kern.log: BUG: unable to handle kernel paging request at ff09dfc0 IP: [<c0150300>] kthread_should_stop+0x10/0x20 *pde = 00000000 Oops: 0000 [#1] SMP last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/idVendor Modules linked in: binfmt_misc ppdev sbc_fitpc2_wdt snd_usb_audio snd_usb_lib i2c_isch sch_gpio snd_seq_dummy snd_hda_intel snd_pcm_oss snd_seq_oss snd_seq_midi snd_rawmidi snd_mixer_oss snd_seq_midi_event snd_seq snd_pcm snd_timer snd_page_alloc snd_seq_device iptable_filter ip_tables x_tables snd_hwdep lpc_sch snd psmouse rt2860sta(C) uvcvideo video pl2303 soundcore mfd_core output videodev v4l1_compat lirc_igorplugusb lirc_dev serio_raw lp parport usbhid r8169 mii iegd_mod drm agpgart Pid: 16, comm: kblockd/1 Tainted: G C (2.6.31-34-fitpc2 #7) SBC-FITPC2 EIP: 0060:[<c0150300>] EFLAGS: 00010246 CPU: 1 EIP is at kthread_should_stop+0x10/0x20 EAX: ff09dfc4 EBX: c180cbac ECX: 0109d000 EDX: f709df98 ESI: f709df98 EDI: c180cba0 EBP: f709dfb8 ESP: f709df90 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 Process kblockd/1 (pid: 16, ti=f709c000 task=f7084b60 task.ti=f709c000) Stack: c014c14d c180cba4 00000000 f7084b60 c0150770 f709dfa4 f709dfa4 f7023ef4 <0> c180cba0 c014c0d0 f709dfe0 c015047c 00000000 00000000 00000000 f709dfcc <0> f709dfcc c0150400 00000000 00000000 00000000 c0103ce7 f7023ef4 00000000 Call Trace: [<c014c14d>] ? worker_thread+0x7d/0xe0 [<c0150770>] ? autoremove_wake_function+0x0/0x40 [<c014c0d0>] ? worker_thread+0x0/0xe0 [<c015047c>] ? kthread+0x7c/0x90 [<c0150400>] ? kthread+0x0/0x90 [<c0103ce7>] ? 
kernel_thread_helper+0x7/0x10 Code: a6 8b 55 0c 8d 4d e0 89 f8 89 34 24 e8 7a fd ff ff 89 c3 eb 92 90 90 90 90 90 90 55 64 a1 00 80 76 c0 8b 80 70 02 00 00 89 e5 5d <8b> 40 fc c3 8d b6 00 00 00 00 8d bf 00 00 00 00 55 ba d7 86 62 EIP: [<c0150300>] kthread_should_stop+0x10/0x20 SS:ESP 0068:f709df90 CR2: 00000000ff09dfc0 ---[ end trace 06004df70b9cf435 ]--- BUG: unable to handle kernel paging request at ff09dfc8 IP: [<c0521bc8>] _spin_lock_irqsave+0x18/0x30 *pde = 00000000 Oops: 0002 [#2] SMP last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/idVendor Modules linked in: binfmt_misc ppdev sbc_fitpc2_wdt snd_usb_audio snd_usb_lib i2c_isch sch_gpio snd_seq_dummy snd_hda_intel snd_pcm_oss snd_seq_oss snd_seq_midi snd_rawmidi snd_mixer_oss snd_seq_midi_event snd_seq snd_pcm snd_timer snd_page_alloc snd_seq_device iptable_filter ip_tables x_tables snd_hwdep lpc_sch snd psmouse rt2860sta(C) uvcvideo video pl2303 soundcore mfd_core output videodev v4l1_compat lirc_igorplugusb lirc_dev serio_raw lp parport usbhid r8169 mii iegd_mod drm agpgart Pid: 16, comm: kblockd/1 Tainted: G D C (2.6.31-34-fitpc2 #7) SBC-FITPC2 EIP: 0060:[<c0521bc8>] EFLAGS: 00010086 CPU: 1 EIP is at _spin_lock_irqsave+0x18/0x30 EAX: 00000100 EBX: ff09dfc8 ECX: 00000286 EDX: ff09dfc8 ESI: f7084b60 EDI: ff09dfc4 EBP: f709dd88 ESP: f709dd88 DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 Process kblockd/1 (pid: 16, ti=f709c000 task=f7084b60 task.ti=f709c000) Stack: f709dda4 c0127c0b 00000082 00000001 ff09dfc4 f7084b60 00000000 f709ddd0 <0> c0137fd2 00000086 f70954c4 00000000 f7098480 f709ddf0 f7094fc0 f7084b60 <0> 00000000 00000009 f709ddf0 c013c3f8 00000001 c1807c60 f709ddf0 f7084b60 Call Trace: [<c0127c0b>] ? complete+0x1b/0x60 [<c0137fd2>] ? mm_release+0x52/0xf0 [<c013c3f8>] ? exit_mm+0x18/0x110 [<c013c6db>] ? do_exit+0xfb/0x2e0 [<c013998a>] ? print_oops_end_marker+0x2a/0x30 [<c0522aab>] ? oops_end+0x8b/0xd0 [<c011eac4>] ? no_context+0xb4/0xd0 [<c011eb1d>] ? __bad_area_nosemaphore+0x3d/0x1a0 [<c0133a56>] ? load_balance_newidle+0x96/0x320 [<c011ec92>] ? bad_area_nosemaphore+0x12/0x20 [<c0524106>] ? do_page_fault+0x2f6/0x380 [<c012cc30>] ? finish_task_switch+0x50/0xe0 [<c0523e10>] ? do_page_fault+0x0/0x380 [<c0522006>] ? error_code+0x66/0x70 [<c0523e10>] ? do_page_fault+0x0/0x380 [<c0150300>] ? kthread_should_stop+0x10/0x20 [<c014c14d>] ? worker_thread+0x7d/0xe0 [<c0150770>] ? autoremove_wake_function+0x0/0x40 [<c014c0d0>] ? worker_thread+0x0/0xe0 [<c015047c>] ? kthread+0x7c/0x90 [<c0150400>] ? kthread+0x0/0x90 [<c0103ce7>] ? kernel_thread_helper+0x7/0x10 Code: 00 00 00 55 89 e5 f0 83 28 01 79 05 e8 02 ff ff ff 5d c3 55 89 c2 89 e5 9c 58 8d 74 26 00 89 c1 fa 90 8d 74 26 00 b8 00 01 00 00 <f0> 66 0f c1 02 38 e0 74 06 f3 90 8a 02 eb f6 89 c8 5d c3 90 8d EIP: [<c0521bc8>] _spin_lock_irqsave+0x18/0x30 SS:ESP 0068:f709dd88 CR2: 00000000ff09dfc8 ---[ end trace 06004df70b9cf436 ]--- Fixing recursive fault but reboot is needed! This seems to happen at least once a day. How do I even begin to debug this?
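
    The usual first step with an oops like this is to map the faulting instruction pointer back to a source line in the exact kernel build and report it, together with the full log, to the kernel bug tracker or the fit-pc2 image vendor. A minimal sketch, assuming a vmlinux with debug symbols for the 2.6.31-34-fitpc2 build is available at the path shown (an assumption; vendor kernels often do not ship one):

        # Resolve the faulting EIP from the oops (c0150300) to file:line
        addr2line -e /usr/lib/debug/boot/vmlinux-2.6.31-34-fitpc2 c0150300

        # Or list the source around the symbol+offset the oops names
        gdb -batch /usr/lib/debug/boot/vmlinux-2.6.31-34-fitpc2 -ex 'list *(kthread_should_stop+0x10)'

    Since this is a paging request at a wild address that recurs daily under load, it is also worth ruling out bad RAM with memtest86+ and trying a newer kernel before digging deeper.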

    Read the article

  • How do I make solr/jetty find the installed slf4j jars in Ubuntu 12.04?

    - by J. Pablo Fernández
    I'm running Ubuntu 12.04's packaged Jetty in which I installed solr 4.3.1 (by copying the war file to /var/lib/jetty/webapps. When I start Jetty, I get this error: failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. The package libslf4j-java is installed, and the jars are in /usr/share/java: /usr/share/java/log4j-over-slf4j.jar /usr/share/java/slf4j-api.jar /usr/share/java/slf4j-jcl.jar /usr/share/java/slf4j-jdk14.jar /usr/share/java/slf4j-log4j12.jar /usr/share/java/slf4j-migrator.jar /usr/share/java/slf4j-nop.jar /usr/share/java/slf4j-simple.jar but somehow, Jetty and/or Solr are not finding them. How do I make them find them? or how do I install some other jars where jetty/solr would find them? The full error is: 88 [main] INFO org.mortbay.log - jetty-6.1.24 443 [main] INFO org.mortbay.log - Deploy /etc/jetty/contexts/javadoc.xml -> org.mortbay.jetty.handler.ContextHandler@cec0c5{/javadoc,file:/usr/share/jetty/javadoc} 522 [main] INFO org.mortbay.log - Extract file:/var/lib/jetty/webapps/solr.war to /var/cache/jetty/data/Jetty__8080_solr.war__solr__zdafkg/webapp 1501 [main] WARN org.mortbay.log - failed SolrRequestFilter: org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. For more information, see: http://wiki.apache.org/solr/SolrLogging 1501 [main] ERROR org.mortbay.log - Failed startup of context org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war} org.apache.solr.common.SolrException: Could not find necessary SLF4j logging jars. If using Jetty, the SLF4j logging jars need to go in the jetty lib/ext directory. For other containers, the corresponding directory should be used. 
For more information, see: http://wiki.apache.org/solr/SolrLogging at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:105) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:532) at java.lang.Class.newInstance0(Class.java:374) at java.lang.Class.newInstance(Class.java:327) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:92) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:662) at org.mortbay.jetty.servlet.Context.startContext(Context.java:140) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory at org.apache.solr.servlet.SolrDispatchFilter.<init>(SolrDispatchFilter.java:103) ... 36 more Caused by: java.lang.ClassNotFoundException: org.slf4j.LoggerFactory at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 
37 more 1505 [main] WARN org.mortbay.log - failed org.mortbay.jetty.webapp.WebAppContext@5329c5{/solr,file:/var/lib/jetty/webapps/solr.war}: java.lang.NoClassDefFoundError: org/slf4j/Logger 1579 [main] WARN org.mortbay.log - failed ContextHandlerCollection@19d0a1: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] INFO org.mortbay.log - Opened /var/log/jetty/2013_06_27.request.log 1582 [main] WARN org.mortbay.log - failed HandlerCollection@cbf30e: java.lang.NoClassDefFoundError: org/slf4j/Logger 1582 [main] ERROR org.mortbay.log - Error starting handlers java.lang.NoClassDefFoundError: org/slf4j/Logger at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2454) at java.lang.Class.getMethod0(Class.java:2697) at java.lang.Class.getMethod(Class.java:1622) at org.mortbay.log.Log.unwind(Log.java:228) at org.mortbay.log.Log.warn(Log.java:197) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:475) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130) at org.mortbay.jetty.Server.doStart(Server.java:224) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.mortbay.start.Main.invokeMain(Main.java:194) at org.mortbay.start.Main.start(Main.java:534) at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) Caused by: java.lang.ClassNotFoundException: org.slf4j.Logger at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:392) at org.mortbay.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:363) ... 29 more
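
    The exception text itself describes the fix: the SLF4J jars have to be visible to Jetty's own classloader, i.e. placed in Jetty's lib/ext directory, not just present in /usr/share/java. A minimal sketch that symlinks the packaged jars into place, assuming the Ubuntu 12.04 jetty package keeps its home at /usr/share/jetty (an assumption; check JETTY_HOME in /etc/default/jetty):

        sudo mkdir -p /usr/share/jetty/lib/ext
        sudo ln -s /usr/share/java/slf4j-api.jar   /usr/share/jetty/lib/ext/
        sudo ln -s /usr/share/java/slf4j-jdk14.jar /usr/share/jetty/lib/ext/   # JUL binding; needs no additional jars
        sudo service jetty restart

    Depending on how Solr should log, a different binding (e.g. slf4j-log4j12 plus log4j itself) may be preferable; the SolrLogging wiki page referenced in the error lists the exact jar set for each option.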

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to set up BugZilla to receive bug reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this I installed the test-taint package from the default perl repository; this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I run checksetup.pl, which informs me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN is the same. I assume XML-RPC is reliant on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out. How can I get the XML-RPC / Test-Taint function working? Current Setup: IIS7.5 on Windows Server 2008 Bugzilla 4.2.2 Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. 
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.
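
    The "feature is not available" message from xmlrpc.cgi means one of the XML-RPC interface's prerequisite Perl modules is failing to load, and the checksetup.pl warning points at Test::Taint as the weak link. A minimal diagnostic sketch, run from the Bugzilla directory under the same perl (SOAP::Lite and Test::Taint appear in the checksetup output above; XMLRPC::Lite ships with the SOAP-Lite distribution; treating a Test::Taint reinstall/upgrade as the fix is an assumption):

        cd C:\BugZilla
        rem Check that each XML-RPC prerequisite loads cleanly
        perl -MSOAP::Lite   -e "print $SOAP::Lite::VERSION"
        perl -MXMLRPC::Lite -e "print $XMLRPC::Lite::VERSION"
        perl -MTest::Taint  -e "print $Test::Taint::VERSION"
        rem If Test::Taint fails or warns, reinstall it from CPAN and re-run the setup check
        cpan Test::Taint
        perl checksetup.pl

    Whichever module fails to print a version (or prints the Taint.pm warning) is the one blocking xmlrpc.cgi.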

    Read the article

  • OS Missing? Messed up the MBR on Win7 64-bit

    - by hom3lesshom3boy
    I have a Windows 7 machine with two hard drives: a 1TB C: drive and 500GB J:. I had Windows XP installed on C: and Windows 7 installed on J:. I installed Windows 7 after Windows XP from an installer .exe I (legally) bought and downloaded. It, and all of my other files, are sitting on my J: drive intact. While under my Windows 7 install, a few days ago I decided to use Priform's CCleaner and use its DriveWipe utility to wipe the C: drive. 1% into the process, I cancelled and attempted to use it again. It gives me an error saying it can't format the drive, so I poke around the Internet a bit, give up, and restart my computer. I first get an "OS is missing" error after the computer boots past the BIOS. I downloaded and put UBCD on a bootable USB to use another drivewiping tool to completely erase the C: drive, hoping it'll take the problem with it. No luck. I try to use TestDisk to make my J: my primary active drive, but no luck. I still get the "OS is missing" error. Or sometimes it'll hang at Verifying DMI Pool. Or sometimes I'll get the "NTLDR is missing" error. I get hold of Hiren's and put it on another bootable USB. I first I tried the Boot Windows 7 from Hard Drive option, and I get "Error 15: File Not Found". I tried the "Fix 'NTLDR is Missing'" option (I'm not quite sure why this is even showing up, since I'm trying to get into a HDD with Windows 7 installed. Probably messed up somewhere when I used TestDisk) and I get this list: I'll run through the error messages I get: 1st Try - Windows could not start because the following file is missing or corrupt: \system32\hal.dll 2nd Try - Windows could not start because the following file is missing or corrupt: \system32\ntoskrnl.exe 3rd Try - Windows could not start because of a computer disk hardware configuration problem. Could not read from the selected boot disk. Check boot path and disk hardware. 4th - 8th Try - Same as #3 9th Try - I/O Error accessing boot sector file multi(0)disk(0)fdisk(0)\BOOTSEC.DOS. And computer freezes. 10th Try - computer restarts Needless to say, not a single one of those works. I then tried to open up the Windows 7 exe I have sitting on my J: from the Mini-XP OS on Hiren's, but it won't run because I'm trying to run a 64-bit file from a 32-bit exe. At least, that's the problem according to these guys: http://social.technet.microsoft.com/...-b2f54e9c7d18/ I then borrowed a 64-bit Windows Home Premium CD from a friend to get to the recovery options. But I get the error message: This version of System Recovery Options is not compatible with the version of Windows you are trying to repair. Try using a recovery disc that is compatible with this version of Windows. I pressed Shift + F10 to get to the Command Prompt directly. These are the exact steps I took from there (paraphrased a little): X:\Sources>bootrec /Fixmbr The operation completed successfully. X:\Sources>bootrec /Fixboot The operation completed successfully. I restarted my computer, but it still didn't work. I unplugged the C: drive, then tried bootrec and Diskpart: X:\Sources> bootrec.exe X:\Sources> bootrec /RebuildBcd Total identified Windows installations: 1 [1] \\?\GLOBALROOT\Device\HarddiskVolume1\Windows Add installation to bootlist? Yes(Y)/No(N)/All(A):y The requested system device cannot be found. X:\Sources>DiskPart DISKPART> List Disk Disk # Status Size Free Dyn Gpt Disk 0_Online_465GB_0B_______* Disk 1 Online 1000MB 0B (this is Hiren's on a bootable usb) DISKPART> Select Disk 0 Disk 0 is now the selected disk. 
DISKPART> List Partition Partition # Type Size Offset Partition 1 System 465GB 31KB DISKPART> Select Partition 1 Partition 1 is now the selected partition DISKPART> Active The selected disk is not a fixed MBR disk. The ACTIVE command can only be used on fixed MBR disks. DISKPART> exit Leaving Diskpart... X:\Sources>bootrec /Fixmbr The operation completed successfully. X:\Sources>bootrec /Fixboot The operation completed successfully. Before I go any further, is there anything I'm overlooking/doing wrong? All I care about is making the J: and Windows 7 bootable again. SPECS: Windows 7 Professional 64-Bit GIGABYTE - Motherboard - Socket 775 - GA-P35-DS3R (rev. 2.1) Crucial Ballistix 2048MB PC6400 DDR2 800MHz (2x2GB) Intel Core 2 Duo E6700 Processor (2.6 (6GHZ) I think... not sure anymore C: HDD - SAMSUNG HD103UJ (1TB, not plugged in) J: HDD - WDC WD5000AKS-00V1A0 (500GB)
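
    One thing worth trying before anything more invasive: let the Windows 7 disc rebuild the boot files directly from the intact installation on the 500 GB drive rather than patching the boot sector by hand. A minimal sketch from the recovery Command Prompt, assuming the old Windows 7 volume appears there as D: (drive letters inside the recovery environment rarely match the installed system, so confirm with diskpart first; bootsect.exe is in the \boot folder of the install disc if it is not on the path):

        rem Find the letter of the volume that actually contains \Windows
        diskpart
        DISKPART> list volume
        DISKPART> exit

        rem Recreate the BCD store and boot files on that volume, then write a Windows 7 boot sector and MBR code
        bcdboot D:\Windows /s D:
        bootsect /nt60 D: /mbr

    After that, make sure the BIOS is set to boot from the 500 GB drive, since the wiped 1 TB drive no longer has anything to boot.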

    Read the article

  • Need help identiying a nasty rootkit in Windows

    - by goofrider
    I have a nasty rootkit that no tools seem to be able to identify. I know for sure it's a rootkit, but I can't figure out which rootkit it is. Here's what I gathered so far: It creates multiple copies of itself in %HOME%\Local Settings\Temp with names like Q.EXE, IAJARZ.exe, etc., and installs them as hidden services. These EXEs have SysInternals identifiers in them so they're definitely rootkits. It hooks very deep into the system, including file read/write, security policies, registry read/write, and possibly WinSock/TCP/IP. When going to Sophos.com to download their software, the rootkit injects something called Microsoft Ajax Toolkit into the page, which injects code into the email submission form in order to redirect it. (EDIT: I might have panicked. Looks like Sophos does use an AJAX email form, their form is just broken on Chrome so it looked like a mail form injection attack, the link is http://www.sophos.com/en-us/products/free-tools/virus-removal-tool/download.aspx ) Super-Antispyware found a lot of spyware cookies, in the name of .kaspersky.2o7.net, etc. (just check 2o7.net, looks like it's a legit ad company) I tried comparing DNS lookups from the infected systems and from systems in other physical locations; no DNS redirections, it seems. I used dd to copy the MBR and compared it with the MBR provided by the ms-sys package; no differences, so it's not infecting the MBR. No antivirus or rootkit scanner has been able to identify it. Most of them can't even find it. I tried scanning in-situ (normal mode), in safe mode, and booted from a Linux live CD. Scanners used: Avast, Sophos anti rootkit, Kaspersky TDSSKiller, GMER, RootkitRevealer, and many others. Kaspersky reported some unsigned system files that ought to be signed (e.g. tcpip.sys), and reported a number of MD5 mismatches. But otherwise couldn't identify anything based on signature. When running Sysinternals RootkitRevealer and Sophos AntiRootkit, CPU usage goes up to 100% and they get stuck. The rootkit is blocking them. When trying to run/install HiJackThis, RootkitRevealer and some other scanners, it tells me system security policy prevents running/installing them. The list of malicious activities goes on and on. Here's a sample of logs from all my scans. In particular, aswSnx.SYS, apnenfno.sys and PROCMON20.SYS have a huge number of hooks. It's hard to tell if the rootkit replaced legit program files like aswSnx.SYS (from Avast) and PROCMON20.SYS (from Sysinternals Process Monitor). I can't find out whether apnenfno.sys is from a legit program. Help to identify it is appreciated. Trend Micro RootkitBuster ------ [HIDDEN_REGISTRY][Hidden Reg Value]: KeyPath : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sptd\Cfg Root : 586bfc0 SubKey : Cfg ValueName : g0 Data : 38 23 E8 D0 BF F2 2D 6F ... 
ValueType : 3 AccessType: 0 FullLength: 61 DataSize : 32 [HOOKED_SERVICE_API]: Service API : ZwCreateMutant Image Path : C:\WINDOWS\System32\Drivers\aswSnx.SYS OriginalHandler : 0x8061758e CurrentHandler : 0xaa66cce8 ServiceNumber : 0x2b ModuleName : aswSnx.SYS SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwCreateThread Image Path : c:\windows\system32\drivers\apnenfno.sys OriginalHandler : 0x805d1038 CurrentHandler : 0xaa5f118c ServiceNumber : 0x35 ModuleName : apnenfno.sys SDTType : 0x0 [HOOKED_SERVICE_API]: Service API : ZwDeleteKey Image Path : C:\WINDOWS\system32\Drivers\PROCMON20.SYS OriginalHandler : 0x80624472 CurrentHandler : 0xa709b0f8 ServiceNumber : 0x3f ModuleName : PROCMON20.SYS SDTType : 0x0 HiJackThis ------ O23 - Service: JWAHQAGZ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\JWAHQAGZ.exe O23 - Service: LHIJ - Sysinternals - www.sysinternals.com - C:\DOCUME~1\jeff\LOCALS~1\Temp\LHIJ.exe Kaspersky TDSSKiller ------ 21:05:58.0375 3936 C:\WINDOWS\system32\ati2sgag.exe - copied to quarantine 21:05:59.0217 3936 ATI Smart ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0342 3936 C:\WINDOWS\system32\BUFADPT.SYS - copied to quarantine 21:05:59.0856 3936 BUFADPT ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:05:59.0965 3936 C:\Program Files\CrashPlan\CrashPlanService.exe - copied to quarantine 21:06:00.0152 3936 CrashPlanService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0246 3936 C:\WINDOWS\system32\epmntdrv.sys - copied to quarantine 21:06:00.0433 3936 epmntdrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0464 3936 C:\WINDOWS\system32\EuGdiDrv.sys - copied to quarantine 21:06:00.0526 3936 EuGdiDrv ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:00.0604 3936 C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe - copied to quarantine 21:06:01.0181 3936 FLEXnet Licensing Service ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0321 3936 C:\Program Files\AddinForUNCFAT\UNCFATDMS.exe - copied to quarantine 21:06:01.0430 3936 OTFSDMS ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0492 3936 C:\WINDOWS\system32\DRIVERS\tcpip.sys - copied to quarantine 21:06:01.0539 3936 Tcpip ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0601 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - copied to quarantine 21:06:01.0664 3936 HKLM\SYSTEM\ControlSet003\services\TULPUWOX - will be deleted on reboot 21:06:01.0664 3936 C:\DOCUME~1\jeff\LOCALS~1\Temp\TULPUWOX.exe - will be deleted on reboot 21:06:01.0664 3936 TULPUWOX ( UnsignedFile.Multi.Generic ) - User select action: Delete 21:06:01.0757 3936 C:\WINDOWS\system32\Drivers\usbaapl.sys - copied to quarantine 21:06:01.0866 3936 USBAAPL ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:01.0913 3936 C:\Program Files\VMware\VMware Player\vmware-authd.exe - copied to quarantine 21:06:02.0443 3936 VMAuthdService ( UnsignedFile.Multi.Generic ) - User select action: Quarantine 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0443 3936 vmount2 ( UnsignedFile.Multi.Generic ) - User select action: Skip 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - skipped by user 21:06:02.0459 3936 vstor2 ( UnsignedFile.Multi.Generic ) - User select action: Skip

    Read the article

  • CentOS - Add additional hard drive raid arrays on Dell Perc 5/i card

    - by Quanano
    We have a Dell Poweredge 2900 system with Dell Perc 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one raid array on this controller with a different controller and it is working fine. We are now trying to access the drives shown above and they are not being shown in /dev as sdb, etc. sda is the drive that we installed centos on and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for scsi devices then the perc and adaptec controllers are both found. sg0 is the CDROM and sg2 is the centos installed, however I think sg1 is the other drive but I cannot see anyway to mount the partitions, as only the drive is listed in /dev. Thanks. EXTRA INFO fdisk -l: Disk /dev/sda: 72.7 GB, 72746008576 bytes 255 heads, 63 sectors/track, 8844 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x11e3119f Device Boot Start End Blocks Id System /dev/sda1 * 1 64 512000 83 Linux Partition 1 does not end on cylinder boundary. /dev/sda2 64 8845 70528000 8e Linux LVM Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes 255 heads, 63 sectors/track, 4186 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes 255 heads, 63 sectors/track, 2570 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes 255 heads, 63 sectors/track, 2023 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table These are all from the install hdd not the additional hard drives modprobe a320raid FATAL: Module a320raid not found. lsscsi -v: [0:0:0:0] cd/dvd TSSTcorp CDRWDVD TS-H492C DE02 /dev/sr0 dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0] [4:0:10:0] enclosu DP BACKPLANE 1.05 - dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0] [4:2:0:0] disk DELL PERC 5/i 1.03 /dev/sda dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0] . 
lsmod: Module Size Used by fuse 66285 0 des_generic 16604 0 ecb 2209 0 md4 3461 0 nls_utf8 1455 0 cifs 278370 0 autofs4 26888 4 ipt_REJECT 2383 0 ip6t_REJECT 4628 2 nf_conntrack_ipv6 8748 2 nf_defrag_ipv6 12182 1 nf_conntrack_ipv6 xt_state 1492 2 nf_conntrack 79453 2 nf_conntrack_ipv6,xt_state ip6table_filter 2889 1 ip6_tables 19458 1 ip6table_filter ipv6 322029 31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6 bnx2 79618 0 ses 6859 0 enclosure 8395 1 ses dcdbas 9219 0 serio_raw 4818 0 sg 30124 0 iTCO_wdt 13662 0 iTCO_vendor_support 3088 1 iTCO_wdt i5000_edac 8867 0 edac_core 46773 3 i5000_edac i5k_amb 5105 0 shpchp 33482 0 ext4 364410 3 mbcache 8144 1 ext4 jbd2 88738 1 ext4 sd_mod 39488 3 crc_t10dif 1541 1 sd_mod sr_mod 16228 0 cdrom 39771 1 sr_mod megaraid_sas 77090 2 aic79xx 129492 0 scsi_transport_spi 26151 1 aic79xx pata_acpi 3701 0 ata_generic 3837 0 ata_piix 22846 0 radeon 1023359 1 ttm 70328 1 radeon drm_kms_helper 33236 1 radeon drm 230675 3 radeon,ttm,drm_kms_helper i2c_algo_bit 5762 1 radeon i2c_core 31276 4 radeon,drm_kms_helper,drm,i2c_algo_bit dm_mirror 14101 0 dm_region_hash 12170 1 dm_mirror dm_log 10122 2 dm_mirror,dm_region_hash dm_mod 81500 11 dm_mirror,dm_log blkid: /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4" /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member" /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4" /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap" /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4" dmesg shows: megaraid_sas: fw state:c0000000 megasas: fwstate:c0000000, dis_OCR=0 scsi2 : LSI SAS based MegaRAID driver scsi 2:0:0:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:1:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:2:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:3:0: Direct-Access SEAGATE ST3146855SS S527 PQ: 0 ANSI: 5 scsi 2:0:4:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:5:0: Direct-Access HITACHI HUS154545VLS300 D590 PQ: 0 ANSI: 5 scsi 2:0:8:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 scsi 2:0:9:0: Direct-Access FUJITSU MBA3073RC D305 PQ: 0 ANSI: 5 i.e. the 3 RAID Arrays Seagate Hitatchi and Fujitsu hard drives respectively. FURTHER UPDATE I have installed the megaraid storage manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives, one raid array of 4 and one raid array of 2 disks. The other drives are listed as (Foreign) Unconfigured Good.
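Not from the original thread, just a hedged sketch of how a "(Foreign) Unconfigured Good" array is typically imported from the shell with LSI's MegaCli (the binary is often /opt/MegaRAID/MegaCli/MegaCli64; adapter number 0 and the resulting device /dev/sdb1 are assumptions):
    # See what foreign configurations the PERC is holding
    MegaCli64 -CfgForeign -Scan -a0
    # Import them so the arrays become normal virtual drives again
    MegaCli64 -CfgForeign -Import -a0
    # Rescan all SCSI hosts so the kernel creates the new block devices
    for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
    partprobe
    # Mount an NTFS partition read-only (needs the ntfs-3g package, e.g. from EPEL on CentOS)
    mkdir -p /mnt/ntfs && mount -t ntfs-3g -o ro /dev/sdb1 /mnt/ntfs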

    Read the article

  • Moving a Drupal site between Linux servers, best practice to avoid file-ownership problems

    - by zero
    I want to port over a Drupal commons 6x24 from a local LAMP-stack to a production webserver. Both systems run OpenSuse Linux. How do I do this, what are the most important steps. How should I handle file-ownership. It's important for me to have to have full control of the file ownership. If I use the wwwrun account, I frequently run into problems, due to a very strict webserver-admin. See for example the long history of looking for fixes and solutions see this thread and even more interesting see this very long and impressive thread here. All troubles I run into have to do with file-owernship and permissions. This is my current setup; Note: This was just a quick hacked installation - quick and dirty. Well my interest is after the general options i have in the port of a drupal from linux to linux linux-vi17:/srv/www/htdocs/com624 # ls -l insgesamt 224 -rwxrwxrwx 1 root www 45285 19. Jan 00:54 CHANGELOG.txt -rwxrwxrwx 1 root www 925 19. Jan 00:54 COPYRIGHT.txt -rwxrwxrwx 1 root www 206 19. Jan 00:54 cron.php drwxrwxrwx 2 root www 4096 19. Jan 00:54 includes -rwxrwxrwx 1 root www 923 19. Jan 00:54 index.php -rwxrwxrwx 1 root www 1244 19. Jan 00:54 INSTALL.mysql.txt -rwxrwxrwx 1 root www 1011 19. Jan 00:54 INSTALL.pgsql.txt -rwxrwxrwx 1 root www 47073 19. Jan 00:54 install.php -rwxrwxrwx 1 root www 15572 19. Jan 00:54 INSTALL.txt -rwxrwxrwx 1 root www 14940 19. Jan 00:54 LICENSE.txt -rwxrwxrwx 1 root www 1858 19. Jan 00:54 MAINTAINERS.txt drwxrwxrwx 3 root www 4096 19. Jan 00:54 misc drwxrwxrwx 35 root www 4096 19. Jan 00:54 modules drwxrwxrwx 4 root www 4096 19. Jan 00:54 profiles -rwxrwxrwx 1 root www 1470 19. Jan 00:54 robots.txt drwxrwxrwx 2 root www 4096 19. Jan 00:54 scripts drwxrwxrwx 4 root www 4096 19. Jan 00:54 sites drwxrwxrwx 7 root www 4096 19. Jan 00:54 themes -rwxrwxrwx 1 root www 26250 19. Jan 00:54 update.php -rwxrwxrwx 1 root www 4864 19. Jan 00:54 UPGRADE.txt -rwxrwxrwx 1 root www 294 19. Jan 00:54 xmlrpc.php linux-vi17:/srv/www/htdocs/com624 # thx to BetaRides answer here a quick overview on the drush functionality with rsync http://drush.ws/ core-rsync Rsync the Drupal tree to/from another server using ssh. Examples: drush rsync @dev @stage Rsync Drupal root from dev to stage (one of which must be local). drush rsync ./ @stage:%files/img Rsync all files in the current directory to the 'img' directory in the file storage folder on stage. Arguments: source May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. destination May be rsync path or site alias. See rsync documentation and example.aliases.drushrc.php. Options: --mode The unary flags to pass to rsync; --mode=rultz implies rsync -rultz. Default is -az. --RSYNC-FLAG Most rsync flags passed to drush sync will be passed on to rsync. See rsync documentation. --exclude-conf Excludes settings.php from being rsynced. Default. --include-conf Allow settings.php to be rsynced --exclude-files Exclude the files directory. --exclude-sites Exclude all directories in "sites/" except for "sites/all". --exclude-other-sites Exclude all directories in "sites/" except for "sites/all" and the site directory for the site being synced. Note: if the site directory is different between the source and destination, use --exclude-sites followed by "drush rsync @from:%site @to:%site" --exclude-paths List of paths to exclude, seperated by : (Unix-based systems) or ; (Windows). --include-paths List of paths to include, seperated by : (Unix-based systems) or ; (Windows). 
Topics: docs-aliases Site aliases overview with examples Aliases: rsync
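For the non-drush route, a hedged outline of the usual manual move; the database name, users and hosts below are placeholders, and the wwwrun:www owner is taken from the listing above:
    # On the dev box: dump the database and push the code tree
    mysqldump -u drupal -p drupal_db > com624.sql
    rsync -az /srv/www/htdocs/com624/ user@prodserver:/srv/www/htdocs/com624/
    scp com624.sql user@prodserver:/tmp/
    # On the production box: load the dump, then normalise ownership and permissions
    mysql -u drupal -p drupal_db < /tmp/com624.sql
    chown -R wwwrun:www /srv/www/htdocs/com624
    find /srv/www/htdocs/com624 -type d -exec chmod 755 {} \;
    find /srv/www/htdocs/com624 -type f -exec chmod 644 {} \;
    chmod -R 775 /srv/www/htdocs/com624/sites/default/files  # the files dir must stay writable by the web server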

    Read the article

  • Windows 8, NVIDIA graphics recognition fails

    - by Roy Grubb
    I just installed Windows 8 Pro OEM 64-bit (clean install) and it won't properly recognize my graphics adapter. When I installed Win8, it automatically installed the BasicDisplay.sys driver dated 6/21/2006. 6.2.9200.16384 (win8_rtm.120725-1247). Hardware - Mobo:MSi G41M-P33 Combo CPU:Intel CoreDuo 6600 Graphics:NVIDIA GeForce 9400GT *OS* - Windows 8 Pro 64-bit OEM The graphics adapter worked fine in Windows XP. The PC is a generic box, bought locally and its mobo failed recently, so I replaced it with the G41M. Microsoft wouldn't let me re-activate Windows XP with a different mobo, so I installed Win8, which appears to work except as described next. Win8 only partially recognizes the graphics adapter and won't allow NVIDIA latest driver installer to see that it's an NVIDIA card. As a result, OpenGL doesn't work, and this is needed by the software I most use. Other than that the graphics look OK. When I say 'partially recognizes', I mean that via the Control Panel, I can see that the adapter is described as NVIDIA, but the driver remains stuck at Microsoft Basic Display Adapter no matter what I try, including "Update driver..." in adapter properties. Display Screen Resolution Advanced Settings Adapter shows: Adapter Type: Microsoft Basic Display Adapter Chip Type: NVIDIA DAC Type: NVIDIA Corporation Bios Information: G27 Board - p381n17 Don't know what this means ... no mention of 9400GT Total Available Graphics Memory: 256 MB Dedicated Video Memory: 0 MB In fact the adapter has 512MB on-board video memory. System Video Memory: 0 MB Shared System Memory: 256 MB And Control Panel Device Manager Display adapters just shows Microsoft Basic Display Adapter. No other graphics adapter, and no unknown device or yellow question mark. What I have tried so far: 1. Cleared CMOS and reset. Updated BIOS and all mobo drivers as follows: 1st I used Driver Reviver to see if any driver updates were required. It found some but I didn't use that to get the drivers. Then I switched to MSi's own mobo driver utility Live Update 5. This also showed the board needed to update several so I used it to fetch the new drivers. After that it showed that everything was up to date and I checked with Driver Reviver again, which also reported no drivers now needed updating. Rebooted. Went to the NVIDIA site to get the latest graphics adapter driver. Their auto-detect "Option 2: Automatically find drivers for my NVIDIA products" said "The NVIDIA Smart Scan was unable to evaluate your system hardware. Please use Option 1 to manually find drivers for your NVIDIA products." So I downloaded 310.70-desktop-win8-win7-winvista-64bit-international-whql.exe, which lists 9400 GT under supported products, but when I run it, it says: "NVIDIA Installer cannot continue This graphics driver could not find compatible graphics hardware." Connected the display to the on-board Intel graphics (G41 Intel Express), removed the NVIDIA card and rebooted, changed to internal graphics in CMOS. Again it installs the MS Basic Display Adapter, and can't properly run my s/w that needs OpenGL. It runs on other machines with Intel Express graphics (WinXP and 7) Shut down and pulled out the power cord. Held start button to discharge all capacitors. Removed and re-inserted NVIDIA adapter in PCI-E slot and made sure properly seated. Connected the monitor to the card, screwed plug to socket. Reconnected power cord. Started and checked in BIOS that Primary Graphics Adapter was set to PCI-E. Started Windows. 
Uninstalled the MS Basic Display Adapter in Device Manager. The screen blanks briefly, then reappears. No graphics adapter entry was then visible in Device Manager. Restarted the PC. The MS Basic Display Adapter is visible again in Device Manager. Clicked View > Show hidden devices in Device Manager. No other graphics adapter appears, no unknown devices. Rebooted. Tried Scan for hardware changes. None detected. Tried right-clicking the MS Basic Display Adapter > Properties > Driver > Update Driver... > Search automatically. It replied that it had determined the driver was up to date. I checked that there were no graphics-driver-related entries in Programs and Features that I could delete (none). Searched for any other drivers with nvidia in their name and deleted them, just keeping the 306.97 installer exe file. Did a Windows Update. Ran GPU-Z, which shows (main items): Microsoft Basic Display Adapter, GPU G72, BIOS 5.72.22.76.88, Device ID 10DE - 01D5, DDR2, Bus Width 32 Bit, Memory size 64MB, Driver Version nvlddmkm 6.2.9200.16384 (ForceWare 0.00) / Win8 64, NVIDIA SLI Unknown; in the drop-down at the foot, "Microsoft Basic Display Adapter" is the only option. If I swap hard disks in that machine to one with an Ubuntu 10.04 installation (originally installed on the same PC), lspci shows "VGA compatible controller as NVIDIA Corporation Device 01d5 (rev a1) (prog-if 00 [VGA controller])" and "kernel driver in use: nvidia". I'm out of ideas for new things to try and would be really grateful for suggestions. Thanks!

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address that I have created an A record for git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to be git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following: Checking Environment ... Git configured for git user? ... yes Has python2? ... yes python2 is supported version? ... yes Checking Environment ... Finished Checking GitLab Shell ... GitLab Shell version >= 1.7.4 ? ... OK (1.7.4) Repo base directory exists? ... yes Repo base directory is a symlink? ... no Repo base owned by git:git? ... yes Repo base access is drwxrws---? ... yes update hook up-to-date? ... yes update hooks in repos are links: ... can't check, you have no projects Running /home/git/gitlab-shell/bin/check Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED) from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open' from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect' from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout' from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect' from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start' from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start' from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get' from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check' from /home/git/gitlab-shell/bin/check:11:in `<main>' gitlab-shell self-check failed Try fixing it: Make sure GitLab is running; Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml Please fix the error above and rerun the checks. Checking GitLab Shell ... Finished Checking Sidekiq ... Running? ... yes Number of Sidekiq processes ... 1 Checking Sidekiq ... Finished Checking GitLab ... Database config exists? ... yes Database is SQLite ... no All migrations up? ... yes GitLab config exists? ... yes GitLab config outdated? ... no Log directory writable? ... yes Tmp directory writable? ... yes Init script exists? ... yes Init script up-to-date? ... yes projects have namespace: ... can't check, you have no projects Projects have satellites? ... can't check, you have no projects Redis version >= 2.0.0? ... yes Your git bin path is "/usr/bin/git" Git version >= 1.7.10 ? ... yes (1.8.3) Checking GitLab ... Finished It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
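A hedged first check for the Errno::ECONNREFUSED above is whether the Rails/Unicorn side of GitLab is actually up and listening where gitlab-shell expects it; something along these lines (the init script name, log path and check URL assume the stock 6.2 installation guide and may differ):
    # Is anything listening on the web and Unicorn ports?
    sudo netstat -tlnp | grep -E ':80 |:8080 '
    # Restart GitLab and look for startup errors from Unicorn
    sudo service gitlab restart
    sudo tail -n 50 /home/git/gitlab/log/unicorn.stderr.log
    # Hit the URL built from gitlab_url in gitlab-shell/config.yml directly
    curl -v http://git.mydomain.com/api/v3/internal/check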

    Read the article

  • System Center 2012 Service Manager change request status stuck at new

    - by Chuck Herrington
    The guy that built and setup this system left rather abruptly and I've taken over. My current issues are I have several change requests that are stuck at New. They do not move to Pending or In Progress. The system is not sending emails when incidents are getting assigned to people. This used to work on this system. I have done a lot of searching and the usual solution to this of stopping and restarting the system center services does not help. Can anyone give me any ideas of where else to look? Update: From all the searching I have done it seemed like I was at the point of re-installing. My initial installation of SCSM 2012 was on a machine that was upgraded from SCSM 2010 and also hosted SCCM 2007 and WSUS. We decided to give it a fresh start on a new server by installing a second instance of the SCSM server on a brand new 2008 R2 server then promoting the new server to the workflow master using the procedures outlined in this article - Dealing with Multiple management Servers. I've gotten to the point where we have both the old and the new server up and the new server has been promoted. I had hopped to get spammed by emails all the sudden due to the workflow taking off, but no such luck. Once all the clients are reconfigured to point to the new server we still plan to decommission the old server but at this point it seems to be that the problem is in the database. Short of any other input from the community, my next plan is to install a 180 day trial on a test server, complete with a separate database so that I can do a side by side comparison between a completely fresh install and what I have now and see if I can find any differences. While that install is running I also plan on investigating the event logs to see if there is anything in there that can shed some light on what is happening on the new server. Update 2: So I've now got a test SCSM server up with a completely fresh install including Database and it seems to be able to transition Change Requests from New to In Progress. I'm attempting to find differences between the two. Stay Tuned! Update 3: In looking through the event log on the new SCSM machine i discovered: Log Name: Operations Manager Source: OpsMgr Root Connector Date: 10/9/2013 3:48:18 PM Event ID: 28000 Task Category: None Level: Warning Keywords: Classic User: N/A Computer: scsm02 Description: The Root connector received an exception from the SDK Service while submitting task status: Cannot set availability on a health service that doesn't exist. This lead me to Event ID 2800 logged after installing secondary server for System Center 2012 Service Manager SP1. I contacted MS to obtain the hotfix, BIG warning here, turns out the hotfix is not so "hot". In order to apply this hotfix, you have to uninstall then reinstall using the files they supply. :( This is where I am at now ... Update 4: Not much luck after the re-install. The errors in the event log have gone away on the new server but the workflows still aren't running and neither the event log nor the workflow status screen seem to indicate why. I've done a comparison of the Activity and the Change Request Event Workflows and I've removed everything from the production system that is not in my fresh test system (which is everything), shut down the services, cleared out the cache folders and restarted the services and still no joy. 
At the moment the only thing I can think to do is either a) nuke the entire system, including the database, and start over, losing all of our data in the process, or b) contact MS (which is probably going to cost us a butt load of money and time, only for them to advise us to do the same thing in the end). Maybe more ideas will come after coffee ... No answers came after coffee. Attempting to contact MS. Managed to get to their first line of defense, gave them our SA number and someone is supposed to call me back. I am trying to log into my incident on their site to update my ticket with the link to this thread, but when I click on the link in the email they sent me it goes to a "Sorry, the page you requested is not available" page ... Linux is looking better and better all the time.

    Read the article

  • PhpMyAdmin Hangs On MySQL Error

    - by user75228
    I'm currently running PhpMyAdmin 4.0.10 (the latest version supporting PHP 4.2.X) on my Amazon EC2 connecting to a MySQL database on RDS. Everything works perfectly fine except actions that return a mysql error message. Whether I perform "any" kind of action that will return a mysql error, Phpmyadmin will hang with the yellow "Loading" box forever without displaying anything. For example, if I perform the following command in MySQL CLI : select * from 123; It instantly returns the following error : ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '123' at line 1 which is completely normal because table 123 doesn't exist. However, if I execute the exact same command in the "SQL" box in Phpmyadmin, after I click "Go" it'll display "Loading" and stops there forever. Has anyone ever encountered this kind of issue with Phpmyadmin? Is this a bug or I have something wrong with my config.inc.php? Any help would be much appreciated. I also noticed these error messages in my apache error logs : /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open /opt/apache/bin/httpd: symbol lookup error: /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so: undefined symbol: libiconv_open Below are my config.inc.php settings : <?php /* vim: set expandtab sw=4 ts=4 sts=4: */ /** * phpMyAdmin sample configuration, you can use it as base for * manual configuration. For easier setup you can use setup/ * * All directives are explained in documentation in the doc/ folder * or at <http://docs.phpmyadmin.net/>. * * @package PhpMyAdmin */ /* * This is needed for cookie based authentication to encrypt password in * cookie */ $cfg['blowfish_secret'] = 'something_random'; /* YOU MUST FILL IN THIS FOR COOKIE AUTH! */ /* * Servers configuration */ $i = 0; /* * First server */ $i++; /* Authentication type */ $cfg['Servers'][$i]['auth_type'] = 'cookie'; /* Server parameters */ $cfg['Servers'][$i]['host'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['connect_type'] = 'tcp'; $cfg['Servers'][$i]['compress'] = true; /* Select mysql if your server does not have mysqli */ $cfg['Servers'][$i]['extension'] = 'mysqli'; $cfg['Servers'][$i]['AllowNoPassword'] = false; $cfg['LoginCookieValidity'] = '3600'; /* * phpMyAdmin configuration storage settings. 
*/ /* User used to manipulate with storage */ $cfg['Servers'][$i]['controlhost'] = '*.rds.amazonaws.com'; $cfg['Servers'][$i]['controluser'] = 'pma'; $cfg['Servers'][$i]['controlpass'] = 'password'; /* Storage database and tables */ $cfg['Servers'][$i]['pmadb'] = 'phpmyadmin'; $cfg['Servers'][$i]['bookmarktable'] = 'pma__bookmark'; $cfg['Servers'][$i]['relation'] = 'pma__relation'; $cfg['Servers'][$i]['table_info'] = 'pma__table_info'; $cfg['Servers'][$i]['table_coords'] = 'pma__table_coords'; $cfg['Servers'][$i]['pdf_pages'] = 'pma__pdf_pages'; $cfg['Servers'][$i]['column_info'] = 'pma__column_info'; $cfg['Servers'][$i]['history'] = 'pma__history'; $cfg['Servers'][$i]['table_uiprefs'] = 'pma__table_uiprefs'; $cfg['Servers'][$i]['tracking'] = 'pma__tracking'; $cfg['Servers'][$i]['designer_coords'] = 'pma__designer_coords'; $cfg['Servers'][$i]['userconfig'] = 'pma__userconfig'; $cfg['Servers'][$i]['recent'] = 'pma__recent'; /* Contrib / Swekey authentication */ // $cfg['Servers'][$i]['auth_swekey_config'] = '/etc/swekey-pma.conf'; /* * End of servers configuration */ /* * Directories for saving/loading files from server */ $cfg['UploadDir'] = ''; $cfg['SaveDir'] = ''; /** * Defines whether a user should be displayed a "show all (records)" * button in browse mode or not. * default = false */ //$cfg['ShowAll'] = true; /** * Number of rows displayed when browsing a result set. If the result * set contains more rows, "Previous" and "Next". * default = 30 */ $cfg['MaxRows'] = 50; /** * disallow editing of binary fields * valid values are: * false allow editing * 'blob' allow editing except for BLOB fields * 'noblob' disallow editing except for BLOB fields * 'all' disallow editing * default = blob */ //$cfg['ProtectBinary'] = 'false'; /** * Default language to use, if not browser-defined or user-defined * (you find all languages in the locale folder) * uncomment the desired line: * default = 'en' */ //$cfg['DefaultLang'] = 'en'; //$cfg['DefaultLang'] = 'de'; /** * default display direction (horizontal|vertical|horizontalflipped) */ //$cfg['DefaultDisplay'] = 'vertical'; /** * How many columns should be used for table display of a database? * (a value larger than 1 results in some information being hidden) * default = 1 */ //$cfg['PropertiesNumColumns'] = 2; /** * Set to true if you want DB-based query history.If false, this utilizes * JS-routines to display query history (lost by window close) * * This requires configuration storage enabled, see above. * default = false */ //$cfg['QueryHistoryDB'] = true; /** * When using DB-based query history, how many entries should be kept? * * default = 25 */ //$cfg['QueryHistoryMax'] = 100; /* * You can find more configuration options in the documentation * in the doc/ folder or at <http://docs.phpmyadmin.net/>. */ ?>
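The "undefined symbol: libiconv_open" lines in the Apache error log suggest PHP's iconv extension was built against GNU libiconv but that library isn't being loaded into httpd; a hedged way to verify from the shell (the paths are the ones shown in the log and may differ on your box):
    # Does the extension reference the GNU libiconv symbol, and can the linker resolve it?
    nm -D /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so | grep iconv_open
    ldd /opt/php/lib/php/extensions/no-debug-non-zts-20060613/iconv.so
    # Is GNU libiconv present at all?
    ls -l /usr/local/lib/libiconv.so* 2>/dev/null
    # If it exists but isn't picked up, one common workaround is to preload it in Apache's
    # startup environment (e.g. its envvars file) before restarting httpd
    export LD_PRELOAD=/usr/local/lib/libiconv.so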

    Read the article

  • T60 Screen/LCD goes black after some minutes with a high-pitched sound rising and fading

    - by edelwater
    Just now my T60 screen got "black" (so no display). On my second monitor: no problems so the VGA output works. Symptom: Screen blanks / no display, but it works on the second monitor Steps to reproduce: - boot - wait (it does not matter what you do you do not have to login or anything) - (now the monitor of the laptop slowly begins to make a ssssssssHHHHHHHHHHHHHHHHHWOEOEssssssss noise of about 10 seconds) - right after the sounds ends, the monitor gets black. Sometimes it seems to be the same each time. Software: Installed no new software before/after, running ZoneAlarm and antivirus. Other: It does not feel hot in any place, there don't seem to be running processes with strange behaviour. Warranty: Out of warranty What was I doing: Typing text on a website and doing some PHP coding in a text editor. What can I do here other than buy a new laptop? Does it sound familiar to known cases? Update 1: Exactly the same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-Screen-Blackout/m-p/288772 and the second poster (garyj), http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/Black-Screen-on-T60/m-p/235053#M48627 And here: "I have that same problem. I replaced the CCRL on mine and it works fine when the screen is not screwed in. Once the frame of the LCD screen (metal portion) touches the metal on the laptop which holds the screen the screen goes black. If the metal is touching the screen when you boot up it boots up with it being very dimmly lit. " from http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-screen-problems/m-p/205047#M44995 (it seems replacing the LCD display is no use, he tried it three times). Same problem: http://forums.lenovo.com/t5/T61-and-prior-T-series-ThinkPad/T60-black-screen/m-p/80604#M25914 Hmmm... not handy 3 or 4 months ago I ordered and installed a new fan. Now the LCD. Which does not seem the core issue but some electric issue so it seems replacing the LCD is not the thing to do here. If it is not the LCD that needs to be replaced (see other threads), which parts can I order to fix this? Is there any information which could lead me to identify the issue? I have read replacing the "inverter" AND the "backlightning" would that make sense? Update 2: I replaced the inverter with another inverter, but IO have the same problem. I DID notice that the inverter is the component that makes the sssssssssssssHHHHHHHHHH sound AND it becomes very hot in a few seconds. (So both the old and the test one) The problem is hmmm wat is then the thing that makes the inverter hot by (assumption) after which it shuts itself down. Is it either the input or the output? The output seems to me not, because the screen seems to function so it must be the electricity coming in. But what causes it to become so hot would it be the VGA card outputting some unusual high voltage seems unlikely? I am looking for the component to order / replace update 3: Great news. Ewendish gave me the hint to look in the BIOS. While I was in the BIOS I noticed that the screen did not switch off and there was not a high pitched sound. So I lowered some settings in the BIOS. I then noticed that with brightness turned to 0 (via FN End), it does not make a high pitched sound and does not turn off, with brightness turned up just three "stripes" it starts making the sound. So I could from now on work under lowest brightness modus or... see where the problem lies. 
So, as stated below, it seems to lie with either power management or the display drivers / ATI Catalyst settings / Windows display settings. I'm trying to see where it lies, but I will Google a bit first. Update 4: I wiped the Windows XP installation clean and installed Windows 7 on it. Unfortunately the problem remains: as soon as the brightness goes up, the screen starts hissing. This means... back to the original thought: it probably IS a hardware problem. Although... again... if it is NOT the inverter, what is it? Could it be the backlight component? I could try to swap that with another T60... but this is quite tricky.

    Read the article

  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client. Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM: for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync done I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well: 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s ... but after 7-8 of these, it then prints 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GG) copied, 82.9779 s, 25.9 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.) What could be causing this? I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's besides the point for this question [unless it's actually a hardware problem].) ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.) ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all. 
ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south. ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: When things go "bad", iostat -d -m -x 1 shows this:
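To capture the iostat comparison as plain text while the dd loop runs, something like the following works (the device name sda is the LVM-backed disk the guest sees, per the fdisk output convention used above):
    # Terminal 1: log extended per-device stats once a second
    iostat -d -m -x 1 sda | tee /tmp/iostat-during-dd.log
    # Terminal 2: the same direct-I/O write test as above
    dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct
    # Afterwards, check whether await and %util spike when throughput collapses
    grep -A 1 '^Device' /tmp/iostat-during-dd.log | tail -n 40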

    Read the article

  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again? In particular, here's what I tried: I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR block 0 for the partitions layout (no more than 4 partitions). Now I format and partition a new, larger, disk. There I make room for the NTFS partition and copy the contents from the old disk's NTFS Windows partition into. And I make the partition "active". However, when I try to boot from this disk, I get a "read error" message immediately and the booting stops, the exact text is: A disk read error occurred Press Ctrl+Alt+Del to restart I verified that both disks have the same boot sector code in block 0. It seems to me that something else might need updating. I guess that somewhere there's a absolute block reference that I need to update, probably pointing to the next level loader or to the NT kernel. Update: I found this article going quite into the depth of what I want to know. However, it says to modify boot.ini, but I have Windows 7 installed here, where such things appear to have changed: No boot.ini but a folder called System Volume Information with GUID and other data in it that sounds related to my problem. Going to keep digging... Update 2: Thanks to the terrible looking but very informative website by starman, I was able to figure out the first step: The NTFS boot sector has a field for "hidden" sectors. This feld has to contain the sector number of the boot sector. This solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. Looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing this "BOOTMGR is missing" problem, but none seem to know what goes on behind the scenes. Kind of like suggesting to re-install Windows when there's a little problem with it. At least, those fixes seem to work, mostly involving the Bcdedit and Bootrec tools. Now, who knows what they do, especially the latter, in regards to a moved partition? Update 3: After lots of trial-and-error attempts, I believe now that the solution lies in the BCD-Template registry file, residing usually inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains relevant to the above question. Any pointers to the contents of this registry are welcome. Update 4: Turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot.exe writes. I had previously already checked if it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. Parititon map doesn't get changed, either. So what is it that bcdboot modifies besides the BCD registry? Any tips on how I can trace that? Are there low level tools that show me what files a program writes to? Update 5: The answer seems to be: c:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later... 
Update 6: It seems to be an important detail that I had originally two partitions created when I installed Windows 7: A small partition of 204800 sectors which appears to be a bootstrap partition, followed by the actual, large, partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger, disk, I had kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file on that partition. That file (and folder) did originally only exist on the smaller partition. Hence, this problem may be more complicated in my case as one partition (the boot strapper) referred to another partition (the one containing the OS), whereas other people may only have to deal with one partition containing both, and maybe there the solution is simpler. Update 7: Found one more detail: The \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test if there's also an absolute block reference stored in there.

    Read the article

  • Citrix Xen VM's lose networking

    - by Ash
    My client has a XenServer 6.0.2 installation with 2 Window Server 2008 R2 virtual machines. Whenever the virtual machines are rebooted they lose their IP settings (IP address, subnet, gateway). Each time after a reboot I need to login to each VM via XenCenter and re-apply the required static IP settings. This causes issues with connected iSCSI drives within each VM - drives need to be reconnected after each reboot. For example, a network adapter has the following settings pre-reboot: Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0 Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3 IPv4 Address. . . . . . . . . . . : 10.101.0.101(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 10.101.0.10 DNS Servers . . . . . . . . . . . : 10.101.0.100 NetBIOS over Tcpip. . . . . . . . : Enabled Post-reboot: Description . . . . . . . . . . . : Citrix PV Ethernet Adapter #0 Physical Address. . . . . . . . . : C6-FB-A2-4F-2C-F3 Autoconfiguration IPv4 Address. . : 169.254.153.174(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : DNS Servers . . . . . . . . . . . : 10.101.0.100 NetBIOS over Tcpip. . . . . . . . : Enabled Under XenCenter -- Virtual Network Interfaces, each adapter is set to a static MAC address (i.e. "Use this MAC address"). I have tried the following commands within one VM but this had no effect: netsh winsock reset catalog netsh int ip reset Can someone please help?

    Read the article

  • SignalR cannot read property client of undefined

    - by polonskyg
    I'm trying to add SignalR to my project (ASPNET MVC 4). But I can't make it work. In the below image you can see the error I'm receiving. I've read a lot of stackoverflow posts but none of them is resolving my issue. This is what I did so far: 1) Ran Install-Package Microsoft.AspNet.SignalR -Pre 2) Added RouteTable.Routes.MapHubs(); in Global.asax.cs Application_Start() 3) If I go to http://localhost:9096/Gdp.IServer.Web/signalr/hubs I can see the file content 4) Added <modules runAllManagedModulesForAllRequests="true"/> to Web.Config 5) Created folder Hubs in the root of the MVC application 6) Moved jquery and signalR scripts to /Scripts/lib folder (I'm not using jquery 1.6.4, I'm using the latest) This is my Index.cshtml <h2>List of Messages</h2> <div class="container"> <input type="text" id="message" /> <input type="button" id="sendmessage" value="Send" /> <input type="hidden" id="displayname" /> <ul id="discussion"> </ul> </div> @section pageScripts { <!--Reference the SignalR library. --> <script src="@Url.Content("~/Scripts/jquery.signalR-1.0.0-rc1.min.js")" type="text/javascript"></script> <!--Reference the autogenerated SignalR hub script. --> <script type="text/javascript" src="~/signalr/hubs"></script> <script src="@Url.Content("~/Scripts/map.js")" type="text/javascript"></script> } This is my IServerHub.cs file (located inside Hubs folder) namespace Gdp.IServer.Ui.Web.Hubs { using Microsoft.AspNet.SignalR.Hubs; [HubName("iServerHub")] public class IServerHub : Hub { public void Send(string name, string message) { Clients.All.broadcastMessage(name, message); } } } And this is map.js $(function () { // Declare a proxy to reference the hub. var clientServerHub = $.connection.iServerHub; // Create a function that the hub can call to broadcast messages. clientServerHub.client.broadcastMessage = function (name, message) { $('#discussion').append('<li><strong>' + name + '</strong>:&nbsp;&nbsp;' + message + '</li>'); }; // Get the user name and store it to prepend to messages. $('#displayname').val(prompt('Enter your name:', '')); // Set initial focus to message input box. $('#message').focus(); // Start the connection. $.connection.hub.start().done(function () { $('#sendmessage').click(function () { // Html encode display name and message. var encodedName = $('<div />').text($('#displayname').val()).html(); var encodedMsg = $('<div />').text($('#message').val()).html(); // Call the Send method on the hub. clientServerHub.server.send(encodedName, encodedMsg); // Clear text box and reset focus for next comment. $('#message').val('').focus(); }); }); }); The DLL's I see references for SignalR are: Microsoft.AspNet.SignalR.Core Microsoft.AspNet.SignalR.Owin Microsoft.AspNet.SignalR.SystemWeb Any ideas how to get it work? Should I make any change because the scripts are in /Script/lib folder? NOTE I'm following the instruction found here on how to set up Windsor Castle to make it work with SignalR, and again, seems that the proxy cannot be created and I'm getting the same error: Cannot read property client of undefined meaning that the proxy to the hub was not created This is how I have it in the server public class IncidentServerHub : Hub and like this in the client var clientServerHub = $.connection.incidentServerHub; Again, I can see the dynamically created file here: /GdpSoftware.Server.Web/signalr/hubs So, Why the proxy is not created? Thanks in advance!!! Guillermo.

    Read the article

  • Error Installing COM+ (Error Code: 80131501)

    - by Regina Foo
    I've written a class library that reads from an xml file and return the result as a string. But when I want to install it as a COM+ component, an error occurred (Error Code: 80131501). I checked the event log and the details of the error is: Installation of 'C:\Users\User\Documents\Visual Studio 2005\Projects\InteropSOA\InteropSOA\bin\Debug\InteropSOA.dll' into '{28E82165-AD74-4E16-90C9-0C5CE7DA97AA}' failed with an exception: System.EnterpriseServices.RegistrationException: FATAL: Could not find component 'InteropSOA.ConfigReader' we just installed. at System.EnterpriseServices.RegistrationDriver.InstallAssembly(RegistrationConfig regConfig, Object obSync) at System.EnterpriseServices.RegistrationHelper.InstallAssemblyFromConfig(RegistrationConfig& regConfig) at System.EnterpriseServices.RegistrationHelper.InstallAssembly(String assembly, String& application, String partition, String& tlb, InstallationFlags installFlags) at System.EnterpriseServices.Internal.ComManagedImportUtil.InstallAssembly(String asmpath, String parname, String appname) Below are the steps I've done while developing the class library: Added "System.EnterpriseServices" to Reference. Imported the reference to the class. Declared the class as "ServicedComponent". Set project properties ("Make assembly COM-visible" checked, "Register for COM Interop" checked, Signed the assembly with a strong key file name.) Here are my codes: using System; using System.Collections.Generic; using System.Text; using System.Xml; using System.Xml.XPath; using System.EnterpriseServices; namespace InteropSOA { public class ConfigReader : ServicedComponent { // xml file name private string strFileName; // type of request private string strRequest = ""; // response string private string strResponse = ""; // declarations for xPath private XPathDocument doc; private XPathNavigator nav; private XPathExpression expr; private XPathNodeIterator iterator; private XmlTextReader reader; private XmlDocument xmlDoc; public ConfigReader(string strFile, string request) { this.strFileName = strFile; this.strRequest = request; } public ConfigReader() { //default contructor } // reader for console program public void ReadXML() { doc = new XPathDocument(strFileName); nav = doc.CreateNavigator(); // compile xPath expression expr = nav.Compile("/Msg/" + strRequest + "/*"); iterator = nav.Select(expr); // interate on the node set try { while (iterator.MoveNext()) { XPathNavigator nav2 = iterator.Current.Clone(); strResponse += nav2.Value + "|"; } } catch (Exception ex) { Console.WriteLine(ex.Message); } strResponse = strResponse.Substring(0, strResponse.Length-1); Console.WriteLine("Response string = " + strResponse); } public void WriteXML(string strRequest, string strElement, string strValue) { reader = new XmlTextReader(strFileName); xmlDoc = new XmlDocument(); xmlDoc.Load(reader); reader.Close(); XmlNode node; XmlElement root = xmlDoc.DocumentElement; node = root.SelectSingleNode("/Msg/" + strRequest + "/" + strElement); node.InnerText = strValue; xmlDoc.Save(strFileName); } // reader for ASP.NET public string ReadXMLElement() { doc = new XPathDocument(strFileName); nav = doc.CreateNavigator(); // compile xPath expression expr = nav.Compile("/Msg/" + strRequest + "/*"); iterator = nav.Select(expr); // interate on the node set try { while (iterator.MoveNext()) { XPathNavigator nav2 = iterator.Current.Clone(); strResponse += nav2.Value + "|"; } } catch (Exception ex) { Console.WriteLine(ex.Message); } strResponse = strResponse.Substring(0, strResponse.Length 
- 1); return strResponse; } } }

    Read the article

  • How can I add a checkbox for optional files during install? (Inno Setup)

    - by Globulorozzo
    i'm trying to make a custom checkbox in my custom page (because is a one page installer), is needed only a checkbox without dialogs or anything, the installer that i'm trying to compile is very linear and simple. i want to bind "FILE3.EXE" on a checkbox in this way: if checkbox is checked copy the file (FILE3.EXE) in DestDir, otherwise if checkbox is unchecked skip the file (FILE3.EXE) during installation. This is the code that i use, obviously the checkbox code is missing because i'm not able to do that [Files] Source: FILE1.EXE; DestDir: {app}; Source: FILE2.EXE; DestDir: {app}; Source: FILE3.EXE; DestDir: {app}; //OPTIONAL [Code] procedure ExitProcess(uExitCode: UINT); external '[email protected] stdcall'; var MainPage : TWizardPage; FolderToInstall : TEdit; InstallLocation : String; procedure CancelClick(Sender: TObject); begin if ExitSetupMsgBox then begin ExitProcess(0); end; end; procedure BrowseClick(Sender : TObject); var Dir : String; begin Dir := FolderToInstall.Text; if BrowseForFolder('Browse',Dir,false) then FolderToInstall.Text := Dir; WizardForm.DirEdit.Text := Dir; end; procedure InitializeWizard(); var LabelFolder : TLabel; begin WizardForm.ClientWidth:=ScaleX(550); WizardForm.ClientHeight:=ScaleY(250); WizardForm.Center; WizardForm.OuterNotebook.Hide; WizardForm.InnerNotebook.Hide; WizardForm.Bevel.hide; Wizardform.NextButton.top := 180; Wizardform.NextButton.Left := 470; WizardForm.NextButton.Height := 25; WizardForm.StatusLabel.Parent := WizardForm; WizardForm.StatusLabel.Left := 6; WizardForm.StatusLabel.Top := 206; WizardForm.FilenameLabel.Parent := WizardForm; WizardForm.FilenameLabel.Left := 87; WizardForm.FilenameLabel.Top := 206; WizardForm.FilenameLabel.Width := ScaleX(383); WizardForm.ProgressGauge.Parent := WizardForm; WizardForm.ProgressGauge.Left :=5; WizardForm.ProgressGauge.Top := 222; WizardForm.ProgressGauge.Width := ScaleX(460); MainPage := CreateCustomPage(wpWelcome,'',''); LabelFolder := TLabel.Create(MainPage); LabelFolder.Parent := WizardForm; LabelFolder.Top := 164; LabelFolder.Left := 6; LabelFolder.Caption := 'Directory:' FolderToInstall := TEdit.Create(MainPage); FolderToInstall.Parent := WizardForm; FolderToInstall.Top := 182; FolderToInstall.Left := 85; FolderToInstall.Width := 380; FolderToInstall.Text := WizardDirValue; FolderToInstall.ReadOnly := True; end;

    Read the article

  • "(401)Authorization Required" when making a web service call using Axis

    - by Arun P Johny
    Hi, I'm using apache axis to connect to my sugar crm instance. When I'm trying to connect to the instance it is throwing the following exception Exception in thread "main" AxisFault faultCode: {http://xml.apache.org/axis/}HTTP faultSubcode: faultString: (401)Authorization Required faultActor: faultNode: faultDetail: {}:return code: 401 &lt;!DOCTYPE HTML PUBLIC &quot;-//IETF//DTD HTML 2.0//EN&quot;&gt; &lt;html&gt;&lt;head&gt; &lt;title&gt;401 Authorization Required&lt;/title&gt; &lt;/head&gt;&lt;body&gt; &lt;h1&gt;Authorization Required&lt;/h1&gt; &lt;p&gt;This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.&lt;/p&gt; &lt;/body&gt;&lt;/html&gt; {http://xml.apache.org/axis/}HttpErrorCode:401 (401)Authorization Required at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:744) at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:144) at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32) at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118) at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83) at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165) at org.apache.axis.client.Call.invokeEngine(Call.java:2784) at org.apache.axis.client.Call.invoke(Call.java:2767) at org.apache.axis.client.Call.invoke(Call.java:2443) at org.apache.axis.client.Call.invoke(Call.java:2366) at org.apache.axis.client.Call.invoke(Call.java:1812) at org.beanizer.sugarcrm.SugarsoapBindingStub.get_server_info(SugarsoapBindingStub.java:1115) at com.greytip.sugarcrm.GreytipCrm.main(GreytipCrm.java:42) This basically says that I do not have the authorization to the resource. The same code is working fine in my testing environment. Sugarsoap service = new SugarsoapLocator(); SugarsoapPortType port = service.getsugarsoapPort(new java.net.URL( SUGAR_CRM_LOCATION + "/soap.php")); System.out.println(port.get_server_info().getVersion()); User_auth userAuth = new User_auth(); userAuth.setUser_name("user_name"); MessageDigest md = MessageDigest.getInstance("MD5"); String password = getHexString(md.digest("password".getBytes())); userAuth.setPassword(password); // userAuth.setVersion("0.1"); Entry_value login = port.login(userAuth, "myAppName", null); String sessionID = login.getId(); Above code is used to connect to the Sugar CRM installation. here line "System.out.println(port.get_server_info().getVersion());" is throwing the exception. One difference I noticed between the test and production environment is when I used the soap url in the browser the production site pops up a 'Authentication Required' popup. When I gives my proxy username and password in this popup, it shows the soap request details. The same is applicable for the login url also. First it will ask for the 'Authentication' then it will take to the sugar crm login page? Is it a server security setting? If it is then how to set this user name and password using java in a web service call. The authentication required popup is same as the one which comes when we try to access the tomcat manager through a browser. Thanks

    Read the article

< Previous Page | 679 680 681 682 683 684 685 686 687 688 689 690  | Next Page >