Search Results

Search found 20946 results on 838 pages for 'at command'.

  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
    Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent but happens from once every couple of days to a few times a day. So the long and short of it is this:

    - Aluminum MacBook (non-Pro), 2.4GHz Core 2 Duo, 4GB DDR3
    - I am using 10.6.6, but I have had this issue since 10.6.0
    - It happens in Firefox, Chrome, and Safari
    - I have flushed my DNS (using the command 'blablabla flush')
    - I am using custom DNS servers, which I hoped would fix it, but it had no effect*
    - I am running Apache currently, but haven't been for most of the time
    - I've reformatted multiple times, always experiencing the issue
    - I am on Cox cable internet, with a Motorola Surfboard and a Belkin F6D4230-4 v1 (pre?) N wireless router. I've put the router in G only, N only, and G+N, to no effect
    - It seems to be domain dependent, as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse
    - My PowerBook G4 with Leopard, other Windows XP laptops, and my wired Win7 desktop do not suffer from the issue.

    *I recently started using these to escape the awful Cox redirect page on timeouts.

    I'm almost positive the issue has happened on other networks, but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh one time) but incredibly annoying, since I'm constantly reading documentation from a large variety of sites.

    EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures, but one day Google.com will refuse to load while reddit.com will, and the next day vice versa. Keep in mind that waiting for a timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc., and it will just quit loading Jquery.com (for example) until I sit and wait for a timeout. /EDIT

    This is my last resort. Please, someone, tell me what is happening. Thank you.
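    On 10.6 the DNS cache is flushed with dscacheutil rather than the older lookupd; assuming that is the command the poster is alluding to, a minimal check from Terminal would be:

        # Flush the Directory Services DNS cache (Mac OS X 10.5/10.6)
        dscacheutil -flushcache

        # Time a fresh lookup against the configured resolvers
        time dscacheutil -q host -a name jquery.com

    If the first lookup after a quiet period takes around 20 seconds and an immediate repeat is instant, that points at the resolver path rather than the browsers.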

  • OCZ Vertex 2 not recognized by Ubuntu installer

    - by Zsub
    As I boot into the Ubuntu 10.10 (or 11.04, doesn't matter) live environment or installer, it just refuses to recognise my Vertex 2. It reports the disk as ATA and not supporting SMART, shows no serial number, and doesn't list the size correctly. All fdisk tells me is "Unable to read /dev/sda" (it's the only storage in the PC). I'm now running a temporary install of Windows 7 off of it, which worked like a charm, so where am I going wrong with Ubuntu...

    Specs:

    - Asus M4N68T-M LE V2 (BIOS 0702, most recent)
    - OCZ Vertex 2 SSD 60 GB
    - AMD Athlon II X4 640
    - Patriot PSD34G13332 4GB DDR3 RAM (two banks)

    EDIT: I installed a second drive, installed Ubuntu on that and booted; it recognised the SSD just fine. I'm now trying to apt-get upgrade the live environment. I wonder if there is any way to sort of install Ubuntu from Ubuntu (I boot into the working install on the other drive, install it on the SSD, and then boot from the SSD).

    EDIT2: OK, so that doesn't work. The install detects the SSD; however, it cannot format it.

    EDIT3: After a fresh boot I can read out SMART data and even perform a read benchmark, but if I try to format it, or do a write benchmark, it'll crap out, and after that it says SMART is not supported. So basically it seems I can't write to the disk, as it will stop working when I do. I will try to run repeated read benchmarks to see if that has any effect.

    EDIT4: I'm running several read benchmarks on the drive right now; they give results that are to be expected from an SSD. If the read benches don't fail, I can use fdisk on the disk, but it is now stuck trying to re-read the partition table after issuing the 'w' command.

    EDIT5: Parted Magic did recognize the drive, and with hdparm -I it could even tell me the drive was in a frozen state. I power-cycled it (just pull out the plug from the SSD and plug it back in) and it wasn't frozen anymore. After that I could upgrade the firmware on the drive (still using Parted Magic) and format it to ext4. After I rebooted into the Ubuntu installer, it wouldn't get recognized, and hdparm didn't want to talk to it, saying "HDIO_DRIVE_CMD(identify) failed: Invalid exchange".

    EDIT6: For some reason, if I enable one of the RAID controllers (the one the SSD is connected to, obviously) Ubuntu will let me format it, mount it and write to it. The installer also recognizes it. However, if the RAID controller is enabled but no array is defined, the motherboard can't boot from it :(
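    The frozen state mentioned in EDIT5 can be checked from any live environment before attempting a format or firmware update; a minimal sketch, assuming the SSD shows up as /dev/sda:

        # Report the drive's ATA security state
        hdparm -I /dev/sda | grep -i frozen

        # If it reports "frozen", hot-replug the drive's power (as in EDIT5)
        # and re-check; secure erase and firmware updates need a non-frozen drive.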

  • Snmpd updates interface counters slowly, or something like this

    - by Korjavin Ivan
    I updated one of my FreeBSD boxes to 9-stable (totally new installation) and installed net-snmp for monitoring.

        uname -r
        9.1-PRERELEASE

        pkg_info
        net-snmp-5.7.1_7
        Information for net-snmp-5.7.1_7:
        Comment: An extendable SNMP implementation
        ....

        cat /var/db/ports/net-snmp/options
        # This file is auto-generated by 'make config'.
        # Options for net-snmp-5.7.1_7
        _OPTIONS_READ=net-snmp-5.7.1_7
        _FILE_COMPLETE_OPTIONS_LIST= IPV6 MFD_REWRITES PERL PERL_EMBEDDED PYTHON DUMMY TKMIB DMALLOC MYSQL AX_SOCKONLY UNPRIVILEGED
        OPTIONS_FILE_UNSET+=IPV6
        OPTIONS_FILE_UNSET+=MFD_REWRITES
        OPTIONS_FILE_SET+=PERL
        OPTIONS_FILE_SET+=PERL_EMBEDDED
        OPTIONS_FILE_UNSET+=PYTHON
        OPTIONS_FILE_SET+=DUMMY
        OPTIONS_FILE_UNSET+=TKMIB
        OPTIONS_FILE_SET+=DMALLOC
        OPTIONS_FILE_UNSET+=MYSQL
        OPTIONS_FILE_UNSET+=AX_SOCKONLY
        OPTIONS_FILE_UNSET+=UNPRIVILEGED

    I have about 500 VLANs on this machine, and I collect info about the interfaces through snmpd into two different pieces of software, Zabbix and Cacti. Both of them plot graphs with blank fields. I tried changing the polling time in Zabbix, from 15 sec to 30, 60, 90, 120, 10; either way I still get blank fields. snmpd.conf is empty, only access controls. This configuration worked fine on FreeBSD 8. Where is my fault? How do I fix these graphs?

    UPD: Changing the polling time and switching off one of the agents doesn't help. I looked at the Zabbix log (data received from snmpd); sorry for the Russian locale in the screenshot, just look at the numbers. And the numbers are not true: my iftop showed the speed was about 90 Mbit/s, but snmpd returned 2 Mbit/s. I understand that snmpd doesn't return a speed, it returns just a counter. But how is that possible? Why 2 Mbit/s? I tried recompiling snmpd with 64-bit counters, and without; in both variants the blank fields are present. So I think my OS (FreeBSD) isn't updating the interface counters well. I am still collecting tcpdump output to find the request/response, but I have a problem with that: too much trash.

    UPD2: I decrypted the tcpdump-ed file and published it as a Google doc at gdocfile. The time differences look strange... like Zabbix sometimes "forgets" to do a request, and then does it twice in a row, ehh.

    UPD3: I parsed the log from the command "while true; do netstat -bin -I vlan4008 >> /var/log/netstat; sleep 300; done", loaded it as a Google doc, and added a formula for speed: link. Looks like all the counters in the OS are good. Now I think the problem is: 1. Zabbix requests twice in a row (and what about Cacti?) 2. snmpd uses counter32.
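    One thing worth ruling out, given the ~90 Mbit/s rates: 32-bit octet counters (IF-MIB::ifInOctets) wrap every few minutes at those speeds, while the 64-bit HC counters do not. A hedged check, assuming an SNMP community of "public" (placeholder):

        # 32-bit counter, wraps quickly on fast links
        snmpwalk -v2c -c public localhost IF-MIB::ifInOctets

        # 64-bit high-capacity counter, needs SNMPv2c or v3
        snmpwalk -v2c -c public localhost IF-MIB::ifHCInOctets

    If the monitoring templates poll the 32-bit OIDs, pointing them at the ifHC* columns usually fixes both the blank fields and the bogus 2 Mbit/s averages.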

  • emerge only prints its parameters along with a "Wrong gcc version" message.

    - by Dmitriy Matveev
    Our Gentoo server has been left in an inconsistent state. I don't know what has been done wrong previously, but now I need to fix the system somehow. I've tried to do revdep-rebuild, but it has failed:

        ...
        x11-libs/gksu:0
        x11-libs/gtk+:2
        x11-libs/gtkglarea:2
        x11-libs/libgksu:2
        x11-libs/libsvg-cairo:0
        x11-libs/qt-gui:4
        ..........
        IMPORTANT: 12 news items need reading for repository 'gentoo'.
        Use eselect news to read news items.

        Calculating dependencies... done!
        emerge: there are no ebuilds to satisfy "gnome-base/gswitchit-plugins:0".
        emerge: searching for similar names...
        emerge: Maybe you meant any of these: gnome-base/gswitchit-plugins, gnome-extra/gswitchit-plugins, gnome-base/nautilus?

        IMPORTANT: 12 news items need reading for repository 'gentoo'.
        Use eselect news to read news items.

        revdep-rebuild failed to emerge all packages.
        you have the following choices:
        If emerge failed during the build, fix the problems and re-run revdep-rebuild.
        Use /etc/portage/package.keywords to unmask a newer version of the package.
        (and remove 5_order.rr to be evaluated again)
        Modify the above emerge command and run it manually.
        Compile or unmerge unsatisfied packages manually, remove temporary files, and try again.
        (you can edit package/ebuild list first)
        To remove temporary files, please run:
        rm /var/cache/revdep-rebuild/*.rr

    I've tried to remove one of the mentioned packages:

        harley ~ # emerge -C gswitchit-plugins
        Wrong gcc version = echo -C gswitchit-plugins
        harley ~ #

    I don't see any problems with gcc, but emerge isn't working:

        harley ~ # gcc --version
        gcc (Gentoo 4.5.2 p1.0, pie-0.4.5) 4.5.2
        Copyright (C) 2010 Free Software Foundation, Inc.
        This is free software; see the source for copying conditions. There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

        harley ~ # gcc-config -l
        [1] i686-pc-linux-gnu-3.3.6
        [2] i686-pc-linux-gnu-3.4.6
        [3] i686-pc-linux-gnu-3.4.6-hardened
        [4] i686-pc-linux-gnu-3.4.6-hardenednopie
        [5] i686-pc-linux-gnu-3.4.6-hardenednopiessp
        [6] i686-pc-linux-gnu-3.4.6-hardenednossp
        [7] i686-pc-linux-gnu-4.1.2
        [8] i686-pc-linux-gnu-4.5.2 *

        harley ~ # emerge --help
        Wrong gcc version = echo --help
        harley ~ # which emerge
        /root/bin/emerge
        harley ~ # emerge
        Wrong gcc version = echo
        harley ~ # emerge fdslkgj
        Wrong gcc version = echo fdslkgj
        harley ~ #

    How can I fix emerge?
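    The "which emerge" line is the giveaway: the real tool lives at /usr/bin/emerge, but a leftover wrapper script in /root/bin is shadowing it on root's PATH and just echoing its arguments. A minimal sketch of the cleanup, assuming nothing else depends on that wrapper:

        # Inspect the impostor first
        cat /root/bin/emerge

        # Move it aside and clear bash's cached command lookup
        mv /root/bin/emerge /root/bin/emerge.broken
        hash -r

        # Should now resolve to /usr/bin/emerge
        which emerge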

  • Why doesn't Apache start from xampp control panel after changes to vhosts config?

    - by Grafica
    I'm running xampp on my local server, and want to host multiple sites, so I changed the httpd-vhosts.conf file. Will somebody let me know if there is something wrong with my code? Apache was running while I had only one site in the config, but after I added another site, I stopped apache, and I'm not able to restart it.

        #
        # Virtual Hosts
        #
        # If you want to maintain multiple domains/hostnames on your
        # machine you can setup VirtualHost containers for them. Most configurations
        # use only name-based virtual hosts so the server doesn't need to worry about
        # IP addresses. This is indicated by the asterisks in the directives below.
        #
        # Please see the documentation at
        # <URL:http://httpd.apache.org/docs/2.2/vhosts/>
        # for further details before you try to setup virtual hosts.
        #
        # You may use the command line option '-S' to verify your virtual host
        # configuration.
        #
        # Use name-based virtual hosting.
        #
        ##NameVirtualHost *:80
        #
        # VirtualHost example:
        # Almost any Apache directive may go into a VirtualHost container.
        # The first VirtualHost section is used for all requests that do not
        # match a ServerName or ServerAlias in any <VirtualHost> block.
        #
        ##<VirtualHost *:80>
        ##ServerAdmin [email protected]
        ##DocumentRoot "C:/xampp/htdocs/dummy-host.localhost"
        ##ServerName dummy-host.localhost
        ##ServerAlias www.dummy-host.localhost
        ##ErrorLog "logs/dummy-host.localhost-error.log"
        ##CustomLog "logs/dummy-host.localhost-access.log" combined
        ##</VirtualHost>
        ##<VirtualHost *:80>
        ##ServerAdmin [email protected]
        ##DocumentRoot "C:/xampp/htdocs/dummy-host2.localhost"
        ##ServerName dummy-host2.localhost
        ##ServerAlias www.dummy-host2.localhost
        ##ErrorLog "logs/dummy-host2.localhost-error.log"
        ##CustomLog "logs/dummy-host2.localhost-access.log" combined
        ##</VirtualHost>

        NameVirtualHost *

        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName localhost
        </VirtualHost>

        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs"
            ServerName evamagnus.com
            <Directory "C:\xampp\htdocs\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *>
            DocumentRoot "C:\xampp\htdocs2\"
            ServerName mygrafica.com
            <Directory "C:\xampp\htdocs2\">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Here is what it says in the control panel:

        2:17:37 PM [apache] Starting apache service...
        2:17:38 PM [apache] Status change detected: running
        2:17:39 PM [apache] Status change detected: stopped

    Thanks in advance.
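    Apache's own checker usually pinpoints this kind of silent startup failure; from a command prompt, something like (paths assume the default XAMPP layout):

        C:\xampp\apache\bin\httpd.exe -t    (syntax check)
        C:\xampp\apache\bin\httpd.exe -S    (dump the parsed virtual hosts)

    A likely culprit is the trailing backslash in DocumentRoot "C:\xampp\htdocs2\": inside a double-quoted argument, \" escapes the closing quote and breaks parsing, so Windows paths in Apache configs are safer written with forward slashes and no trailing slash.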

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side, I have a Linux-based firewall running Debian with Openswan installed. I am having an issue with getting to Phase 2 of the VPN negotiation.

    Here is the Cisco information I was sent ({my_public_ip} = left side of connection):

        tunnel-group {my_public_ip} type ipsec-l2l
        tunnel-group {my_public_ip} ipsec-attributes
        pre-shared-key fakefake
        crypto map vpn1 1 match add customer-ipsec
        crypto map vpn1 1 set peer {my_public_ip}
        crypto map vpn1 1 set transform-set aes-256-sha
        crypto map vpn1 interface outside
        static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255
        crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac
        crypto ipsec security-association lifetime seconds 28800
        crypto ipsec security-association lifetime kilobytes 4608000
        crypto map vpn1 1 match address customer-ipsec
        crypto map vpn1 1 set peer {my_public_ip}
        crypto map vpn1 1 set transform-set aes-256-sha
        crypto map vpn1 interface outside
        crypto isakmp enable outside
        crypto isakmp policy 1
        authentication pre-share
        encryption aes-256
        hash sha
        group 2
        lifetime 86400

    My side, ipsec.conf:

        config setup
            klipsdebug=none
            plutodebug=none
            protostack=netkey
            #nat_traversal=yes

        conn cisco                        #name of VPN connection
            type=tunnel
            authby=secret
            left={myPublicIP}             #left side (my side)
            leftsubnet=172.16.250.0/24    #net subnet on left side to assign to right side
            leftnexthop=%defaultroute
            right={CiscoASA_publicIP}     #right security gateway (ASA side)
            rightsubnet=10.2.1.0/24
            rightnexthop=%defaultroute
            #crypto stuff
            keyexchange=ike
            ikelifetime=86400s
            auth=esp
            pfs=no
            compress=no
            auto=start

    ipsec.secrets file:

        {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake"

    When I start ipsec from the left side/my side I don't receive any errors; however, when I run the ipsec auto --status command:

        000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0
        000 "cisco": myip=unset; hisip=unset;
        000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
        000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0;
        000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0;
        000
        000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate
        000 #2: pending Phase 2 for "cisco" replacing #0

    Now, I'm new to setting up a site-to-site IPSEC tunnel, so I'm unsure what the status information means. All I know is it sits at this "pending Phase 2" and I can't ping the other side. Another question I have is: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles where configs contained interface="ipsec0=eth0"; is this an interface that I have to create on the Linux Debian firewall on my side? Appreciate your time to look at this.
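    For what it's worth, STATE_MAIN_I1 "(sent MI1, expecting MR1)" means the tunnel is still stuck in IKE Phase 1: the ASA never answered the first main-mode packet, so Phase 2 ("pending") hasn't actually been attempted yet. Two hedged checks from the Debian side:

        # Bring the tunnel up in the foreground and watch the exchange
        ipsec auto --up cisco

        # Confirm ISAKMP traffic actually leaves and returns
        tcpdump -ni eth0 udp port 500 or udp port 4500

    If nothing comes back on UDP 500, the pre-shared key and transform sets are irrelevant so far; it's reachability or a peer-address mismatch. And no, with the netkey stack there is no ipsec0 interface to create; that convention belongs to the older KLIPS stack.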

  • Flash was "not designed to function across LANs". Any workarounds?

    - by Triynko
    See: http://helpx.adobe.com/flash/kb/problems-using-flash-authoring-across.html

    "Issue: When using Adobe Flash across a local area network (LAN) and networked drives/folders, you may experience any of the following problems:

    - Flash crashes while performing a test movie on FLA files located on a networked drive or folder.
    - FLA files get corrupted when opening from or saving to networked drives or folders.
    - Flash does not reflect changes in custom class after compiling.
    - Flash, Flash Video Encoder, or Adobe Media Encoder crashes or corrupts Flash Video (FLV) files while encoding source located on networked drives or folders.
    - Flash Video Encoder or Adobe Media Encoder crashes or corrupts FLV files where the output folder is a networked drive or folder.
    - Published Flash Player (SWF) files and projectors are unable to load content located on networked drives or folders.
    - More than one instance of a SWF or Projector on client machines cannot play back FLV files located on a networked drive or folder.

    Reason: The Adobe Flash IDE, FLV Encoder, Adobe Media Encoder and Flash Player were not designed to function across LANs.

    Solution: Use of Flash files across local networks is not supported in any context. Published content should access data through a web server. All file sources should be opened and saved on the local system. Using Flash in such a scenario for project collaboration or content deployment is highly discouraged and may corrupt your source files. If you need to work in a collaborative environment or store source files on a server, use the project panel and/or a third-party version control system."

    SERIOUSLY? I cannot work on files located on a mapped network drive? How did they mess that one up? Does the Flash IDE really open the source file and wipe it clean to do the saving, rather than saving a copy first and then replacing it as an atomic file system operation? How hard would it be for them to make a dummy temporary file for saving and then issue a MOVE command?

    Any workarounds for this, like something that can make a network drive as stable as a local drive, like some kind of automatic local caching and syncing?

  • How to install 32-bit libraries using Debian Testing

    - by bgoodr
    Question: What is the way to determine, ahead of time and without doing a full install of 64-bit Debian Testing NETINST, when Debian Testing has 32-bit libraries available, fully working and installable, so that the following command works without broken-package errors?

        apt-get install ia32-libs ia32-libs-gtk

    The errors that occur when 32-bit libraries are not available, still in some broken state, or whatever is broken are detailed below. I have already concluded that "just install Stable" is my stop-gap measure for now, but I would like to know the answer to the above question, so as to avoid a lengthy installation process only to run into these problems at the very end.

    Details: I downloaded the 64-bit Debian Testing netinst a couple of days ago. This was "Jessie" built 20131014-06:07 via http://tinyurl.com/lejpa, a weekly testing build. Yes, I know I should expect problems, and I did. I managed to get it completely installed and was able to boot into GNOME, but not get past the 32-bit library problem. The problems start when I attempt to install the 32-bit libraries via:

        apt-get install ia32-libs ia32-libs-gtk

    which returns:

        root@breath:~# apt-get install ia32-libs ia32-libs-gtk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-i386 but it is not installable
         ia32-libs-gtk : Depends: ia32-libs-i386 but it is not installable
                         Depends: ia32-libs-gtk-i386 but it is not installable
        E: Unable to correct problems, you have held broken packages.

    I then found an old (2012 is old to me) answer at "ia32-libs : Depends: ia32-libs-i386 but it is not installable" and even tried what they suggested there, which was:

        dpkg --add-architecture i386
        apt-get update

    After executing the above, I tried again but got:

        root@breath:~# apt-get install ia32-libs ia32-libs-gtk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-i386
         ia32-libs-gtk : Depends: ia32-libs-i386
        E: Unable to correct problems, you have held broken packages.
        root@breath:~#

    And then tried this:

        root@breath:~# dpkg --get-selections | grep hold

    And that returned nothing. Not only are there broken packages, the system doesn't even know what packages are broken, so Debian Stable is the only solution I know of right now. Hence my question above.
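    Since ia32-libs was a transitional casualty of the multiarch migration around that time, one hedged way to test whether the 32-bit libraries themselves are usable, without waiting for the metapackage to become installable, is to pull a specific :i386 package directly:

        dpkg --add-architecture i386
        apt-get update

        # Install concrete 32-bit libraries instead of the ia32-libs metapackage
        apt-get install libc6:i386 libstdc++6:i386

    If those install cleanly, Testing's 32-bit libraries are working and only the ia32-libs packaging is broken.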

  • How to configure nginx so it works with Express?

    - by Michal Stefanow
    I'm trying to configure nginx so it proxy_passes requests to my node apps. A question on Stack Overflow got many upvotes: http://stackoverflow.com/questions/5009324/node-js-nginx-and-now and I'm using the config from there (but since this question is about server configuration, it is supposed to be on Server Fault).

    Here is the nginx configuration:

        server {
            listen 80;
            listen [::]:80;

            root /var/www/services.stefanow.net/public_html;
            index index.html index.htm;

            server_name services.stefanow.net;

            location / {
                try_files $uri $uri/ =404;
            }

            location /test-express {
                proxy_pass http://127.0.0.1:3002;
            }

            location /test-http {
                proxy_pass http://127.0.0.1:3003;
            }
        }

    Using plain node:

        var http = require('http');
        http.createServer(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('Hello World\n');
        }).listen(3003, '127.0.0.1');
        console.log('Server running at http://127.0.0.1:3003/');

    It works! Check: http://services.stefanow.net/test-http

    Using express:

        var express = require('express');
        var app = express();

        //
        app.get('/', function(req, res) {
            res.redirect('/index.html');
        });

        app.get('/index.html', function(req, res) {
            res.send("blah blah index.html");
        });

        app.listen(3002, "127.0.0.1");
        console.log('Server running at http://127.0.0.1:3002/');

    It doesn't work :( See: http://services.stefanow.net/test-express

    I know that something is going on:

    a) test-express is NOT running
    b) test-express IS running (and I can confirm it is running via the command line while SSHed into the server):

        root@stefanow:~# service nginx restart
         * Restarting nginx nginx          [ OK ]
        root@stefanow:~# curl localhost:3002
        Moved Temporarily. Redirecting to /index.html
        root@stefanow:~# curl localhost:3002/index.html
        blah blah index.html

    I tried setting headers as described here: http://www.nginxtips.com/how-to-setup-nginx-as-proxy-for-nodejs/ (still doesn't work):

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

    I also tried replacing '127.0.0.1' with 'localhost' and vice versa. Please advise. I'm pretty sure I'm missing some obvious detail and I would like to learn more. Thank you.
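    One detail worth checking in the config above: with "proxy_pass http://127.0.0.1:3002;" (no trailing URI), nginx forwards the original path, so Express receives GET /test-express, a route it never defined, while the plain-http server answers any path at all. That would explain exactly this working/not-working split. A hedged variant that strips the prefix before proxying:

        location /test-express/ {
            # the trailing slash on proxy_pass replaces the matched prefix
            proxy_pass http://127.0.0.1:3002/;
        }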

  • JavaMail application won't send email to external SMTP server

    - by Luiz Cruz
    This is actually a question from an exam, but I believe it could help others troubleshooting a similar situation.

    In a system, an e-mail needs to be sent to a certain mailbox. The following Java code, which is part of a larger system, was developed for that. Assume that "example.com" corresponds to a valid registered internet domain.

        public void sendEmail() {
            String s1 = "Warning";
            String b1 = "Contact IT support.";
            String r1 = "[email protected]";
            String d1 = "[email protected]";
            String h1 = "mx.intranet";

            Properties p1 = new Properties();
            p1.put("mail.host", h1);

            Session session = Session.getDefaultInstance(p1, null);
            MimeMessage message = new MimeMessage(session);
            try {
                message.setFrom(new InternetAddress(r1));
                message.addRecipient(Message.RecipientType.TO, new InternetAddress(d1));
                message.setSubject(s1);
                message.setText(b1);
                Transport.send(message);
            } catch (MessagingException e) {
                System.err.println(e);
            }
        }

    The execution of this code, within the testing environment of an application server, does NOT work as expected. The mailbox of the "example.com" server never receives the email, even though all string values in the code are correctly assigned. The output of the command "netstat -np TCP" on the application server during execution is shown below:

        Src Add        Src Port   Dest Add          Dest Port   State
        192.168.5.5    54395      192.168.7.1       25          SYN_SENT
        192.168.5.5    54390      192.168.7.1       110         TIME_WAIT
        192.168.5.5    52001      200.218.208.118   80          CLOSE_WAIT
        192.168.5.5    52050      200.218.208.118   80          ESTABLISHED
        192.168.5.5    50001      200.255.94.202    25          TIME_WAIT
        192.168.5.5    50000      200.255.94.202    25          ESTABLISHED

    With the exception of the lines that were NAT'd, all others are associated with the Java application server, which created them after the execution of the code above. The e-mail server used in this environment is the production server, which is online and does not require any authentication for internal connections.

    Based on this situation, point out three possible causes for the problem.
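    As a hint toward the expected answers: the SYN_SENT entry to 192.168.7.1:25 shows the TCP handshake with the configured host never completes (no SYN-ACK comes back), which narrows things to name resolution, routing, or filtering rather than the JavaMail code itself. The equivalent checks from the application server would be something like:

        # Does mx.intranet resolve, and does anything answer on port 25?
        getent hosts mx.intranet
        nc -vz mx.intranet 25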

  • Windows 7 remains powered on when restarting

    - by BombDefused
    I'm running Windows 7 x64 on an MSI P67A-GD53 motherboard, in an Antec P280 Super Midi Tower case with a Corsair 650W PSU. I've just installed a second instance of Windows 7 x64 on a separate disk (this is to keep my games separate from my work OS).

    The problem is that it appears now that I cannot restart from either instance of Windows 7. The shutdown and sleep commands work as expected. When I try to restart, the shutdown happens but the system never reboots. Everything remains powered on, until I hold down the power button to force the power off. I think (but am not 100% sure) this has only started since I installed the second OS, and am assuming this has something to do with the motherboard needing to know which OS to run up again. Some other forums I've read suggest that the PSU has a major role in restart and could be at fault. Changing the boot order of the disks in the BIOS does not change anything. Any suggestions gratefully received!

    Update: I now have a reproducible issue. I think the secondary OS install may have been a red herring; it was when Windows tried to reboot during the install that I noticed the issue. After playing around with installing drivers, and rebooting many many times, I have found that it is the OC Genie setting on the MSI motherboard that seems to trigger the problem. This makes sense, as I only started using the OC Genie feature a couple of weeks ago and probably hadn't used restart in that time.

    However... simply turning off OC Genie does not make the issue go away. I have to turn off OC Genie, shut down, start, enter the BIOS, go to the "Save and Exit" menu, "Restore Defaults", yes to "Load optimized defaults", which will reset to clear the problem. Now when the PC boots into Windows, I can restart as normal (and from the OS on either HDD).

    I only know how to control the issue and still don't know the root cause. I'd like to be able to use the OC Genie function, if anyone can suggest why I'm seeing this problem. Could it be that I'm drawing too much power when using the OC feature?

  • Ubuntu server 10.04 doesn't boot into installed Gnome desktop automatically

    - by Tong Wang
    I've installed Ubuntu Server 10.04 and then installed the GNOME desktop on top of it; because I am new to Linux and its command line, I need the GUI desktop to help me get around. However, the problem I've got is that the server doesn't boot into the GUI desktop when powered on. It's booting into a shell like this:

        Gave up waiting for root device. Common problems:
         - Boot args (cat /proc/cmdline)
           - Check rootdelay= (did the system wait long enough?)
           - Check root= (did the system wait for the right device?)
         - Missing modules (cat /proc/modules; ls /dev)
        ALERT! /dev/mapper/cecdata-root does not exist. Dropping to a shell!

        BusyBox v1.13.3 (Ubuntu 1:1.13.3-1ubuntu11) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs)

    Result of (cat /proc/cmdline):

        BOOT_IMAGE=/vmlinuz-2.6.32-28-server root=/dev/mapper/cecdata-root ro quiet

    Then I have to type "exit" to exit the shell, and then it boots into GNOME. Any idea what's wrong?

    Edit: adding output for the following commands:

        wt@cecdata:~$ ls /dev/mapper/
        cecdata-root  cecdata-swap_1  control

        wt@cecdata:~$ fdisk -l
        wt@cecdata:~$

        wt@cecdata:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        /dev/mapper/cecdata-root / ext4 errors=remount-ro 0 1
        # /boot was on /dev/sda1 during installation
        UUID=1635be41-d025-405e-b4a3-6f0abedb7aab /boot ext2 defaults 0 2
        /dev/mapper/cecdata-swap_1 none swap sw 0 0
        wt@cecdata:~$

    Adding output for lsmod:

        wt@cecdata:~$ lsmod
        Module                  Size  Used by
        fbcon                  39270  71
        tileblit                2487  1 fbcon
        font                    8053  1 fbcon
        bitblit                 5811  1 fbcon
        softcursor              1565  1 bitblit
        dell_wmi                2177  0
        dcdbas                  6918  0
        vga16fb                12757  1
        vgastate                9857  1 vga16fb
        psmouse                64576  0
        serio_raw               4950  0
        power_meter             9473  0
        bnx2                   72874  0
        lp                      9336  0
        parport                37160  1 lp
        mptsas                 50592  2
        usbhid                 41116  0
        mptscsih               37167  1 mptsas
        hid                    83568  1 usbhid
        mptbase                91674  2 mptsas,mptscsih
        scsi_transport_sas     33021  1 mptsas
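    The "Gave up waiting for root device" alert on an LVM root usually means the volume group was not activated before the wait expired, which also fits typing "exit" fixing it (the initramfs retries and succeeds). Two hedged steps, one from the BusyBox prompt and one for future boots:

        # From the (initramfs) shell: activate the VG by hand, then continue
        lvm vgscan
        lvm vgchange -ay
        exit

        # From the running system: give the device more time at boot by adding
        # rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
        update-grub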

  • Issue with Netgear GS108T Managed Switch and Jumbo Frames

    - by Richie086
    I recently purchased a Netgear GS108T managed switch and I am trying to configure jumbo frames between my NAS (Thecus N4100Pro), my PC, and the switch. I should mention that I was able to use jumbo frames between my PC and NAS before I purchased the switch, without issue.

    My desktop has a wired gigabit NIC (Intel 82579V Gigabit) with the ability to configure jumbo frames (see pic) of either 9014 bytes or 4088 bytes. I chose 9014 bytes for the jumbo frame size. My NAS supports jumbo frames as well and is configured to use 9014 as the frame size. I then went into my Netgear managed switch and set the frame size to 9014 on the ports I am using for my PC and NAS (see image).

    As soon as I hit apply in the web interface, I lose my connection to the SMB shares on my NAS and I can no longer connect to the web admin interface for my NAS. The really strange thing is I can ping my NAS via the ping command, but when I try to connect to the web interface on port 80 or port 443, the page never loads. I did a scan from my PC to my NAS using nmap and I can see the following ports open:

        PORT      STATE SERVICE
        22/tcp    open  ssh
        80/tcp    open  http
        111/tcp   open  rpcbind
        139/tcp   open  netbios-ssn
        443/tcp   open  https
        445/tcp   open  microsoft-ds
        631/tcp   open  ipp
        2000/tcp  open  cisco-sccp
        2049/tcp  open  nfs
        3260/tcp  open  iscsi
        49152/tcp open  unknown
        MAC Address: 00:14:FD:15:00:44 (Thecus Technology)

        Read data files from: C:\Program Files (x86)\Nmap
        Nmap done: 1 IP address (1 host up) scanned in 211.97 seconds
        Raw packets sent: 1 (28B) | Rcvd: 1 (28B)

    Anyone have any idea what is going on here? Why is nmap able to detect that the ports are open and listening for http, https and file sharing, but I can't connect when all devices have jumbo packets enabled? Stranger still, I did a packet capture using Wireshark while the nmap scan was running and filtered so I only saw conversations between my PC and my NAS (packet details were attached as a screenshot). Only 4 packets over 5k bytes? What is going on here? Do I not need to configure jumbo frame sizes on the switch? I have an internet connection from my PC to the switch to my router; I just cannot connect to my NAS.

    I just checked on my iPhone and I am able to open my NAS web admin interface without issue on my iPhone! WTF!!!!!! Let me know if you need more details..
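    Before blaming the switch settings, it's worth proving what frame size actually survives the path end to end. From the Windows box, a hedged test (9000-byte MTU minus 28 bytes of IP and ICMP headers; the NAS address 192.168.1.50 is a placeholder):

        ping -f -l 8972 192.168.1.50    (don't-fragment, 8972-byte payload)
        ping -f -l 1472 192.168.1.50    (standard-MTU control test)

    If the 1472-byte ping works and the 8972-byte one fails, jumbo frames are being dropped somewhere between NIC, switch, and NAS, which matches TCP sessions (large segments) dying while small pings and nmap probes get through.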

  • PowerShell remove force

    - by mausch
    Trying to delete a directory recursively with rm -Force -Recurse somedirectory, I get several "The directory is not empty" errors. If I retry the same command, it succeeds. Example:

        PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime\_svn: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data\RunTime: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (RunTime:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests\Data: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (Data:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\FileHelpers.Tests: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (FileHelpers.Tests:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit\_svn: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (_svn:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs\nunit: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (nunit:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers\Libs: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (Libs:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        Remove-Item : Cannot remove item I:\Documents and Settings\m\My Documents\prg\net\FileHelpers: The directory is not empty.
        At line:1 char:3
        + rm <<<< -Force -Recurse .\FileHelpers
            + CategoryInfo : WriteError: (I:\Documents an...net\FileHelpers:DirectoryInfo) [Remove-Item], IOException
            + FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand
        PS I:\Documents and Settings\m\My Documents\prg\net> rm -Force -Recurse .\FileHelpers
        PS I:\Documents and Settings\m\My Documents\prg\net>

    Of course, this doesn't happen every time. Also, it doesn't happen only with _svn directories, and I don't have the TortoiseSVN cache or anything like that, so nothing is blocking the directory. Any ideas?

  • kill -9 doesn't work

    - by Daniel
    I have a server with 3 Oracle instances on it, and the file system is NFS with a NetApp. After shutting down the databases, one process for each database doesn't quit for a long time, and each kill -9 doesn't work. I tried to truss and pfiles them; the commands threw errors. And iostat showed there was a lot of IO to the NetApp server. So someone said the processes were busy writing data to the remote NetApp server, and before the writes completed, they wouldn't quit. So all that needed to be done was to wait until all the IO was finished. After waiting a long time (about 1.5 hours), the processes exited.

    So my question is: how can a process ignore the kill signal? As far as I know, if we kill -9, it should stop immediately. Have you encountered such a situation, where kill -9 doesn't kill the process right away?

        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1469 25053   0 22:36:53 pts/1       0:00 grep dbw0
          oracle 26795     1   0 21:55:23 ?           0:00 ora_dbw0_TEST7
          oracle  1051     1   0   Apr 08 ?        3958:51 ora_dbw0_TEST2
          oracle   471     1   0   Apr 08 ?        6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 1051
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1493 25053   0 22:37:07 pts/1       0:00 grep dbw0
          oracle 26795     1   0 21:55:23 ?           0:00 ora_dbw0_TEST7
          oracle  1051     1   0   Apr 08 ?        3958:51 ora_dbw0_TEST2
          oracle   471     1   0   Apr 08 ?        6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 471
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle 26795     1   0 21:55:23 ?           0:00 ora_dbw0_TEST7
          oracle  1051     1   0   Apr 08 ?        3958:51 ora_dbw0_TEST2
          oracle   471     1   0   Apr 08 ?        6391:43 ora_dbw0_TEST1
          oracle  1495 25053   0 22:37:22 pts/1       0:00 grep dbw0
        TEST7-stdby-phxdbnfs11$ ps -ef|grep smon
          oracle  1524 25053   0 22:38:02 pts/1       0:00 grep smon
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1526 25053   0 22:38:06 pts/1       0:00 grep dbw0
          oracle 26795     1   0 21:55:23 ?           0:00 ora_dbw0_TEST7
          oracle  1051     1   0   Apr 08 ?        3958:51 ora_dbw0_TEST2
          oracle   471     1   0   Apr 08 ?        6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ kill -9 1051 471 26795
        TEST7-stdby-phxdbnfs11$ ps -ef|grep dbw0
          oracle  1528 25053   0 22:38:19 pts/1       0:00 grep dbw0
          oracle 26795     1   0 21:55:23 ?           0:00 ora_dbw0_TEST7
          oracle  1051     1   0   Apr 08 ?        3958:51 ora_dbw0_TEST2
          oracle   471     1   0   Apr 08 ?        6391:43 ora_dbw0_TEST1
        TEST7-stdby-phxdbnfs11$ truss -p 26795
        truss: unanticipated system error: 26795
        TEST7-stdby-phxdbnfs11$ pfiles 26795
        pfiles: unanticipated system error: 26795

  • Linux Debian Security Breach - what now? [closed]

    - by user897075
    Possible Duplicate: My server's been hacked EMERGENCY

    I installed Debian (Squeeze) a while back on my home network to host some personal sites (thank god). During the installation it prompted me to enter a user other than root, so in a rush I used my name as user and pass (alex/alex, for what it's worth). I know it's horrible practice, but during the setup of this server I was always logged in as root to perform configurations, etc. A few days or a week passed and I forgot to change the password. Then I finally got my web site finished, and I opened up port forwarding on my router and DynDNS to point to the server in my home. I've done this many times in the past without issues, but I use a cryptic root password and, I guess, disabled regular accounts.

    Today I reformatted my Windows 7 machine, and after spending all day tweaking and updating SP1 I looked for cloning apps and found Clonezilla, and saw it supports SSH cloning, so I went through the process, only to discover I needed a user. So I logged into my web server, saw I had the user 'alex' already there, and realized I didn't know the password. So I changed the password to something cryptic and visited the directory 'home', only to realize there were contents such as passfile, bengos, etc. My heart sank: I'd been hacked!!! Sure as hell, there were all sorts of scripts and password files. I ran a 'last' command, and it seems they last logged in April 3rd.

    Questions:

    - What can I do to see if they did anything destructive? Should I reformat and reinstall?
    - How restrictive is Debian/Squeeze in terms of user permissions out of the box? All my personal website stuff was created using 'root', so changing files does not seem to have occurred.
    - How did they determine there was a user 'alex' on the machine? Can you query any machine and figure this out? What the users are?
    - It looks like they tried to run an IP scan... other nodes on the network are running Windows 7, one of which seems a little wonky as of late. Is it possible they buggered up that system?
    - What corrective action can I take to avoid this happening again? And to figure out what might have been changed or hacked?

    I'm hoping Debian out of the box is fairly secure and at best they managed to read some of my source code. :p

    Regards, Alex

  • Scientific Linux - mysql and apache fail to start on reboot

    - by Derek Deed
    Both mysqld and httpd fail to restart following a reboot of the server, although chkconfig --list shows both daemons set to on for run levels 2, 3, 4 & 5. All control is being executed via Webmin.

    After rebooting the server, MySQL and Apache are not running. The Webmin MySQL Database Server module (MySQL version 5.1.69) reports: "MySQL is not running on your system - database list could not be retrieved. Click this button to start the MySQL database server on your system with the command /etc/rc.d/init.d/mysqld start. This Webmin module cannot administer the database until it is started." The Apache Webserver module (Apache version 2.2.15) likewise offers a "Start Apache" button; the configured hosts are the default server plus a virtual server for port 443, both with Document Root /var/www/drupal.

        chkconfig --list mysqld
        mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off
        chkconfig --list httpd
        httpd  0:off 1:off 2:on 3:on 4:on 5:on 6:off

    After manually restarting Apache:

        chkconfig --list httpd
        httpd  0:off 1:off 2:on 3:on 4:on 5:on 6:off

    After manually restarting MySQL:

        chkconfig --list mysqld
        mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off

    Everything is now running okay, but there is no difference in the chkconfig outputs above. I also set:

        chkconfig --levels 235 httpd on
        /etc/init.d/httpd start

    and the same for mysqld, but there's no change in behaviour. The log files show that the shutdown completed successfully, but there is no indication of the services restarting until they are started manually:

        131112 13:59:15 InnoDB: Starting shutdown...
        131112 13:59:16 InnoDB: Shutdown completed; log sequence number 0 881747021
        131112 13:59:16 [Note] /usr/libexec/mysqld: Shutdown complete
        131112 13:59:16 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
        131112 14:09:52 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131112 14:09:52 InnoDB: Initializing buffer pool, size = 8.0M
        131112 14:09:52 InnoDB: Completed initialization of buffer pool

        [Tue Nov 12 13:59:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Tue Nov 12 13:59:13 2013] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 12 13:59:13 2013] [notice] Digest: done
        [Tue Nov 12 13:59:14 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
        [Tue Nov 12 13:59:14 2013] [notice] caught SIGTERM, shutting down
        [Tue Nov 12 14:27:13 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Tue Nov 12 14:27:13 2013] [notice] Digest: generating secret for digest authentication ...
        [Tue Nov 12 14:27:13 2013] [notice] Digest: done
        [Tue Nov 12 14:27:13 2013] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.3 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations

    Is anyone able to shed any light on this problem? Cheers, Derek.
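    Since chkconfig says both services are on for the default runlevel, the next things to verify are that the S-symlinks really exist and whether the init scripts ran at boot at all; a couple of hedged checks:

        # Confirm the default runlevel and its start links
        runlevel
        ls /etc/rc3.d/ | grep -Ei 'httpd|mysqld'

        # See whether the init scripts reported anything during boot
        grep -Ei 'httpd|mysqld' /var/log/boot.log

    If the S-links exist but nothing appears in boot.log, something earlier in the boot sequence is likely failing before those scripts run.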

  • Archlinux/atheros WLAN configuration troubles

    - by GrinReaper
    I'm trying to configure Arch Linux to use my wireless network adapter. It's quite troublesome. From what I've gathered, it's an Atheros network adapter, using the ath5k driver/module... I can't get it to work; any ideas? Here's some of the output from my tinkering:

        # lspci | grep -i net
        00:0a.0 Ethernet controller: nVidia Corporation MCP67 Ethernet (reva2)
        03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev01)

        # lsusb
        ...
        Bus 004 Device 003: ID 03f0:17d Hewlett Packard Wireless (Bluetooth + WLAN Interface [Integrated Module]

        # ping -c 3 www.google.com
        ping: unknown host www.google.com

        # ping -c 3 8.8.8.8
        ping: network is unreachable

        # lspci -v
        03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev01)
        ...
        Kernel driver in use: ath5k
        Kernel modules: ath5k

        # dmesg | grep ath5k
        registered as phy0
        registered led device
        ath5k: atheros chip found
        PCI INT A disabled
        registered led device
        registered as phy1

        # ip addr | sed '/^[0-9]/!d;s/: <.*$//'
        1: lo
        2: eth1
        3: eth0

        # ip link set <interface> up/down
        RNETLINK answers: Operation not possible due to RF-kill

    Also, is there a way to dump text from the command line to a text file, so I can just copy-paste? Sorry, first time using a Linux distro...

    EDIT: So I just tried this (I actually did it twice; I can't tell which setting is on/off for my wireless adapter, since the lights are blue all the time now):

        # rfkill list
        0: hp-wifi: wireless lan
            softblocked: no  hardblocked: yes
        1: hp-bluetooth: bluetooth
            softblocked: no  hardblocked: yes
        3: phy1: wireless lan
            softblocked: no  hardblocked: yes

        # rfkill list
        0: hp-wifi: wireless lan
            softblocked: no  hardblocked: no
        1: hp-bluetooth: bluetooth
            softblocked: no  hardblocked: no
        3: phy1: wireless lan
            softblocked: no  hardblocked: yes
        7: hci0: bluetooth
        0: hp-wifi: wireless lan
            softblocked: no  hardblocked: no

    I've dug around some other articles and it seems like ath5k is supposed to be preferable to madwifi, so should I be using madwifi? I'm 99% sure I disabled the hardblock (by turning it ON), but as shown above, phy1 wireless lan is STILL hardblocked. What gives? Maybe I've made some more fundamental error in a basic config file?

    EDIT: I've fixed the hardblock. I've tried pinging www.google.com, but to no avail. I get:

        ping: unknown host www.google.com

    In the Arch wiki: Edit /etc/hosts and add the same HOSTNAME you entered in /etc/rc.conf:

        127.0.0.1 archlinux.domain.org localhost.localdomain localhost archlinux

    To my understanding, the hostname is just user-specified and based on preference(?). My /etc/rc.conf:

        HOSTNAME="gestalt"

    My /etc/hosts:

        127.0.0.1 localhost.localdomain localhost gestalt

    but should it be the following?

        127.0.0.1 localhost.domain.org localhost.localdomain localhost gestalt
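    The remaining hard block on phy1 is the thing to clear, and rfkill can also answer the side question about capturing output, since any command's output can be redirected to a file:

        # Clear all soft blocks and re-check; a persistent "hardblocked: yes"
        # on phy1 means a hardware switch or Fn-key still has the radio off
        rfkill unblock all
        rfkill list > rfkill-output.txt    # redirect output to a text file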

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.

    Back story: My boss travels a lot and wants to be able to access all his files from the road, and is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure are strange, to get around this limitation.

    Specs: It's OpenVPN 2, and there are six machines using it (only three actually use it; the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). The other XP desktop sits at the office and hosts a webpage that will wake the main Mac desktop from sleep, and also pings all the machines on the VPN and shows their status. The main office Mac (10.6) stays in sleep mode until it gets the Wake-on-LAN packet from the office XP, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private-IP crap means I can't directly access the office machines from outside the LAN, so everyone connects to my house first, and then they talk to each other from there. The Wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss.

    The VPN uses client config files to give the clients static IPs. The only thing I found in Google was some changes to the VPN MTU settings (down to 1400), but no real help. Oh, and I forgot... all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode.

    Server config:

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status
        log

    Client configs (all are simple variations on this):

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        presist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3
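    Given the satellite link, one hedged experiment is to find the largest UDP payload that crosses from the Mac side without fragmenting, and tune the tunnel just below it (vpn.example.com stands in for the real server address):

        # macOS ping: -D sets don't-fragment, -s sets the payload size
        ping -D -s 1400 vpn.example.com

    Walk the size down until replies succeed, then set tun-mtu/fragment/mssfix on both ends a little below the working value; path-MTU problems on high-latency links often show up unevenly across client stacks, which would fit Windows being fine while the Macs crawl.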

  • Uninstall php and nginx or fix setup

    - by jreed121
    First off, I'm a huge Linux noob, sorry... I'm trying to set up nginx with php-fpm on Debian, and I'm pretty sure that I've completely screwed it up. nginx seems to be running fine, because I can hit it from a web browser and it loads the stock "Welcome to nginx!" page. I'm not so sure about php-fpm, though. When I try something like "restart php-fpm" I get:

        bash: restart: command not found

    First off, php-fpm somehow got installed as php5-fpm: when I do "root@server:/etc/init.d# ls" it shows up as php5-fpm, which seems to contradict every tutorial and help doc I've read (it's supposed to be 'php-fpm'). I can restart it with:

        service php5-fpm restart

    And when I just enter the package name 'php5-fpm' I get this:

        root@server:~# php5-fpm
        [17-Nov-2012 23:15:36] NOTICE: PHP message: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/suhosin.so' - /usr/lib/php5/20100525/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [17-Nov-2012 23:15:36] ERROR: An another FPM instance seems to already listen on /var/run/php5-fpm.sock
        [17-Nov-2012 23:15:36] ERROR: FPM initialization failed

    The root for nginx is /usr/share/nginx/html; when I try to navigate to a .php file in there with my web browser, it tries to download the file instead of interpreting it. I would like this folder to be in my user's home directory, i.e. /home/administrator/www or /home/nginx/www. I know that in order to do this I need to modify nginx.conf, but I find that configuration file difficult to understand. I suppose the fact that my .php scripts aren't being handled is my bigger problem anyway.

    When I try to see what's running on port 9000 (php-fpm's default port) with "lsof -i :9000", it returns nothing, I guess indicating that it isn't listening. Then I head over to "vim /etc/php5/fpm/php-fpm.conf" and there is nowhere to designate a port number.

    So should I just uninstall everything and start from scratch? If so, how do I clean it all up? Any suggestions for a tutorial once I'm ready to try again? Should I attempt to troubleshoot this mess? If so, where should I start?

    Sorry guys, I'm feeling pretty stupid and lost right now. I'm not sure what my next steps in trying to resolve this issue are. I realize that this is a horrible question for this type of Q&A site, but I'd really appreciate any guidance.
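    The browser downloading .php files means nginx has no PHP handler wired into the server block at all, so nothing ever reaches php5-fpm. A minimal sketch of the missing piece, assuming the Debian socket path that appears in the error messages above:

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }

    That would also explain the empty lsof output: Debian's php5-fpm listens on that Unix socket by default rather than on TCP port 9000 (the listen directive lives in the pool config under /etc/php5/fpm/pool.d/, not in php-fpm.conf). And the package really is named php5-fpm on Debian; the tutorials saying php-fpm are written for other distros.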

  • Xubuntu login hangs after Cancel Button click

    - by akester
    I'm running Xubuntu 12.04 (installed using the alternative installer), running in VirtualBox 4.1.20.

    My issue is with the login screen (lightdm-gtk-greeter). It usually runs just fine and allows users to log in and out, but it will hang if the user presses the Cancel button. The interface is still working (i.e., the shutdown menu is still available, and I can switch to a different tty) but the username or password field (depending on when the button is hit) is disabled. Restarting lightdm will reset the screen, but the problem still exists. The issue is only with the Cancel button. The login, session, and language buttons/menus, as well as the accessibility and shutdown menus, appear to work normally.

    I've modified some of the config files for lightdm-gtk-greeter, specifically /etc/lightdm/lightdm-gtk-greeter.conf to change the background image, and /etc/lightdm/lightdm.conf to disable the user list. I did not check whether the error existed before the changes took place. The changes have been restored to the default settings, but the problem persists.

    Here is the output of /var/log/lightdm/lightdm.log when the screen is hung:

        [+0.00s] DEBUG: Logging to /var/log/lightdm/lightdm.log
        [+0.00s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=2072
        [+0.00s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf
        [+0.00s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager
        [+0.00s] DEBUG: Registered seat module xlocal
        [+0.00s] DEBUG: Registered seat module xremote
        [+0.00s] DEBUG: Adding default seat
        [+0.00s] DEBUG: Starting seat
        [+0.00s] DEBUG: Starting new display for greeter
        [+0.00s] DEBUG: Starting local X display
        [+0.02s] DEBUG: Using VT 7
        [+0.02s] DEBUG: Activating VT 7
        [+0.03s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+0.04s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+0.04s] DEBUG: Launching X Server
        [+0.05s] DEBUG: Launching process 2078: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
        [+0.05s] DEBUG: Waiting for ready signal from X server :0
        [+0.05s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
        [+0.05s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
        [+0.28s] DEBUG: Got signal 10 from process 2078
        [+0.28s] DEBUG: Got signal from X server :0
        [+0.28s] DEBUG: Connecting to XServer :0
        [+0.29s] DEBUG: Starting greeter
        [+0.29s] DEBUG: Started session 2082 with service 'lightdm', username 'lightdm'
        [+0.36s] DEBUG: Session 2082 authentication complete with return value 0: Success
        [+0.36s] DEBUG: Greeter authorized
        [+0.36s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log
        [+0.36s] DEBUG: Session 2082 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/lightdm-gtk-greeter
        [+0.58s] DEBUG: Greeter connected version=1.2.1
        [+0.58s] DEBUG: Greeter connected, display is ready
        [+0.58s] DEBUG: New display ready, switching to it
        [+0.58s] DEBUG: Activating VT 7
        [+1.04s] DEBUG: Greeter start authentication for andrew
        [+1.04s] DEBUG: Started session 2137 with service 'lightdm', username 'andrew'
        [+1.09s] DEBUG: Session 2137 got 1 message(s) from PAM
        [+1.09s] DEBUG: Prompt greeter with 1 message(s)
        [+17.24s] DEBUG: Cancel authentication
        [+17.24s] DEBUG: Session 2137: Sending SIGTERM

  • SQL Server 2008 R2 upgrade fails on upgrade rule check

    - by Tim
    I'm trying to upgrade an evaluation instance of SQL Server 2008 to a fully licensed instance of SQL Server 2008 R2. I made it most of the way through the installer, but I'm getting stopped at the Upgrade Rules page: the SQL Server Analysis Services Upgrade Service Functional Check is failing. The specific error I get:

        Rule "SQL Server Analysis Services Upgrade Service Functional Check" failed.
        The current instance of the SQL Server Analysis Services service cannot be upgraded because the Analysis Services service is disabled or not online. Please start the service and then run the upgrade rules check again.

    Simple enough; I just need to start the service. Here's where it gets troublesome. When I open Services and go to start the SQL Server Analysis Services (MSSQLSERVER) service, it gives me the following message:

        The SQL Server Analysis Services (MSSQLSERVER) service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs.

    Trying from the command line as Administrator yields:

        PS C:\Windows\System32> net start MSSQLServerOLAPService
        The SQL Server Analysis Services (MSSQLSERVER) service is starting...
        The SQL Server Analysis Services (MSSQLSERVER) service could not be started.
        The service did not report an error.
        More help is available by typing NET HELPMSG 3534.

    I've tried changing the logon setting of this service to Administrator, a user with admin privileges, and both the Local System and Network Service accounts; nothing works. In addition, when I look at the service through SQL Server Configuration Manager (also run as Administrator), attempting to change the logon setting for the service results in the message:

        The server threw an exception. [0x80010105]

    I have no need for Analysis Services itself; all I need is for this one service to run long enough to do the R2 upgrade, then it can shut down again. Any thoughts on how to get the Analysis Services service running?

    Update: Checking the event log, I found an error logged to the Application log from MSSQLServerOLAPService. It has event ID 0, task category (289), and says:

        The service cannot be started: XML parsing failed at line 1, column 4: Unrecognized input signature.

    Read the article

  • Using smartctl to get vendor-specific attributes from an SSD behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers have worked well so far. Now I'd like to get wear-level info, error statistics, etc. from the drives. The SA P410 supports passing SMART commands through to a single drive in the array, but I was not able to get the interesting values out of the output. In this case the wear-level indicator (attribute ID 233) is of particular interest to me, but it is only present when the drive is directly attached to a SATA controller.

    smartctl on a directly connected SSD:

        # smartctl -A /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 5
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          3 Spin_Up_Time            0x0000 100   000   000    Old_age  Offline In_the_past 0
          4 Start_Stop_Count        0x0000 100   000   000    Old_age  Offline In_the_past 0
          5 Reallocated_Sector_Ct   0x0002 100   100   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0002 100   100   000    Old_age  Always  -           8561
         12 Power_Cycle_Count       0x0002 100   100   000    Old_age  Always  -           55
        192 Power-Off_Retract_Count 0x0002 100   100   000    Old_age  Always  -           29
        232 Unknown_Attribute       0x0003 100   100   010    Pre-fail Always  -           0
        233 Unknown_Attribute       0x0002 088   088   000    Old_age  Always  -           0
        225 Load_Cycle_Count        0x0000 198   198   000    Old_age  Offline -           508509
        226 Load-in_Time            0x0002 255   000   000    Old_age  Always  In_the_past 0
        227 Torq-amp_Count          0x0002 000   000   000    Old_age  Always  FAILING_NOW 0
        228 Power-off_Retract_Count 0x0002 000   000   000    Old_age  Always  FAILING_NOW 0

    smartctl on a P410-connected SSD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c1d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

    (Right, it is completely empty.)

    smartctl on a P410-connected HDD:

        # ./smartctl -A -d cciss,0 /dev/cciss/c0d0
        smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

        Current Drive Temperature:     27 C
        Drive Trip Temperature:        68 C
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1871654030
          Blocks received from initiator = 1360012929
          Blocks read from cache and sent to initiator = 2178203797
          Number of read and write commands whose size <= segment size = 46052239
          Number of read and write commands whose size > segment size = 0
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 3363.25
          number of minutes until next internal SMART test = 12

    Am I hunting a bug here, or is this a limitation of the P410 SMART command passthrough?
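    For what it's worth, the cciss passthrough addresses physical drives by number, so it can be worth probing each bay in turn (a minimal sketch; the number of bays is an assumption, adjust the range to your backplane):

        # Probe each physical drive behind the controller, one bay at a time
        for i in 0 1 2 3; do
            echo "=== physical drive $i ==="
            smartctl -a -d cciss,$i /dev/cciss/c1d0
        done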

    Read the article

  • Server load spikes several times a day; load average for the past month is 5 times the load average all year

    - by AMF
    My Munin notifications set up for our (Debian) LAMP cluster have been continuously notifying me that the load on our production machine is at dangerous levels. While the average load all year typically runs between 2 and 8, the load in the past month, and only the past month, has been skyrocketing to 10, 18, and occasionally even 50-60. The spikes last only 5-10 minutes at a time and occur about every 2-3 hours. The spikes do not affect performance only because I have a script that sends traffic off our server to a mirror CDN when the load goes above 10. I've looked for cron jobs that correlate with this timeframe, but there is nothing I can see that would cause this. Site traffic is also normal (we receive about 200K visits per day). I've also tried to think of anything I changed around the time this problem began, and I really cannot think of anything. This is probably not much to go on. Maybe there is a clue in the top printout (below) that I'm not seeing. How do I proceed to find the cause?

    Typical top output when the load is NOT spiking:

        top - 11:13:09 up 472 days, 25 min, 1 user, load average: 6.08, 4.29, 3.80
        Tasks: 105 total, 1 running, 104 sleeping, 0 stopped, 0 zombie
        Cpu(s): 41.2%us, 5.8%sy, 0.0%ni, 49.5%id, 2.7%wa, 0.1%hi, 0.7%si, 0.0%st
        Mem:  3369592k total, 2166980k used, 1202612k free,  559504k buffers
        Swap: 2650684k total,    1892k used, 2648792k free, 1129116k cached

          PID USER   PR NI VIRT  RES  SHR  S %CPU %MEM  TIME+   COMMAND
        32046 apache 15  0 36300  12m 9828 S   20  0.4  0:01.97 apache2
        32679 apache 15  0 36568  13m  10m S   19  0.4  0:01.69 apache2
        31441 apache 15  0 36616  13m  10m S   19  0.4  0:04.13 apache2
        31477 apache 15  0 36596  13m 9.8m S   15  0.4  0:01.99 apache2
        31993 apache 15  0 36876  16m  12m S   12  0.5  0:02.01 apache2
        31782 apache 15  0 36836  14m  10m S    8  0.4  0:02.17 apache2
        32198 apache 15  0 36536  13m  10m S    7  0.4  0:01.59 apache2
          880 apache 15  0 36508 9708 6236 S    7  0.3  0:00.42 apache2
        31945 apache 17  0 36876  16m  13m S    5  0.5  0:03.17 apache2
        32197 apache 16  0 36636  10m 7504 S    5  0.3  0:02.70 apache2
        32326 apache 15  0 37024  11m 7632 S    5  0.3  0:02.15 apache2
        32565 apache 15  0 37280  13m 9.8m S    5  0.4  0:03.75 apache2
        32676 apache 15  0 36896  16m  12m S    4  0.5  0:00.95 apache2
        32678 apache 15  0 36536  12m 9692 S    4  0.4  0:02.27 apache2
          974 apache 16  0 37064 9888 6016 D    4  0.3  0:00.13 apache2
        32150 apache 16  0 36832  13m  10m S    3  0.4  0:01.74 apache2
        31780 apache 16  0 36848  11m 7660 S    3  0.3  0:02.87 apache2
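    One way to catch the culprit in the act is a minute-by-minute watchdog run from cron (a sketch only; the threshold, script path, and log path are assumptions):

        #!/bin/sh
        # Snapshot the busiest processes whenever the 1-minute load average exceeds 10
        LOAD=$(cut -d ' ' -f1 /proc/loadavg | cut -d '.' -f1)
        if [ "$LOAD" -ge 10 ]; then
            { date; top -b -n 1 | head -n 30; } >> /var/log/load-spikes.log
        fi

    With a crontab entry such as "* * * * * /usr/local/sbin/load-watch.sh", the log will show what was running during each spike instead of the quiet periods top normally catches.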

    Read the article

  • VPN - force a selective range of IPs to run over the VPN (Linux)

    - by Francesco
    Preface: I know there are similar questions here and there; however, I'm a bit of a newbie on networking, so I need an answer for this specific scenario, hoping it can help others too, as it is a common problem. Let's say I cannot do anything on the local switch to change the local IP range, and I don't want to use any complicated trick such as a virtual machine to hide the local IP range; I want to solve the issue with net tools.

    Scenario: my local net assigns me an IP in the 192.168.1.xxx range (e.g. 192.168.1.116) and my VPN (VPNC) assigns me an IP in the same 192.168.1.xxx range (e.g. 192.168.1.247). Obviously I need the VPN to access addresses such as 192.168.1.100, but when I open any address in the 192.168.1.xxx range, the route points to my local net and not to the VPN. I'm on Linux and I'd like a GUI solution (Network Manager); in case that is not possible, let's play with the route command. Here is what Network Manager offers me. Here is my actual route once connected to the VPN.

    Here is some route information (route -n):

        Destination   Gateway     Genmask         Flags Metric Ref Use Iface
        0.0.0.0       0.0.0.0     0.0.0.0         U     0      0   0   ppp0
        169.254.0.0   0.0.0.0     255.255.0.0     U     1000   0   0   wlan0
        182.71.21.106 192.168.1.1 255.255.255.255 UGH   0      0   0   wlan0
        182.71.21.106 192.168.1.1 255.255.255.255 UGH   0      0   0   wlan0
        192.168.1.0   0.0.0.0     255.255.255.0   U     9      0   0   wlan0
        192.168.1.246 0.0.0.0     255.255.255.255 UH    0      0   0   ppp0

    Here is my ifconfig output:

        ppp0  Link encap:Point-to-Point Protocol
              inet addr:192.168.1.247  P-t-P:192.168.1.246  Mask:255.255.255.255
              UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1400  Metric:1
              RX packets:3415 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2525 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:3
              RX bytes:3682328 (3.6 MB)  TX bytes:402315 (402.3 KB)

        wlan0 Link encap:Ethernet  HWaddr 4c:eb:42:06:a3:a6
              inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::4eeb:42ff:fe06:a3a6/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:72598 errors:0 dropped:0 overruns:0 frame:0
              TX packets:42300 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:76000532 (76.0 MB)  TX bytes:13919400 (13.9 MB)

    The question: basically, I would like to add a rule that forces this particular address (192.168.1.100) onto the VPN and not onto my local net.
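    As a starting point, a single host route through the tunnel should do it, since a /32 route is more specific than the 192.168.1.0/24 route on wlan0 and therefore wins (a minimal sketch; it assumes ppp0 is the VPN interface, as in the ifconfig output above):

        # Send just this one host via the VPN tunnel
        sudo route add -host 192.168.1.100 dev ppp0
        # Equivalent iproute2 form (use one or the other, not both)
        sudo ip route add 192.168.1.100/32 dev ppp0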

    Read the article
