Search Results

Search found 174 results on 7 pages for 'philip'.

Page 2 of 7

  • Wi-Fi is connected but internet not working?

    - by Philip
    I have a Lenovo X230 running Ubuntu 12.04 LTS. I've been using it just fine for a while, but all of a sudden the internet has stopped working. It's connected to the router just fine (I can see the antenna bars saying it's connected), but when I try to go to a webpage, it just loads forever. I've tried the fix listed here (http://ubuntuforums.org/showthread.php?t=1985079) but it hasn't done anything (the instructions are a little confusing, though). I'm booting Windows 7 right now and the internet works just fine when I'm on Windows, so I know it must be a problem with Ubuntu. I haven't touched any network settings before this problem happened, and I've been using it just fine for months, so I don't know why I'm getting this problem all of a sudden. Oh, and I haven't updated anything in a few weeks (you know, when the update manager pops up every so often listing all the things there's an update for), so I know it's not some update that broke it.
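
    A quick way to narrow this down (my suggestion, not part of the original post) is to test whether raw IP connectivity works while name resolution fails:

      # If this succeeds...
      ping -c 3 8.8.8.8
      # ...but this fails, the problem is DNS rather than the Wi-Fi link:
      ping -c 3 google.com
      # See which nameservers the connection actually received:
      cat /etc/resolv.conf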

    Read the article

  • Supporting early versions of Android

    - by Philip Sheard
    What policy do developers have when it comes to supporting earlier versions of Android? I still support Android 2.1 and above, but this means that I am unable to use features such as the action bar. Over 40% of my users are still running versions below 3.0, so I feel somewhat constrained about this. The problem is that 3.x was not very successful, so 2.3.x will be with us for some time. But all new devices will now be shipping with 4.x. I am wondering whether 4.x users are more likely to pay for an app, while most 2.3.x users are just looking.

    Read the article

  • What are the pros and cons of CoffeeScript?

    - by Philip
    Of course one big pro is the amount of syntactic sugar leading to shorter code in a lot of cases. On http://jashkenas.github.com/coffee-script/ there are impressive examples. On the other hand, I have doubts that these examples represent the code of complex real-world applications. In my code, for instance, I never add functions to bare objects but rather to their prototypes. Moreover, the prototype feature is hidden from the user, suggesting classical OOP rather than idiomatic JavaScript. The array comprehension example would probably look like this in my code:

      cubes = $.map(list, math.cube);  // which is 8 characters less, using jQuery...
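
    For comparison, the comprehension form of that example in CoffeeScript itself (using list and math.cube as above) is barely longer:

      cubes = (math.cube num for num in list)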

    Read the article

  • What does the Sys_PageIn() function do in Quake?

    - by Philip
    I've noticed that in the initialization process of the original Quake, the following function is called:

      volatile int sys_checksum;

      // **lots of code**

      void Sys_PageIn (void *ptr, int size)
      {
          byte *x;
          int  j, m, n;

          // touch all memory to make sure its there. The 16-page skip is to
          // keep Win 95 from thinking we're trying to page ourselves in (we are
          // doing that, of course, but there's no reason we shouldn't)
          x = (byte *)ptr;
          for (n=0 ; n<4 ; n++)
          {
              for (m=0 ; m<(size - 16 * 0x1000) ; m += 4)
              {
                  sys_checksum += *(int *)&x[m];
                  sys_checksum += *(int *)&x[m + 16 * 0x10000];
              }
          }
      }

    I think I'm just not familiar enough with paging to understand this function. The void *ptr passed to the function is a recently malloc()'d piece of memory that is size bytes big. This is the whole function; j is an unreferenced variable. My best guess is that the volatile int sys_checksum forces the system to physically read all of the space that was just malloc()'d, perhaps to ensure that these pages exist in physical memory. Is this right? And why would someone do this? Is it for some antiquated Win95 reason?
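
    For reference, here's a minimal sketch of the page-touching idea (my illustration, not Quake source): read one byte per 4 KB page so the OS must fault every page of the allocation into physical memory, accumulating into a volatile so the compiler can't optimize the reads away.

      #include <stddef.h>

      volatile unsigned char touch_sink;  /* volatile: the reads must actually happen */

      static void page_in(void *ptr, size_t size)
      {
          unsigned char *x = (unsigned char *)ptr;
          size_t m;
          for (m = 0; m < size; m += 4096)  /* one read per 4 KB page */
              touch_sink += x[m];
      }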

    Read the article

  • Losing 'post' requests sent to Pylons paster server

    - by Philip McDermott
    I'm sending POST requests to a Pylons server (served by paster serve), and if I send them with any frequency, many don't arrive at the server. One at a time is OK, but if I fire off a few (or more) within seconds, only a small number get dealt with. If I send with no POST data, or with GET, it works fine, but putting just one character of data in the POST fields causes massive losses. For example, of 200 requests sent, 2 will come back; of 100 sent more slowly, 10 will come back. I'm making the requests from inside a Qt application. This will work OK (no data):

      QString postFields = "";
      QNetworkRequest request(QUrl("http://server.com/endpoint"));
      QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    And this will result in only a fraction of the requests being dealt with:

      QString postFields = "a";  // any non-empty payload triggers the losses
      QNetworkRequest request(QUrl("http://server.com/endpoint"));
      QNetworkReply *reply = networkAccessManager->post(request, postFields.toAscii());

    I've played around with turning on use_threadpool and other options (threadpool_workers, threadpool_max_requests = 300), of which some combinations can alter the results slightly (best case 10 responses in 200). If I send similar requests to other (non-paster) servers, the replies come back OK, so I'm almost certain it's a paster serve config issue. Any help or advice greatly appreciated. Thanks, Philip
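
    For reference, the threadpool options mentioned above live in the [server:main] section of the paster .ini file; here is a minimal sketch (values illustrative, not a known fix):

      [server:main]
      use = egg:Paste#http
      host = 0.0.0.0
      port = 5000
      use_threadpool = true
      threadpool_workers = 10
      threadpool_max_requests = 300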

    Read the article

  • PHP | SQL syntax error when inserting array

    - by Philip
    Hi guys, I am having some trouble inserting an array into the SQL database. My error is as follows:

      Unable to add : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '06:45:23,i want to leave a comment)' at line 1

    My query var_dump is:

      string(136) "INSERT INTO news_comments (news_id,comment_by,comment_date,comment) VALUES (17263,Philip,2010-05-11 06:45:23,i want to leave a comment)"

    My question is how I can add an empty value for id, as it is the primary key, and not news_id. My insert function looks like this:

      function insertQuery($tbl, &$data) {
          global $mysqli;
          $_SESSION['errors'] = array();
          require_once '../config/mysqli.php';
          $query = "INSERT INTO $tbl (".implode(',',array_keys($data)).") VALUES (".implode(',',array_values($data)).")";
          var_dump($query);
          if ($result = mysqli_query($mysqli, $query)) {
              //$id = mysqli_insert_id($mysqli);
              print 'Very well done sir!';
          } else {
              array_push($_SESSION['errors'], 'Unable to add : ' . mysqli_error($mysqli));
          }
      }

    Note: arrays are not my strong point, so I may be using them incorrectly!
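
    Two separate things are going on here: the syntax error is caused by unquoted string values (MySQL chokes on the space in 2010-05-11 06:45:23), and the auto-increment id needs no value at all as long as it is omitted from the column list. A sketch of the insert function with each value escaped and quoted (my illustration built on the question's own function, assuming all values are strings):

      function insertQuery($tbl, &$data) {
          global $mysqli;
          $_SESSION['errors'] = array();
          require_once '../config/mysqli.php';

          // Escape and single-quote every value so string literals survive intact.
          $values = array();
          foreach ($data as $value) {
              $values[] = "'" . mysqli_real_escape_string($mysqli, $value) . "'";
          }

          $query = "INSERT INTO $tbl (" . implode(',', array_keys($data)) . ")"
                 . " VALUES (" . implode(',', $values) . ")";

          if (mysqli_query($mysqli, $query)) {
              print 'Very well done sir!';
          } else {
              array_push($_SESSION['errors'], 'Unable to add : ' . mysqli_error($mysqli));
          }
      }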

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10 KB in size, though roughly 1 in 100 rows will be over 1 MB but under 4 MB. So out of the 1.6 million rows, about 16,000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike.

    Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, checkpoints occur when the transaction log becomes 70% full and the simple recovery model is in use. This has been enlightening, and I've definitely learned something!
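
    For anyone reproducing this diagnosis, transaction log usage can be watched directly with standard T-SQL (my suggestion, not from the original post); under the simple recovery model, a checkpoint at roughly 70% log usage lines up with the spikes described above:

      DBCC SQLPERF(LOGSPACE);  -- reports log size and percent-used for every database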

    Read the article

  • How do I make kdump use a permissible range of memory for the crash kernel?

    - by Philip Durbin
    I've read the Red Hat Knowledgebase article "How do I configure kexec/kdump on Red Hat Enterprise Linux 5?" at http://kbase.redhat.com/faq/docs/DOC-6039 and http://prefetch.net/blog/index.php/2009/07/06/using-kdump-to-get-core-files-on-fedora-and-centos-hosts/ The crashkernel=128M@16M kernel parameter works fine for me in a RHEL 6.0 beta VM, but not on the RHEL 5.5 hosts I've tried. dmesg shows me:

      Memory for crash kernel (0x0 to 0x0) notwithin permissible range
      disabling kdump

    Here's the line from grub.conf:

      kernel /vmlinuz-2.6.18-194.3.1.el5 ro root=/dev/md2 console=ttyS0,115200 panic=15 rhgb quiet crashkernel=128M@16M

    How do I make kdump use a permissible range of memory for the crash kernel?
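
    A quick way to confirm whether any reservation took effect after boot (standard checks, my suggestion rather than part of the article):

      grep -i crash /proc/iomem   # a "Crash kernel" region appears when the reservation succeeds
      dmesg | grep -i crash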

    Read the article

  • Is there a blog tool with support for tagging posts, only displaying posts with certain combinations

    - by Philip
    What I want:
    - A blog. I can tag posts, e.g. "A" or "B" or "all," and then you can either 1) click to only view posts tagged "A" or "all," or 2) even more ideally, I can set it so you automatically see posts tagged "A" or "all" when you log in.
    - LaTeX support: I can type LaTeX in the editor and it will show the math properly.
    - No anonymous anything: you must sign up and be logged in to view and comment.

    Not as important, but convenient:
    - Admin controls, e.g. detailed statistics, see who posts or views what when, control who posts / views, etc.

    Hosting: Ideally, there's some software I can install on "my" own server. But if we can't host it, it'd still be good to find some free (or maybe even paid) service elsewhere that would host the blog if it provided those tools.

    Any thoughts? I have no experience with this. Thanks!

    Read the article

  • Where should CentOS users get /usr/share/virtio-win/drivers for virt-v2v?

    - by Philip Durbin
    I need to migrate a number of virtual machines from VMware ESX to CentOS 6 KVM hypervisors. Ultimately, I wrote an RPM spec file that solved my problem at https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec but I'm not sure if there's another RPM in base CentOS or EPEL (something standard) I should be using instead. Originally, I was getting this "No root device found in this operating system image" error when attempting to migrate a Windows 2008 VM:

      [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
        -os transferimages --network default my-vm
      virt-v2v: No root device found in this operating system image.

    I solved this with a simple yum install libguestfs-winsupport, since the docs say:

      If you attempt to convert a virtual machine using NTFS without the libguestfs-winsupport package installed, the conversion will fail.

    Next I got an error about missing drivers for Windows 2008:

      [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
        -os transferimages --network default my-vm
      my-vm_my-vm: 100% [====================================]D
      virt-v2v: Installation failed because the following files referenced in the configuration file are required, but missing: /usr/share/virtio-win/drivers/amd64/Win2008

    I resolved this by grabbing an ISO from Fedora at http://alt.fedoraproject.org/pub/alt/virtio-win/latest/ as recommended by http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers and building an RPM from it with this spec file: https://github.com/fasrc/virtio-win/blob/master/virtio-win.spec Now virt-v2v exits without error:

      [root@kvm01b ~]# virt-v2v -ic 'esx://my-vmware-hypervisor.example.com/' \
        -os transferimages --network default my-vm
      my-vm_my-vm: 100% [====================================]D
      virt-v2v: my-vm configured with virtio drivers.
      [root@kvm01b ~]#

    Now, my question is: rather than the virtio-win RPM from the spec file I wrote, is there some other, more standard RPM in base CentOS or EPEL that will resolve the error above? Here's a bit more detail about my setup:

      [root@kvm01b ~]# cat /etc/redhat-release
      CentOS release 6.2 (Final)
      [root@kvm01b ~]# rpm -q virt-v2v
      virt-v2v-0.8.3-5.el6.x86_64

    See also Bug 605334 – VirtIO driver for windows does not show specific OS: Windows 7, Windows 2003

    Read the article

  • What are the implications of expanding an internal subnet mask?

    - by Philip
    Our network is currently working on a 192.168.0.x subnet, all controlled through DHCP, except for the few main servers, which have hard-configured IP address settings. What would I kill if I changed the DHCP-published subnet mask from 255.255.255.0 to 255.255.0.0? The reason for doing this is not that we have a huge sudden influx of machines, but that I'd like to start partitioning specific devices into specific IP ranges (to be neat and tidy). For what it's worth, I don't plan on changing the allocated DHCP address range, but rather want to move some of the reserved and excluded DHCP addresses out of the address pool, e.g. printers will be 192.168.2.x. I will obviously need to change the subnet mask manually on my manually configured devices.
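
    To make the change concrete (plain CIDR arithmetic, not from the original question): widening the mask turns the existing /24 into a single /16, so every 192.168.x.y host lands in one broadcast domain:

      255.255.255.0 = /24  ->  192.168.0.1 - 192.168.0.254    (one subnet per third octet)
      255.255.0.0   = /16  ->  192.168.0.1 - 192.168.255.254  (e.g. printers at 192.168.2.x stay local)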

    Read the article

  • EC2 Filesystem / Files stored on the wrong partition after launching new instance from AMI

    - by Philip Isaacs
    Today I set up a new EC2 instance from an AMI I created from an older EC2 instance. When I launched the new instance, I took the AMI that was on a small instance and launched it as a medium instance. From what I can tell this is pretty standard stuff. But here's the strange part. According to AWS, these are the differences:

      Small Instance (Default): 1.7 GB of memory, 1 EC2 Compute Unit (1 virtual core with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
      Medium Instance: 3.75 GB of memory, 2 EC2 Compute Units (1 virtual core with 2 EC2 Compute Units), 410 GB of local instance storage, 32-bit or 64-bit platform

    Okay, now here's where I'm having an issue. When I log into the new, bigger instance, it still reports only having 1.7 GB of RAM. The other strange part is that all my old partitions are still there in the same configurations. I see a new, larger partition /mnt, which is essentially empty:

      Filesystem  Size  Used Avail Use% Mounted on
      /dev/sda1   7.9G  5.9G  1.6G  79% /
      none        846M  120K  846M   1% /dev
      none        879M     0  879M   0% /dev/shm
      none        879M   76K  878M   1% /var/run
      none        879M     0  879M   0% /var/lock
      none        879M     0  879M   0% /lib/init/rw
      /dev/sda2   335G  195M  318G   1% /mnt
      /dev/sdf     16G  9.9G  5.1G  67% /var2

    This EC2 instance is a web server, and I was serving files off the /var2 directory, but for some reason the instance is storing everything on /. Okay, here's what I'd like to do: move all my website files to /mnt and have the web server point to that. Any suggestions? If it helps, here is what my mount table looks like as well:

      root@myserver:/var# mount -l
      /dev/sda1 on / type ext3 (rw) [cloudimg-rootfs]
      proc on /proc type proc (rw,noexec,nosuid,nodev)
      none on /sys type sysfs (rw,noexec,nosuid,nodev)
      none on /sys/kernel/debug type debugfs (rw)
      none on /sys/kernel/security type securityfs (rw)
      none on /dev type devtmpfs (rw,mode=0755)
      none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
      none on /dev/shm type tmpfs (rw,nosuid,nodev)
      none on /var/run type tmpfs (rw,nosuid,mode=0755)
      none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
      none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
      /dev/sda2 on /mnt type ext3 (rw)
      /dev/sdf on /var2 type ext4 (rw,noatime)

    I hope this question makes sense. Basically, I want my old files on this new partition. Thanks in advance.
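
    A sketch of the move itself (paths from the question; the commands and the /mnt/sites target are my illustration, assuming the site can go down briefly and the web server's document root is configurable):

      service apache2 stop            # or whichever server fronts the site
      rsync -a /var2/ /mnt/sites/     # -a preserves ownership, permissions, and timestamps
      # point the document root (and any fstab entry) at /mnt/sites, then:
      service apache2 start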

    Read the article

  • What Is the Keyboard Shortcut for Moving to Last Message in Mac OS X Mail.app?

    - by Philip Durbin
    I'm on Mac OS X 10.5.8 (Leopard). In Mail, I have the first message in my Inbox selected and I'm trying to navigate to the last message using my keyboard. In Thunderbird, I just hit the End key, which for me is "Function-right arrow" because I'm on a MacBook Pro. In Mail, with the first message selected, if I hit "Function-right arrow" (i.e. End), the scroll bar moves down, allowing me to see the messages at the bottom of the list, but the first message at the top of the list is still selected. What I want is for the last message to be selected. I've tried lots of key combinations and searched for the answer but haven't been able to find it. Please help. I posted this originally at discussions.apple.com but the only advice I received was to file a bug with Apple, which I did.

    Read the article

  • nginx + @font-face + Firefox / IE9

    - by Philip Seyfi
    Just transferred my site from shared hosting to Linode's VPS, and I'm also completely new to nginx, so please don't be harsh if I missed something evident ^^ I've got my WordPress site running pretty well on nginx & MaxCDN, but my @font-face fonts (served from cdn.domain.com) stopped working in IE9 and FF ("@font-face failed cross-origin request. Resource access is restricted."). I've googled for hours and tried adding all of the following to my config files:

      location ~* ^.+\.(eot|otf|ttf|woff)$ {
          add_header Access-Control-Allow-Origin *;
      }

      location ^/fonts/ {
          add_header Access-Control-Allow-Origin *;
      }

      location / {
          if ($request_filename ~* ^.*?/([^/]*?)$) {
              set $filename $1;
          }
          if ($filename ~* ^.*?\.(eot)|(otf)|(ttf)|(woff)$) {
              add_header 'Access-Control-Allow-Origin' '*';
          }
      }

    With all of the following combinations:

      add_header Access-Control-Allow-Origin *;
      add_header 'Access-Control-Allow-Origin' *;
      add_header Access-Control-Allow-Origin '*';
      add_header 'Access-Control-Allow-Origin' '*';

    Of course, I've restarted nginx after every change. The headers just don't get sent at all, no matter what I do. I have the default Ubuntu nginx build from apt-get, which should include the headers module by default... How do I check which modules are installed, or what else could be causing this error?
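
    Whichever location block ends up matching, it's worth verifying what actually comes back from both the origin and the CDN, since the CDN caches whatever headers the origin sent first (the font path here is hypothetical):

      curl -I http://domain.com/wp-content/fonts/myfont.woff
      curl -I http://cdn.domain.com/wp-content/fonts/myfont.woff
      # look for: Access-Control-Allow-Origin: *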

    Read the article

  • Unable to click on tabs in Firefox unless I press Command-Q

    - by Philip
    Why does clicking on tabs in Firefox sometimes fail until I hit Cmd-Q, and then why does that not quit but instead make clicking on the tabs work, and how do I stop that behavior? I'm running Firefox 3.5.5 on Mac OS X 10.5, and this has been happening for a while (with previous versions of FF as well). I can't forcibly reproduce the behavior, but every now and then (every few days?) I just can't click on tabs or on the x's to close the tabs. I can still ctrl-tab between tabs, though. But if I press Cmd-Q, instead of quitting, Firefox seems to seize up for a second, and then I can click on tabs and click to close them just fine. No clue why this is happening or how to stop it. And I do have tons of extensions installed, so it's plausible one of them is the problem. Thanks.

    Read the article

  • Sending cron output to a file with a timestamp in its name

    - by Philip Morton
    I have a crontab like this on a LAMP setup:

      0 0 * * * /some/path/to/a/file.php > $HOME/cron.log 2>&1

    This writes the output of the file to cron.log. However, when it runs again, it overwrites whatever was previously in the file. How can I get cron to output to a file with a timestamp in its filename? An example filename would be something like this: 2010-02-26-000000-cron.log. I don't really care about the format, as long as it has a timestamp of some kind. Thanks in advance.
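
    A sketch of one common approach: let the shell substitute date(1) output into the filename, remembering that % is special in crontab lines and has to be backslash-escaped:

      0 0 * * * /some/path/to/a/file.php > $HOME/$(date +\%Y-\%m-\%d-\%H\%M\%S)-cron.log 2>&1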

    Read the article

  • How do I get rid of phantom bookmarks in Google Chrome on Mac OS X 10.6?

    - by Philip
    I'm running Chrome 5.0.375.38 on OS X 10.6 Snow Leopard and although I'm positive that when I installed it I told it NOT to import my Firefox bookmarks, it nevertheless still accessed my OLD Firefox bookmarks (including some that I deleted) when I used the location bar. HOWEVER, when I opened the bookmarks manager, it said that I have no bookmarks whatsoever. Seeking to solve this problem, I installed XMarks on both FF and Chrome, and forced Chrome to download the server bookmarks. Now Chrome lists all my current FF bookmarks, but STILL sees the old, phantom bookmarks from when I first installed Chrome in the location bar, even though when I search for these same bookmarks in the bookmarks manager they don't show up. Aargh! Any ideas? Even if there's some way to force-kill-wipeout-clean-erase ALL my Chrome bookmarks that's fine as long as it kills the phantom ones b/c I can still overwrite with XMarks. Thanks!

    Read the article

  • How do I make Linux recognize a new SATA /dev/sda drive I hot swapped in without rebooting?

    - by Philip Durbin
    Hot swapping out a failed SATA /dev/sda drive worked fine, but when I went to swap in a new drive, it wasn't recognized:

      [root@fs-2 ~]# tail -18 /var/log/messages
      May 5 16:54:35 fs-2 kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
      May 5 16:54:35 fs-2 kernel: ata1: SError: { PHYRdyChg CommWake }
      May 5 16:54:40 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:54:45 fs-2 kernel: ata1: device not ready (errno=-16), forcing hardreset
      May 5 16:54:45 fs-2 kernel: ata1: soft resetting link
      May 5 16:54:50 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:54:55 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:54:55 fs-2 kernel: ata1: soft resetting link
      May 5 16:55:00 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:55:05 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:55:05 fs-2 kernel: ata1: soft resetting link
      May 5 16:55:10 fs-2 kernel: ata1: link is slow to respond, please be patient (ready=0)
      May 5 16:55:40 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:55:40 fs-2 kernel: ata1: limiting SATA link speed to 1.5 Gbps
      May 5 16:55:40 fs-2 kernel: ata1: soft resetting link
      May 5 16:55:45 fs-2 kernel: ata1: SRST failed (errno=-16)
      May 5 16:55:45 fs-2 kernel: ata1: reset failed, giving up
      May 5 16:55:45 fs-2 kernel: ata1: EH complete

    I tried a couple of things to make the server find the new /dev/sda, such as rescan-scsi-bus.sh, but they didn't work:

      [root@fs-2 ~]# echo "---" > /sys/class/scsi_host/host0/scan
      -bash: echo: write error: Invalid argument
      [root@fs-2 ~]#
      [root@fs-2 ~]# /root/rescan-scsi-bus.sh -l
      [snip]
      0 new device(s) found.
      0 device(s) removed.
      [root@fs-2 ~]#
      [root@fs-2 ~]# ls /dev/sda
      ls: /dev/sda: No such file or directory

    I ended up rebooting the server. /dev/sda was recognized, I fixed the software RAID, and everything is fine now. But for next time, how can I make Linux recognize a new SATA drive I have hot swapped in without rebooting? The operating system in question is RHEL 5.3:

      [root@fs-2 ~]# cat /etc/redhat-release
      Red Hat Enterprise Linux Server release 5.3 (Tikanga)

    The hard drive is a Seagate Barracuda ES.2 SATA 3.0-Gb/s 500-GB, model ST3500320NS.
    Here is the lspci output:

      [root@fs-2 ~]# lspci
      00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
      00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
      00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
      00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
      00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
      00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
      00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
      00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
      00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
      00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
      00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
      00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
      00:0a.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0d.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0e.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
      00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
      00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
      00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
      00:19.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
      00:19.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
      00:19.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
      00:19.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
      03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
      04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
      04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)

    Update: In perhaps a dozen cases, we've been forced to reboot servers because hot swap hasn't "just worked." Thanks for the answers suggesting I look more into the SATA controller. I've included the lspci output for the problematic system above (hostname: fs-2). I could still use some help understanding what exactly isn't supported hardware-wise in terms of hot swap for that system. Please let me know what other output besides lspci might be useful. The good news is that hot swap "just worked" today on one of our servers (hostname: www-1), which is very rare for us.
    Here is the lspci output:

      [root@www-1 ~]# lspci
      00:00.0 RAM memory: nVidia Corporation MCP55 Memory Controller (rev a2)
      00:01.0 ISA bridge: nVidia Corporation MCP55 LPC Bridge (rev a3)
      00:01.1 SMBus: nVidia Corporation MCP55 SMBus (rev a3)
      00:02.0 USB Controller: nVidia Corporation MCP55 USB Controller (rev a1)
      00:02.1 USB Controller: nVidia Corporation MCP55 USB Controller (rev a2)
      00:04.0 IDE interface: nVidia Corporation MCP55 IDE (rev a1)
      00:05.0 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
      00:05.1 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
      00:05.2 IDE interface: nVidia Corporation MCP55 SATA Controller (rev a3)
      00:06.0 PCI bridge: nVidia Corporation MCP55 PCI bridge (rev a2)
      00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
      00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a3)
      00:0b.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0c.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:0f.0 PCI bridge: nVidia Corporation MCP55 PCI Express bridge (rev a3)
      00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
      00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
      00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
      00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
      00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
      00:19.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration
      00:19.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map
      00:19.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller
      00:19.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control
      00:19.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control
      03:00.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02)
      04:00.0 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
      04:00.1 PCI bridge: NEC Corporation uPD720400 PCI Express - PCI/PCI-X Bridge (rev 06)
      09:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1064ET PCI-Express Fusion-MPT SAS (rev 04)
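
    One detail worth noting in the failed rescan above: the SCSI-host wildcard scan string is three dashes separated by spaces; "---" without spaces is rejected with exactly the "Invalid argument" write error shown. A sketch:

      echo "- - -" > /sys/class/scsi_host/host0/scan   # wildcards for channel, target, and LUN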

    Read the article

  • Ubuntu with Netatalk and Samba TimeMachine can't connect

    - by Philip
    I installed netatalk on my Ubuntu server a few weeks ago and configured it so that I could use Time Machine from my Mac to back up to the server instead of an external hard drive. It worked really well until yesterday, when I installed Samba to be able to share certain folders on my server with my Mac. Now I receive an error message: "There are no shares available or you are not allowed to access them on the server. Please contact your system administrator to resolve the problem." From what I understand, the problem is on the server and not on my Mac. I have tried restarting the computer and re-adding the Time Machine share ("afp://...@...") without adding any of the folders Samba is sharing. Is there a problem running them both at the same time? Do I need to configure Samba so that it doesn't reject AFP? I'm pretty new at this...

    Read the article

  • Why is my Mac beeping at me in a Sprint-Nextel-PTT kind of way?

    - by Philip
    I'm really stumped about this one. Before on OS X 10.5 and now on OS X 10.6, my Mac occasionally beeps at me. It's a split-second "bee-bee-beep" that sounds vaguely similar to the push-to-talk beep you hear in Sprint/Nextel PTT commercials. I haven't been able to isolate what's running when it happens or what happens before it happens. Possible culprits are Quicksilver, Firefox, Dropbox, Evernote helper, TrueCrypt, and Wally. Any thoughts? Thanks.

    Read the article

  • Storage (EBS) attached to my EC2 instance reporting files on two devices at once

    - by Philip Isaacs
    I have an EC2 instance with two attached EBS drives. One drive, /dev/sda1, is mounted on /; the second, /dev/sda2, is mounted on /var2. So here's what's strange: whenever I add files to /var2, it also reports that the / device is filling up, as if they were the same device. It's so strange. If I save a 10 MB file to a directory on /var2, both / and /var2 use up 10 MB of space. Is this bizarre?
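
    A quick way to confirm whether /var2 is really a separate filesystem or just a directory on the root device (standard commands, not from the original post):

      mount | grep var2   # should show /dev/sda2 on /var2 if it is truly mounted
      df -h / /var2       # identical size/used numbers would suggest a single underlying device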

    Read the article

  • Horribly performing RAID

    - by Philip
    I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs, and 22 SATA disks for the actual data, striped together as a RAID 10 using a MegaRAID SAS 9280-4i4e with this configuration: http://pastebin.com/2xj4401J Connected to this cluster are a few other servers with the native client, running nginx to serve files stored on it on the order of 3-10 MB. Right now a storage server has an outgoing bandwidth of 300 Mbit/s, and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the I/O latency skyrockets and no access to the RAID is possible for 10 seconds. The filesystem used is XFS, and it has been tuned to match the RAID stripe size. Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID 10 should deliver way more throughput.
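
    Per-disk statistics are usually the quickest way to see whether a single member is dragging the whole array down; a sketch using iostat from the sysstat package:

      iostat -x 1   # watch %util and await per device; one outlier points to a failing disk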

    Read the article
