Search Results

Search found 8185 results on 328 pages for 'live meetings'.


  • Nagios check_host_alive and check_ping not showing host as down

    - by Kyle
    I am using the check_host_alive command to send 5 packets every minute to all my routers at remote locations. Today I received a notification from the AT&T Global Client Support Center that a router was down (their notices can take 5-30 minutes to go out), but I never received a notice from Nagios. I went into Nagios and it was showing the host as alive with a latency of 0 ms. This tells me it is treating the automated "TTL expired in transit" response from my router in the data center as a reply from the remote router. Is there any way to tell Nagios to check where the reply is coming from? I feel like other people must have had this issue... I tested it with the check_ping command and it produced the same results. I have the command defined with %hostname% and the proper IP in the host definition, and it works fine for telling me when latency is high. Any ideas are welcome; I have already exercised my Google skills with no results.

    EDIT:

      root@IM-UBTU:/# /usr/local/nagios/libexec/check_ping -H 192.168.250.1 -w 100.0,10% -c 200.0,20% -vvv
      CMD: /bin/ping -n -U -w 10 -c 5 192.168.250.1
      Output: PING 192.168.250.1 (192.168.250.1) 56(84) bytes of data.
      Output: From 10.69.10.2 icmp_seq=1 Time to live exceeded

    It knows something is wrong, so why doesn't it give me a warning?
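
    One hedged workaround sketch (not from the original post): wrap the ping in a small plugin that treats "Time to live exceeded" replies from an intermediate hop as CRITICAL, and fall back to the stock check_ping otherwise. The plugin name, paths and thresholds below are assumptions based on a default Nagios layout.

      #!/bin/bash
      # check_ping_strict - assumed wrapper: fail hard on TTL-exceeded replies
      HOST="$1"
      OUT=$(/bin/ping -n -c 5 -w 10 "$HOST" 2>&1)
      if echo "$OUT" | grep -q "Time to live exceeded"; then
          echo "CRITICAL - TTL exceeded replies from an intermediate hop for $HOST"
          exit 2
      fi
      # Otherwise defer to the stock plugin for latency/loss thresholds
      exec /usr/local/nagios/libexec/check_ping -H "$HOST" -w 100.0,10% -c 200.0,20%

    A matching (assumed) command definition would then point the host check at the wrapper:

      define command {
          command_name  check_ping_strict
          command_line  /usr/local/nagios/libexec/check_ping_strict $HOSTADDRESS$
      }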

    Read the article

  • Apache directory authorization bug (clicking cancel gives access to partial content)

    - by s4uadmin
    I have a minor problem (the site is not high priority) but still a very interesting one. I have an Apache document root, "/var/www/", in which other sites live, and foo.example.com points to "/var/www/foo-example" (a WordPress site). When you go to foo.example.com you are prompted for credentials; if you hit cancel, you get the access-denied page, as expected. But when you go to the server's IP directly (which serves the default index page) and hit cancel at the credentials prompt, it just keeps showing the login dialog, and after pressing cancel a few more times it serves a (perhaps cached) bare HTML version of the page. How do I prevent this from happening? Perhaps this is a bug... Even if I blocked access to the root directory, going to ip/foo-example would still do this. And I want to keep all the directories within the www directory, or at least all in the same place. Thanks.

    PS: here is my configuration:

      <VirtualHost *:80>
          DocumentRoot /var/www/wp-xxxxxxx/
          ServerName beta.xxxxxxxxx.nl
          <Directory "/var/www/wp-xxxxxxxxx/">
              Options +Indexes
              AuthName "xxxxxxxx Beta Site"
              AuthType Basic
              require valid-user
              Satisfy all
              AuthBasicProvider file
              AuthUserFile /var/www/wp-xxxxxxx/.htxxxxxxxxx
              order deny,allow
              allow from all
          </Directory>
          ServerAdmin [email protected]
          ServerAlias beta.xxxxxxx.nl
      </VirtualHost>
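
    A hedged sketch of one mitigation (an assumption, not part of the original question): declare a minimal catch-all default VirtualHost first, so requests made directly to the server's IP never fall through to an unprotected document root. The DocumentRoot path and ServerName below are placeholders, and the directives follow the Apache 2.2 syntax used above.

      <VirtualHost *:80>
          # Catch-all for requests to the bare IP or unknown hostnames
          ServerName default.invalid
          DocumentRoot /var/www/default-empty
          <Directory "/var/www/default-empty">
              Order allow,deny
              Deny from all
          </Directory>
      </VirtualHost>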

    Read the article

  • Create a Wi-Fi hotspot in a place where authentication is required

    - by SoftTimur
    I live in a residence where Internet is provided via cable. Once the computer is connected to the cable, launching a browser triggers an authentication page; I enter a username and password, and then the Internet connection works. With a gateway (e.g. the Wireless Cable Voice Gateway Model CBVG834G) and 2 cables, two PCs can connect to the Internet on my account at the same time. Now the question: I don't like the cable and would like to create a Wi-Fi hotspot instead. It seems feasible with the same gateway. According to the instructions on page 2-4 of the manual:

      Enter http://192.168.0.1 in the address field of your Internet browser.
      Log in to the gateway with either of the default user names, MSO or admin...

    However, while the Internet connection via cable and the gateway works fine (e.g. Google loads), opening 192.168.0.1 oddly gives me an error in the browser. Does anyone know what happened? Is it due to the authentication required by my residence? Is there any other way to build a Wi-Fi hotspot?

    PS: My system is Mac OS X.
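
    A hedged troubleshooting sketch (an assumption, not from the original post): if the gateway is not actually the device handing out your IP address, its admin page will not be reachable at 192.168.0.1. On Mac OS X the gateway the machine is really using can be checked from Terminal; the interface name en0 below is an example.

      # Show the default gateway currently in use
      route -n get default | grep gateway

      # Show the router address handed out by DHCP on the wired interface
      ipconfig getpacket en0 | grep -i router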

    Read the article

  • 10GE network: Is it still deadly expensive? Any options?

    - by BarsMonster
    Hi! I am building a home cluster where I'm going to have about 16 nodes which can live with 1G ports, but I really want 10GE on the file server and the central node. It's all local, so no need for cables longer than 3-5 m. And of course I want to spend as little money as possible (not more than the whole cluster costs) :-) What are my options?

    1) The legacy solution is to take some 24-48 port 1GE switch and connect the file/central nodes via 4-8 aggregated links. This will work, I guess, and the cost is very acceptable, but I am not sure if it's OK to use that many aggregated links. And of course it would be hard to double the bandwidth when needed... :-D

    2) A switch with several 10GE uplink 'ports'. As far as I can see, they all require modules which cost about $1000, so I would need four 10G modules and two 10GE cards... Smells like way more than $5000...

    3) Connect the file and central node via two 10G cards directly, and put four quad-port 1GE NICs in the file server. I save on two 10G modules and a switch, and the file server will have to do packet routing, but it's still going to have a lot of CPU left :-)

    4) Any other options? InfiniBand?

    5) Do Myrinet adapters work fine? I guess there are no cheaper options?

    6) Hmm... Scrap the file server, put it all on the central node and provide a dedicated 1GE port for each of the nodes... This is sad...
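
    A hedged sketch of the aggregation in option 1 (assuming a Linux file server, the classic bonding driver and a switch that supports 802.3ad/LACP; interface names and the address are placeholders):

      # Load the bonding driver in LACP mode; miimon polls link state every 100 ms
      modprobe bonding mode=802.3ad miimon=100
      # Give the aggregate interface an address and bring it up
      ifconfig bond0 10.0.0.10 netmask 255.255.255.0 up
      # Enslave four gigabit ports to the bond
      ifenslave bond0 eth1 eth2 eth3 eth4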

    Read the article

  • How to slow down audio files?

    - by verve
    I need a program (with an easy learning curve) that lets me slow down mp3 (at the very least this format) music and audiobook files. The software needs to be able to slow the audio down to a chosen speed without altering the pitch or the accuracy of the words being pronounced; perhaps something like the "SlowSound" feature in the language software "Byki Deluxe"? I'm learning a foreign language (German) and I find the speed at which the books are read too fast. I need to hear the pronunciation of each word much more clearly to learn how to pronounce the words myself. Is there such a product out there? I know you can slow down playback in VLC, but it sounds really artificial. It doesn't have to be freeware; ease of use and quality are more important to me. Win 7 64-bit. IE 8.

    Edit: Is there any paid software like Audacity? Only the beta works in Win 7. Also, I'd prefer to be able to slow down a file live rather than having to create a new file to use the feature.
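
    A hedged command-line alternative (not mentioned in the original question, and it creates a new file rather than slowing playback live): both SoX and ffmpeg can time-stretch audio while keeping the pitch. Filenames and the 0.75 speed factor below are examples.

      # SoX: write a copy at 75% speed with the original pitch
      sox input.mp3 output.mp3 tempo 0.75

      # ffmpeg: same idea with the atempo filter (valid factors are 0.5-2.0)
      ffmpeg -i input.mp3 -filter:a "atempo=0.75" output.mp3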

    Read the article

  • EC2 instances keep becoming inaccessible via SSH; can I use an Elastic Load Balancer to check SSH connectivity?

    - by Rick
    This is mainly an issue for my development EC2 server, as the instance keeps becoming inaccessible via SSH. It happened yesterday, so I killed that one and started a new one, and it happened again later today. The server still works and my web application is accessible in a browser, but whenever I try to connect via SSH I get a "Permission denied (publickey)" error in my terminal. I am 100% sure I am doing nothing wrong: I can create a new instance of the exact same AMI (a personal custom AMI), change absolutely nothing, including using the same .pem key, and then SSH into that new instance using the exact same command as before (just changing the IP address). I understand that EC2 can have issues, but having this happen every day seems a bit odd. I am using an m2.xlarge instance, so I don't know if these tend to be unstable; in the past I have used a small instance and had it running for months with no problems, which is why I find this so odd.

    I am looking into load balancing, but it seems the only health checks offered are for HTTP or TCP, so I'm not sure if I can make it monitor SSH connectivity. This is important for development, as I may push 1-2 new versions of an application a day and use SSH to do this. I have a designer who needs the app to always be accessible as he works with the front-end files and tests output against the live application. Anyway, any advice / info is appreciated.
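
    On the health-check point, a hedged sketch: a TCP health check can target port 22, which at least verifies that sshd is accepting connections (it will not catch key or authentication problems). The load balancer name, interval and thresholds below are placeholders, and a classic ELB with the standard AWS CLI is assumed.

      aws elb configure-health-check \
          --load-balancer-name dev-elb \
          --health-check Target=TCP:22,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2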

    Read the article

  • All application passwords lost on Windows 7

    - by Rynardt
    A couple of days ago I changed my Windows 7 login password. My laptop is on my company's domain, so password changes are done over the internal network. Since changing the password I have noticed that all my saved Chrome passwords are missing. Skype, Windows Live, Internet Explorer and Outlook have also lost their saved passwords. I guess there could be more applications with lost passwords, but I have not opened them yet. This makes me think that most applications save their passwords to a general password vault on the Windows system, and that this vault somehow got corrupted when I changed my domain login password. Does anyone have any idea how to fix this and prevent it from happening again?

    EDIT (more info): I do development work at the office, so most of the time I bypass the firewall and connect directly to the internet gateway. Now and then I connect to the company Wi-Fi network to do printing and access files on a NAS, so by default my laptop does not connect to the Wi-Fi hotspot. On this occasion, to update the password, I had to connect to the Wi-Fi. So, referring to the comment by OmnipotentEntity below: could this have happened when the system rebooted without a connection to the network, since the laptop does not auto-connect to the Wi-Fi hotspot?

    Read the article

  • A number of routers in a small community lock up and require a reboot

    - by Anthony Hiscox
    I live in a small town which has one primary ISP. Lately I have noticed that a number of wireless routers have been locking up and requiring a reboot before allowing any connections. This has affected two of my routers, my work router, and a few others. In all cases wired connections continued to function as usual. Often wireless clients can see the SSID but simply won't connect. I can only think of a few possibilities and was hoping someone here might be able to point me in the right direction:

    1. Our ISP is well known to be flaky; something they are doing is causing this. What that might be I have no clue, as it seems to affect the wireless only.
    2. There's a power issue in town. Given our remote location and reputation for bad electrical service, this seems reasonable. Only one router was plugged into a UPS, and I'm not sure of its quality.
    3. There is some bug in all the different firmware for every one of these routers (all different). That doesn't seem reasonable, unless:
    4. it's an unknown (or known) exploit or DoS of some sort being launched by a massive team of ninjas hell-bent on forcing us all to be tethered to our walls by Ethernet cables, or:
    5. it's just been a coincidence and I'm just paranoid (this has some weight, I mean, read 4 again).

    Has anyone else experienced similar issues and have some tips?

    Read the article

  • PHP 5.3.2 Upgrade on Windows

    - by mcondiff
    I have a development box running Windows XP, Apache 2.2.15, MySQL and PHP 5.2.6. About a month ago I updated Apache to the latest version and it went swimmingly. I am not having the same success upgrading PHP to the latest version. I backed up my PHP directory and then deleted it, then used the Windows installer for PHP 5.3.2, installed as an Apache 2.2.x module. I can get "Hello World" and phpinfo() to come up, but I cannot get MySQL to connect. I have the extension un-commented in php.ini, and I shouldn't really have to touch the Apache httpd.conf file since I didn't change the PHP directory. Not sure what I'm doing wrong here. I have to get this right and then upgrade the live server too, so I want to get this down pat. I've tried to use installation guides on the web with no luck. Any info pertaining to this problem would be great. I'm also afraid that other PHP modules may not load, but I cannot really tell anything beyond MySQL not working.
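
    A hedged checklist sketch (paths are assumptions for a default Windows PHP 5.3 install): confirm that extension_dir points at the new ext folder and that the MySQL extensions are enabled, then verify from a command prompt which php.ini is loaded and which modules PHP sees.

      ; php.ini - example settings, adjust the path to the actual install
      extension_dir = "C:\Program Files\PHP\ext"
      extension=php_mysql.dll
      extension=php_mysqli.dll

      rem From a command prompt: show which php.ini is read, then list loaded modules
      php --ini
      php -m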

    Read the article

  • Can't Install Win2k8 On KVM - Classic 0x80070013 error

    - by javano
    I am trying to install Win2k8 Std as a KVM guest on Debian Squeeze. As you can see from these screenshots:

      No drives are detected (I have a blank 20GB image for testing) - screenshot1
      I am using this driver CD - screenshot2
      I have signed the Win7 driver (I assume this was the most appropriate one?) - screenshot3
      I can now see an unpartitioned drive - screenshot4
      But I can't create a new partition on it; I get the error code 0x80070013 - screenshot5

    I have had this error code before, but only on a physical server. If I remember correctly it was complaining because the disks were partitioned as GPT (it was a server being re-purposed), so repartitioning with an MS-DOS table fixed that. This is a blank disk image, though. What is wrong here, and how can I correct it? Thank you.

    UPDATE: I have booted the VM with a GParted Live disk and formatted the volume with an MS-DOS partitioning scheme and a single 20GB NTFS file system. Now when I boot the Win2k8 CD and load my drivers, I get a different error. As you can see at the bottom of screenshot6: "Windows cannot be installed on this hard drive space. Windows must be installed to a partition formatted as NTFS". Clicking format produces the error (0x80004005) on the screen, so I think this is still a driver issue because Windows can see the drive but not interact with it properly. Is that insane thinking?
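
    A hedged recovery sketch (assumptions, not from the original post): from the Win2k8 installer you can open a command prompt with Shift+F10 and re-initialize the virtual disk with diskpart before retrying the partition step. Disk 0 is assumed to be the blank 20GB image; double-check with "list disk" before wiping anything.

      diskpart
      list disk
      select disk 0
      rem wipe the partition table on the selected disk and force an MBR layout
      clean
      convert mbr
      create partition primary
      format fs=ntfs quick
      exit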

    Read the article

  • Windows 7, going crazy with environment variables

    - by roymustang86
    So, I am trying to learn Java. I installed the JDK and proceeded to write a few programs. Each time, I have to give the full path to javac.exe to compile the .java file, so I decided to tweak the %PATH% variable. No matter what I change it to, it doesn't work. When I do an echo %PATH%, I get:

      'Program' is not recognized as an internal or external command, operable program or batch file.

    This is my Path variable's contents:

      C:\app\product\11.1.0\client_1\bin;%CommonProgramFiles%\Microsoft Shared\Windows Live;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;"C:\Program Files (x86)\Common Files\Roxio Shared\DLLShared\";"C:\Program Files\Broadcom\Broadcom 802.11";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\DLLShared\";"C:\Program Files (x86)\Common Files\Roxio Shared\OEM\12.0\DLLShared\";"C:\Program Files (x86)\Roxio\OEM\AudioCore\";"C:\Program Files (x86)\Intel\Services\IPT\"

    How do I work around this? The double quotes were not there before; I added them thinking the space was the problem.
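
    A hedged sketch of a fix (the JDK path is an example and must match the actual install): remove the added quotes from the system PATH, append the JDK's bin directory, and then open a new command prompt so the change is picked up.

      rem Set for the current session only (path is an example)
      set PATH=%PATH%;C:\Program Files\Java\jdk1.6.0_45\bin

      rem Or persist it for the user; note that setx truncates values over 1024 characters
      setx PATH "%PATH%;C:\Program Files\Java\jdk1.6.0_45\bin"

      rem Verify in a new command prompt
      javac -version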

    Read the article

  • Disable RAID in favor of JBOD on an IBM x3400 M2 server

    - by BanKtsu
    Hi, I just want to disable the default RAID on my IBM System x3400 M2 server (7837-24X); I have 3 SAS disk drives. I want to turn them into a JBOD ("Just a Bunch Of Disks"), because I want to install CentOS on drive 0 and use the other two as cache drives for a Squid server. I disabled the RAID in the BIOS:

      System Settings / Adapters and UEFI drivers / LSI Logic Fusion MPT SAS Driver - PciRoot(0x0)/Pci(0x3,0X0)/Pci(0x0,0x0)
      LSI Logic MPT Setup Utility
      RAID Properties / Delete Array

    Later I booted the CentOS live CD and installed the OS on drive 0, with the others mounted like this:

      LVM Volume Groups
        vg_proxyserver  139508
          lv_root   51200   /      ext4
          lv_home   84276   /home  ext4
          lv_swap    4032
      Hard Drives
        sdb (/dev/sdb)  free 140011
        sdc (/dev/sdc)  free 140011
        sdd (/dev/sdd)
          sdd1     500   /boot  ext4
          sdd2  139512   vg_proxyserver physical volume (LVM)

    But when I restart the server it gives me the error:

      Boot failed Hard Disk 0
      UEFI PXE PciRoot(0x0)/Pci(0x1,0X0)/Pci(0x0,0x0)/MAC(001A64B15130,0X0))
      ........PXE-E18: Server response timeout.
      UEFI PXE PciRoot(0x0)/Pci(0x1,0X0)/Pci(0x0,0x0)/MAC(001A64B15132,0X0))
      ........PXE-E18: Server response timeout.

    and the OS does not start. Is the IBM forcing me to use RAID? Why?
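
    A hedged interpretation and sketch (an assumption, not from the original post): the fall-through to PXE suggests the firmware is booting a disk that has no boot loader, since /boot and GRUB ended up on sdd rather than the first disk. Either move that disk to the top of the hard-disk boot order in System Settings, or reinstall GRUB onto the disk the firmware actually boots, for example from the CentOS rescue environment (device names are examples and must match the installed layout):

      # With the installed system mounted at /mnt/sysimage by the rescue shell
      grub-install --root-directory=/mnt/sysimage /dev/sda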

    Read the article

  • Hyper-V VSS writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script - the one that runs correctly but fails to do what it's supposed to:

      # DiskShadow script file to backup VM from a Hyper-V host
      # First, delete any shadow copies of the drives. System drives need to be included.
      Delete Shadows volume C:
      Delete Shadows volume D:
      Delete Shadows volume E:
      # Ensure that shadow copies will persist after DiskShadow has run
      set context persistent
      # make sure the path already exists
      set verbose on
      begin backup
      add volume D: alias VirtualDisk
      add volume C: alias SystemDrive
      # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
      # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
      writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      create
      end backup
      # Backup is exposed as drive X: make sure your drive letter X is not in use
      Expose %VirtualDisk% X:
      Exit

    The next script is just a robocopy, and then an unexpose. When I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct.

    During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot sets" or something similar. Then X: gets exposed and I can copy the .vhd files over.

    The problem is that, for some reason, the VHD files I copy over seem to be old copies; they are missing files, users and updates that exist on the actual machines. I also tried putting the machines into a saved state manually, which didn't change the outcome. I hope someone here has an idea of how to solve this.
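
    A hedged diagnostic sketch (not from the original post): before running the backup, check the state of the Hyper-V VSS writer on the host and the VSS integration service inside the Windows guests; a writer stuck in a failed state can silently produce stale snapshots. The commands below are standard Windows tools, and the guest service name is assumed to be the Hyper-V Volume Shadow Copy Requestor.

      rem On the Hyper-V host: confirm the Hyper-V writer reports State: Stable, no error
      vssadmin list writers

      rem Inside a Windows guest: confirm the VSS integration service is running
      sc query vmicvss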

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it, and a couple of comparable Linksys-G PCI cards for connectivity, but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows:

      Comcast Business Class Commercial 25mbps/10mbps (verified)
      D-Link DGL-4500 Wireless N Router
      Windows 7 x64 - D-Link DWA-552 Wireless-N
      Windows 7 x64 - D-Link DWA-552 Wireless-N
      Mac Mini 10.6.2 - AirPort Extreme N
      Playstation 3, hard wired
      Xbox 360, hard wired

    Essentially the problem is very specific. Web browsing and uploading/downloading files from the Internet are fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900 Kbps throughput at the high end. If I open my network browser or try to browse my homegroup, the response time is horrible. Both of my Windows computers show a strong wireless signal with a connection speed of 300 Mbps. I know I can never expect to achieve anything near those speeds, but 500 Kbps? Here is what I have tried so far:

      Enabled single-mode N-only and N/G-only on the router
      WPA2 with AES encryption
      Disabled "Remote Differential Compression" in Windows 7
      Disabled TCP "Auto-Tuning"
      Used other software for file copies, such as "Teracopy"

    I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option I can handle right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
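
    A hedged measurement sketch (an assumption, not part of the original post): running iperf between two of the machines isolates raw TCP throughput from file-sharing and streaming overhead, which helps show whether the wireless link itself or the protocols on top of it are the bottleneck. The address below is a placeholder.

      # On the Mac mini (or one of the PCs): start an iperf server
      iperf -s

      # On a Windows 7 box: measure throughput to it for 30 seconds
      iperf -c 192.168.0.10 -t 30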

    Read the article

  • Are there any critical reasons why one could not use Ubuntu as a server platform?

    - by Chiggsy
    We were using Lenny (well, Sid, really); we had to do that for development. I upgraded my server to Ubuntu 10.04 for a different project and noticed the packages. Wearing my developer hat, it's a no-brainer: everything we need is there. I'm the admin as well. We might need more than one "box" (running on a VPS for now). I do not want to build things that apt would install for me. It's not hard, but I'm going to need that time. The Debian "box" has a bunch of stuff on it that will have to be integrated properly, but I think we are going live in a distressingly short time (just found out). I am aware of the reflexive answers to this question. What I would like to ask is: are there critical bugs or critical instabilities that would make one shy away from the Ubuntu server path? I could not find any bugs that would stop me, but perhaps there is something?

    Read the article

  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project requires live data to be viewable by customers. The project's main points are:

    1. Devices in the field send data to the server (the devices are also like website users) after login.
    2. A background import process imports the uploaded data into Postgres.
    3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
    4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 cloud machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices between the two servers. But then we still need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?
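
    One hedged building block for the database tier (a sketch assuming PostgreSQL 9.x streaming replication; hosts, user and password are placeholders): a hot-standby replica provides a read-only copy that the background analysis routines can query without loading the primary.

      # primary: postgresql.conf
      wal_level = hot_standby
      max_wal_senders = 3

      # primary: pg_hba.conf - allow the standby to stream WAL
      host  replication  replicator  10.0.0.2/32  md5

      # standby: recovery.conf
      standby_mode = 'on'
      primary_conninfo = 'host=10.0.0.1 user=replicator password=secret'

      # standby: postgresql.conf - accept read-only queries while replaying WAL
      hot_standby = on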

    Read the article

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair II Formula motherboard and a Phenom X6 3.2 GHz CPU. My problem is that the computer will often hang all of a sudden, completely stopping responding. When that occurs, the reset key is inoperative, and the power button turns the computer off but cannot turn it back on; I have to physically disconnect the power cable. The problem can occur at any time: when I'm booting Windows, logging in, listening to a song, browsing the Internet, and so on. It always occurs after a very few minutes of 3D gameplay.

    I thought it was a video card fault: I had three 8800GTX cards, so I could try all combinations of them, but that didn't fix it. I thought it was a RAM problem: I tried running with only a subset of my DDR2 banks, but that didn't fix it either. Almost every time I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live, Cool'n'Quiet or other options in the CPU configuration menu, I can be sure the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during the POST the computer always takes an unusually long time detecting USB devices (the LCD POSTer shows USB INIT); I've also tried disconnecting all USB devices, but it didn't take any less time to POST. The BIOS revision is 2702, the latest.

    Today I saw a different behaviour once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking, but I have never overclocked my CPU.

    Judging from the description of my problem, hoping someone has had the same issue and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a faulty CPU or a faulty motherboard, and whether I can perform additional tests (software tests, given my lack of spare components) to identify the component to replace.

    Read the article

  • No LAN and SMB access, and Explorer not responsive, when using a second connection

    - by Lorenzo
    I apologize if this is a duplicate question. I know there are several questions about multiple connections (LAN + LAN and LAN + dial-up), but I haven't been able to find one that fits my scenario. I'm still using Windows XP on my corporate laptop, and I'm connected to the corporate LAN via Ethernet. The LAN NIC has a public IP address, although not accessible externally, obtained via the corporate DNS server. This connection is firewalled and requires a proxy to access the Internet. To reach Internet sites blocked by the corporate firewall, I use my smartphone via USB tethering. It is seen as a new LAN interface, and I get a private IP address (192.168.x.x). There are two problems:

    1. The LAN is not accessible, as the default gateway goes to the tethering NIC. I'd like to solve this, but I can live with it.
    2. My PC becomes unresponsive if I use Windows Explorer to view local files, or even when I open the Start menu. I guess this is caused by attempts to connect to a mapped network drive. But I disabled "Client for Microsoft Networks" on the tethering NIC, so why does the system still hang?

    Of course, if I disable the Ethernet NIC, Explorer stops hanging. If you need further details, add a comment. Thanks!
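
    A hedged sketch for the first problem (the subnet and gateway addresses are placeholders and must be taken from ipconfig / route print): keep the default route on the tethered phone, and add a persistent static route that sends corporate traffic through the Ethernet NIC's gateway. The mapped-drive disconnect at the end is an assumption about the second problem.

      rem Inspect current addressing and routes
      ipconfig /all
      route print

      rem Example: route the corporate network 10.0.0.0/8 via the Ethernet gateway
      route -p add 10.0.0.0 mask 255.0.0.0 10.1.1.1 metric 1

      rem If Explorer hangs on a now-unreachable mapped drive, disconnect it
      net use Z: /delete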

    Read the article

  • Firefox window disappears

    - by Lord Torgamus
    Now this is odd. At some point in the last half hour, my Firefox window disappeared. I didn't notice, as I was working in another program at the time. No Firefox icon shows up with Alt-Tab, and no Firefox listing shows up under the Applications tab in the task manager. There is a Firefox entry under the Processes tab. Normally, I probably wouldn't have noticed, just opened Firefox up again, but I'm listening to an Internet radio station and the stream never stopped. When I did open a new Firefox window, it showed up in the Task Manager's Applications tab.

    I'm running Windows XP, and my Firefox has the add-ons Adblock Plus, BetterPrivacy, Cert Viewer Plus, DOM Inspector, Firebug, Greasemonkey, Java Quick Starter, Live HTTP headers, Microsoft .NET Framework Assistant, NoScript, WebDeveloper and XPather. The radio station is Slacker; it's never given me any trouble before, and I've been using it for months. I don't think there was anything unusual in my open tabs; just a few static pages at non-sketchy sites like Java APIs, plus GMail and the aforementioned Slacker.

    Googling brought up a handful of similar-but-not-quite-the-same errors, none of which had useful resolutions. Does anyone know how to bring that window back and/or prevent this from happening again?

    Read the article

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine which is now acting as the master. I used the following procedure to create it:

      mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running. The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (in a different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article, where it says:

      The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database.

    Thank you.
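
    A hedged alternative sketch (assuming the tables are InnoDB): adding --single-transaction makes mysqldump take the entire dump from one consistent snapshot across all databases, which addresses the inconsistency the quoted article describes, while --master-data=2 still records the binary log coordinates for CHANGE MASTER TO.

      mysqldump --opt -Q --single-transaction --master-data=2 --all-databases > dump.sql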

    Read the article

  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a web server publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie'-style authentication, asking for user and password in an HTML form (thus not using HTTP authentication). Being on the same hostname, the Safari browser didn't manage stored passwords too well: if I logged in to one app with user foo and then went to app two, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient for Safari to autocomplete the correct password in its form field. Annoying, but I could live with it; usernames are short and easy to remember compared to the passwords we use.

    After the update to Safari 5 this seems to no longer be true: if I store in Safari (actually the user keychain on OS X) credentials for https://www.foobarbaz.com/app1 and credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete both based on the URL. Even editing the keychain to add the path (it stores only the hostname by default) does not help. Is there anything I can do to make it work the way I want while still keeping everything on one hostname? Modifying things server-side is of course possible, but I can't switch the apps to HTTP authentication (and not every one would support it anyway) to use different 'realms'.

    Read the article

  • Ubuntu 9.10 installer doesn't recognize the hard drive

    - by dan
    I downloaded Ubuntu 9.10 x86_64 and am trying to install it on a fairly modern system with a Gigabyte GA-MA770-UD3 motherboard. Ubuntu 9.04 installed fine and still will when I stick that disc in, but 9.10 doesn't see my hard drive (a Western Digital 250GB). If I boot from the disc, I can install GParted and it does recognize the drive, but when I try to start the install process from the live disc, Ubuntu again doesn't recognize the hard drive. I checked /var/log/messages and see this:

      Nov 12 17:28:08 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
      Nov 12 17:28:08 ubuntu activate-dmraid: Enabling dmraid support
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.
      Nov 12 17:28:08 ubuntu activate-dmraid: no raid sets and with names: "nvidia_ciiajheb-0"
      Nov 12 17:28:08 ubuntu activate-dmraid: ERROR: either the required RAID set not found or more options required.

    I checked my BIOS: SATA is enabled and is set to IDE mode, so there shouldn't be software RAID. Nonetheless, I added nodmraid to the boot line and tried again. It still doesn't recognize the drive. I checked /var/log/messages again and now see this:

      Nov 12 17:49:38 ubuntu activate-dmraid: Serial ATA RAID disk(s) detected. If this was bad, boot with 'nodmraid'.
      Nov 12 17:49:38 ubuntu activate-dmraid: Enabling dmraid support
      Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option
      Nov 12 17:49:38 ubuntu activate-dmraid: WARNING: dmraid disabled by boot option

    Any ideas on things to try? I've tried all of the various BIOS settings for SATA (IDE, RAID, etc.). Nothing seems to work.
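
    A hedged sketch (an assumption, not from the original post): the "nvidia_ciiajheb-0" name suggests leftover fakeRAID metadata on the disk from a previous configuration, which dmraid keeps claiming even in IDE mode. The metadata can be inspected and erased from the live session; double-check the device name first, since this wipes the RAID signature from the drive.

      # Show any RAID metadata dmraid sees on the disks
      sudo dmraid -r

      # Erase the stale metadata (device name is an example)
      sudo dmraid -rE /dev/sda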

    Read the article

  • Why does my Belkin Play router regularly stop working for a few minutes at a time?

    - by YeahStu
    I recently purchased a Belkin Play dual-band router for my home. Every few hours or so, the router stops working or "cuts out" for several minutes before coming back online automatically. My old router did this as well. I figured out that one main problem was my cordless home phone, which uses a 2.4 GHz signal: any time someone called, my router would get interrupted. So I unplugged that phone and got a wired phone. Unfortunately, the wired phone had the same problem, so I unplugged it too. Unfortunately, my router still has regular issues.

    I live in a home with neighbors in close proximity, so it might be that their devices are the ones causing me problems. How can I determine what is causing my router problems? I thought that the point of a dual-band router was that if a signal was interfering on one band, the other band would be uninterrupted, but it seems that is not the case. Does anyone have any tips on how to troubleshoot this, or any knowledge to help me set my expectations properly?
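
    A hedged way to check for neighbouring interference (assuming a Windows 7 machine is available on the network): list the nearby networks with their channels and signal strength, then pin the router's 2.4 GHz radio to the least crowded channel (1, 6 or 11) in its settings rather than leaving it on auto.

      rem Windows 7: show nearby SSIDs, channels and signal strength
      netsh wlan show networks mode=bssid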

    Read the article

  • KVM Guest Reboot Loop

    - by javano
    I have been pulled into a situation where a KVM server (CentOS 6.2) lost power, and upon reboot one of the guests hasn't started up again (XP SP3). I have SSH'ed in; someone must have changed something relating to the hypervisor prior to the power loss, but not rebooted all the guests. This particular guest wouldn't start because it was configured to use /usr/bin/qemu-system-x86_64, which isn't there now (assuming it was before?). I changed it to use /usr/libexec/qemu-kvm, as this is what all the other guests on this server seem to be using, and it boots.

    Using virt-manager on my local machine I can connect to the display of the XP machine, and it gets as far as this screen: http://support.gateway.com/emachines/issues/2-1131285152-01.gif

    The problem I face now is that whichever option I choose, the machine just reboots, so it's an endless loop. I thought that perhaps a file system error may be present due to the unclean shutdown. There is an XP SP3 ISO mounted in the guest, which I booted from in an attempt to access the recovery tools, but I don't have the Administrator password! I am out of ideas, and it's turning out to be quite the conundrum. Should I use a third-party live CD to test the file system for errors? How else can I troubleshoot these restarts?
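
    A hedged sketch along the live-CD line (an assumption, not a confirmed fix): boot the guest from a Linux live ISO and flag the NTFS volume as dirty so Windows runs chkdsk on the next boot, which avoids needing the recovery console password. The guest name and device name below are examples.

      # On the host: point the guest's CD at a live ISO and adjust the boot order
      virsh edit winxp-guest

      # Inside the live CD session in the guest: schedule an NTFS consistency check
      sudo ntfsfix /dev/sda1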

    Read the article

  • ssh-add insists on passphrase

    - by Sam Walton
    I have a new SSH key problem. I have successfully used keys for years with Heroku, Git and other servers, so I can log in without having to enter a passphrase. A few weeks ago I was unable to push a git repository on my machine to Heroku, and it responded with "Permission denied (publickey)". Hmm. Everything else but this Heroku function still works. So I ran:

      ssh-keygen -t rsa -C "newHeroku"

    with no passphrase (I hit return so it would be empty). Then I entered:

      sudo chmod 600 ~/.ssh/newHeroku*

    Then:

      ssh-add ~/.ssh/newHeroku.pub

    Hitting return for the passphrase it asked for, it exits without error. The next step is:

      ssh-add /Users/sam/.ssh/newHeroku.pub

    To verify that the key is "live" I enter:

      ssh-add -l

    The output is still "The agent has no identities." Okay, to eliminate variables, I repeated the key-generation process, this time entering a passphrase for the new key. I ssh-add the new key and get the "Enter passphrase" prompt as expected. Now, this is why I'm posting here and not on a Heroku forum: ssh-add fails because the passphrase I used keeps getting rejected. It appears, even though I have no problem with my keys elsewhere, that something is wrong with the passphrase, because the key that expects a passphrase produces errors even though the passphrase-less one produced none.

    One question: should I expect the passphrase request from ssh-add when I have not set a passphrase? It's been suggested that this is a clue, so I offer it. Or maybe I have a poor understanding of what ssh-add is doing. It wouldn't be the first time I asked a stupid question. Also, I'm on Lion and have installed no system updates in the few weeks since this started, only application updates.
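
    A hedged observation and sketch (an assumption drawn only from the commands quoted above): ssh-add expects the private key file, not the .pub file, so pointing it at the private key may be all that is needed.

      # Add the private key (no .pub extension) to the agent
      ssh-add ~/.ssh/newHeroku

      # Confirm the agent now lists an identity
      ssh-add -l

    If Heroku should use this specific key, an ~/.ssh/config entry along these lines (hostname is an assumption) can also help:

      Host heroku.com
        IdentityFile ~/.ssh/newHeroku
        IdentitiesOnly yes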

    Read the article
