Search Results



  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (Kernel 3.0.0-12-server x86_64):

        => ctrl all show config
        Smart Array P410i in Slot 0 (Embedded)    (sn: removed)
           array A (SAS, Unused Space: 1335535 MB)
              logicaldrive 1 (279.4 GB, RAID 1, OK)
              physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
              physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300 GB disks that got replaced by 1 TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning:

        => ctrl slot=0 ld 1 modify size=max
        Warning: Extension may not be supported on certain operating systems.
                 Performing extension on these operating systems can cause data
                 to become inaccessible. See ACU documentation for details.
                 Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a GParted live disk. Here's the config of the RAID controller in use:

        => ctrl all show detail
        Smart Array P410i in Slot 0 (Embedded)
           Bus Interface: PCI
           Slot: 0
           Serial Number: removed
           RAID 6 (ADG) Status: Disabled
           Controller Status: OK
           Hardware Revision: Rev C
           Firmware Version: 5.12
           Rebuild Priority: Medium
           Expand Priority: Medium
           Surface Scan Delay: 15 secs
           Surface Scan Mode: Idle
           Wait for Cache Room: Disabled
           Surface Analysis Inconsistency Notification: Disabled
           Post Prompt Timeout: 0 secs
           Cache Board Present: False
           Drive Write Cache: Disabled
           SATA NCQ Supported: True

    And the partition table:

        Number  Start   End    Size    Type      File system      Flags
         1      1049kB  274GB  274GB   primary   ext4             boot
         2      274GB   300GB  25.8GB  extended
         5      274GB   300GB  25.8GB  logical   linux-swap(v1)
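
    Answering the "is it safe" part really requires a current backup either way, but the usual sequence on this controller is: extend the logical drive, let the kernel see the new size, then grow the partition and filesystem offline. A minimal sketch of those steps, assuming the array appears as /dev/cciss/c0d0 (the device name and partition numbers are illustrative, not taken from the question):

        # extend logical drive 1 into all unused space; many hpacucli builds
        # accept "forced" to suppress the interactive warning
        hpacucli ctrl slot=0 ld 1 modify size=max forced

        # then, from a GParted live disk: move/recreate the swap and extended
        # partitions at the end of the disk, grow partition 1, and finally:
        e2fsck -f /dev/cciss/c0d0p1
        resize2fs /dev/cciss/c0d0p1

    resize2fs can only grow into space the partition already covers, so the GParted move/resize step has to come first.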

  • Very poor SCSI hd performance on IBM x336 with LSI 1030 RAID1

    - by David Tschoepe
    I'm experiencing very poor performance on an IBM x336 server with dual 73 GB 15k hard drives on a U320 controller (LSI 1030). We're getting maybe 3.5 MB/sec max, per the HD Tune utility. It should be over 100 MB/sec at least, I would think (another x335 box is running 70-80 MB/sec). The server was recently set up and I didn't really notice the problem at first, so it may have been there from the beginning; I'm not sure.

    I have installed the IBM ServeRAID Windows utility. The server is running Windows 2008 R2 Web edition (if that matters). I thought maybe one of the drives was bad, so far I have removed one of the drives from the array and tested again, but still got the same results. I'm waiting for the RAID 1 to resync and will try pulling the other drive next. I've also looked through the ServeRAID utility but haven't noticed anything in there that might indicate a problem.

    Not sure if I'm on the right path here, so I'm looking for some advice to track this down.

  • How can I make the XAnalogTV xscreensaver fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen.

    For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true.

    Is there any way to force XAnalogTV to fill my entire screen? Alternatively, as I believe the problem is aspect-ratio related, is there any way I can get the screensaver to render larger than my display, and simply discard the extra pixels?

    Relevant screenshots (windowed, maximized, and full-screen, respectively): you can see in the last two that the scrollbar from Firefox is clearly visible, even though this is a full-screen screensaver.

  • Microsoft Arc Mouse OS X

    - by meepz
    I recently bought a new MacBook Pro with Mountain Lion 10.8 on it. The only portable mouse I have is my Microsoft Arc Mouse. I wanted to use it with the laptop, so I installed IntelliPoint 8.2 for Mac from Microsoft's website. According to their website, this driver is for OS X 10.4-10.7. I thought that wouldn't be too much of an issue, but unfortunately for me, the driver installs fine and the mouse is detected, yet I get no movement, and when I click the buttons nothing happens.

    I took the mouse with me on a business trip to the EU, and before I left I checked that the mouse worked with my desktop, which is running Windows 7; it worked without any hiccups. I'm not too sure where the OS differs from 10.7 to 10.8. I found an article online, but it doesn't pertain to my mouse, although it could be of assistance. I have tried my version of its adjustment, but I am not too knowledgeable on low-level hardware/software modifications, so I may have done it wrong. Here's the link: http://refluxions.wordpress.com/2008/08/18/mac-os-x-mouse-madnessfixed/

    I get the following details when I check mouse info in the IntelliPoint preferences pane:

        The following Microsoft mouse devices are currently connected to your
        Macintosh driven by the Intellipoint software.

        Arc Mouse
        Vendor name: Microsoft
        Product name: Microsoft AE 2.4GHz Transceiver 5.0
        Vendor ID: 045E
        Product ID: 074F
        Device version: 0140

    If anyone has any suggestions on how to fix this, it would be greatly appreciated! I love the mouse, and I'm here in the EU for another two weeks. Thanks

  • Bad sectors, S.M.A.R.T., SpinRite, firmware on platter and drive ID questions

    - by Christopher Galpin
    Is it possible for S.M.A.R.T. to give false readings (say I was fiddling with lots of recovery programs, transfers, and so on and so forth), or is it absolutely a read-only, direct correlation to the physical status of a drive?

    Does SpinRite level 5 "recover bad sectors" operate on those marked at the factory? Are they on the same level as your generic bad sector, with SpinRite thus having full access? (Also, I'm curious whether S.M.A.R.T.'s bad sector count is zeroed afterward, or whether it includes factory-marked sectors.)

    The main firmware of some drives, like a WD Passport, is stored on the platter. How is it protected? Is it through marking those sectors as bad? If so, I'm wondering if SpinRite's sector recovery could bring about firmware corruption on these drives.

    Is the failure of a drive to report valid identity information (hdparm -I /dev/xx) consistent with corrupted firmware, or just general disk failure? I may be misunderstanding the role of firmware here. I feel I've read that a drive's identity information is on the platter, just like the partition tables and so on. Is this true?

    (Apologies if this is more appropriate for SuperUser.)
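
    As an aside on the first question: the attribute table can be read directly and compared over time, which makes it easy to see whether recovery tools ever change it (they shouldn't, since the counters are maintained by the drive itself). A quick sketch, assuming Linux with smartmontools installed and a drive at /dev/sda:

        # raw attribute values; Reallocated_Sector_Ct and Current_Pending_Sector
        # count grown (G-list) defects only - factory (P-list) defects never appear
        smartctl -A /dev/sda

        # identity block; unreadable or garbage output here is one classic
        # symptom of corrupted service-area (on-platter) firmware
        hdparm -I /dev/sda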

  • Devices on one switch can't see devices on another switch

    - by jockey69
    I have RoadRunner Internet cable service hooked to a Motorola Surfboard modem. This is connected to a 10/100 wireless router (located in the garage). Downstairs, I have a ZyXEL GS-108b gigabit switch connected to one port on the router. From this switch I ran connections to a PS3, a DVR, a Vonage box and a wireless router (Buffalo AirStation 10/100). The Buffalo AirStation works as a wireless AP for the laptops, iPads and cell phones. Upstairs, I have an Asus gigabit switch connected to a gaming desktop, a printer, and a media server running on FreeNAS (PS3 Media Server on FreeNAS). The router is configured to assign static IPs to both the PS3 and the media server.

    Problem: I connect a laptop to the switch downstairs after disabling its wireless, thus making sure that I am accessing the internet through the wired connection (and the router in the garage). All my computers, iPads and cell phones are able to connect to the internet without a problem. My PS3 connects to the internet with a wired connection but is unable to access the media server (I get a message that no media server is found). I used a wired laptop downstairs (connected to the downstairs switch) but am unable to ping either the PS3 or the media server!

    I may be doing something silly but am at my wits' end. Please help!
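
    A first sanity check worth sketching out: confirm that the wired laptop, the PS3 and the media server actually share one subnet, because a second wireless router left in router mode (rather than AP/bridge mode) NATs its clients into a different network, which breaks exactly this kind of LAN discovery. The 192.168.1.50 address below is illustrative; substitute the static IP your router actually assigns to the media server:

        ip addr                  # on a Linux laptop (ipconfig on Windows):
                                 # compare address, netmask and default gateway
        ping 192.168.1.50        # the media server's static IP
        traceroute 192.168.1.50  # an intermediate hop before the server means
                                 # something is routing, not bridging (tracert on Windows)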

  • Widespread misinterpretation of DNS rules in resolving wildcards

    - by Dominic Sayers
    [EDITED to add: This problem has gone away on its own. I believe Cloudflare's name resolution may have been to blame. See my own answer below.]

    Here is a snippet of my zone file:

        *.example.com.   300 IN CNAME proxy.herokuapp.com.
        foo.example.com. 300 IN A     111.111.111.111

    If I dig @8.8.8.8 foo.example.com I get the answer I expect:

        ;; ANSWER SECTION:
        foo.example.com. 30 IN A 111.111.111.111

    The same is true of all the other public DNS servers I've tried. However, when I try to set up a check with Pingdom against a URL on foo.example.com, it instead sends the traffic to my Heroku app referenced by the *.example.com RR. The same is true of checks set up on New Relic and Errplane, and of traffic generated by the Heroku app itself.

    So on the one side, all public DNS servers interpret the zone file one way. Yet four service providers all interpret it a different way, one that differs from the standard suggested by RFC 4592. My question is: are these reputable, mature service providers all wrong? Or is it little me?
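
    For anyone reproducing this: RFC 4592 says a wildcard only synthesizes answers for names that do not otherwise exist, and foo.example.com does exist, so the explicit A record must win. Comparing answers from several resolvers makes any disagreement easy to demonstrate (a sketch; only 8.8.8.8 is from the question, the authoritative server name is a placeholder):

        dig @8.8.8.8 foo.example.com A +short       # public resolver
        dig foo.example.com A +short                # this host's default resolver
        dig @ns1.your-dns-host.example foo.example.com A +short   # straight from the authoritative server

    If the authoritative answer is correct but a provider still hits the wildcard, that provider's resolver (or an intermediate cache, such as Cloudflare's) is the thing to blame.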

  • Is there any special way to force GoBack to work with Windows Vista and 7?

    - by dfree
    Norton/Roxio's GoBack doesn't work with Vista/7, for reasons unknown. I have tried several alternatives (Norton Ghost, RollbackRX, Norton Save and Restore), none of which offer the same functionality as GoBack.

    Not only does GoBack avoid eating up all your hard drive space while creating a legitimate fail-safe for any PC problem, it also allows you to see, actively and exactly, what processes are being executed on your computer. This feature (called Advanced Disk Drive Restore) also allows you to troubleshoot problems and determine causes for things in about half a second by seeing what is happening on your machine. It's how I learned everything I know about computers.

    GoBack also features something called SafeTry Mode, where you can put it in SafeTry and then mess up the whole computer, and when you come out of it, your computer will be exactly how it was before. Amazing for people who like to tinker without risking their machine's stability. It also helps with that accidentally erased paper or whatever else you may have erased. I believe GoBack installs a type 44 partition around the drive, which loads prior to Windows to allow this functionality.

    If you're going to recommend another program, please don't (unless it does all of the above). I've tried all the competition and nothing is as good. I just want my GoBack to work with 7 :) Any ideas of crazy ways to make this work?

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 x64 Enterprise
    SQL: SQL Server 2008 R2 Enterprise x64

    I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct, and the file system is NTFS. The error message I get is:

        Msg 1823, Level 16, State 2, Line 1
        A database snapshot cannot be created because it failed to start.
        Msg 5119, Level 16, State 1, Line 1
        Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file.
        Make sure the file system supports sparse files.

    The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable:

    1. Add the SQL Server service (domain) account to the local Administrators group and restart the SQL service.
    2. Grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\

    I have tried changing the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER, to no avail. I have no issue creating a new database. Why can I not create a snapshot by granting permission only on the DATA folder?

    Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly against the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2, but I cannot find anything that would suggest it shouldn't work in a 2000 domain. I removed the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to reinstall the server without joining the domain and without the hotfix, and see if the issue persists.

  • Monitoring / metric collection for system collectives that change a lot in time (a.k.a. cloud)

    - by Florin Andrei
    When your server fleet doesn't change much over time, as with bare-metal hosting, classic monitoring and metric-collection solutions (Nagios, Munin) work well. But if the number of systems varies a lot in time, and may in fact vary rapidly, classic software is more difficult to set up and use. E.g., trying to make Nagios (monitoring) keep up with a rapidly evolving cloud infrastructure can be cumbersome; the same goes for Munin (metric collection). It's not just the configuration: the way the information is conveyed to the user, or displayed, is inadequate for the cloud.

    What are some possible alternatives that work well with the cloud? The goals are to collect and display metrics (analogous to Munin), generate alerts when certain metrics go out of bounds or when certain services are unavailable (analogous to Nagios), and do everything in a cloud-friendly manner.

    Some cloud providers offer monitoring / metric collection as a service, but not all of them do, and if you use more than one provider you don't want to become too dependent on just one vendor. So provider-independent solutions are required.

    EDIT: I am asking this question in a general fashion - not limited to any given cloud infrastructure (like OpenStack), but for the general case of using arbitrary cloud providers.

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after I played with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple of weeks ago, I was on a mission to get the site's root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and was also messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of what I did. Many of my changes stemmed from suggestions in this StackOverflow post.

    What I've tried:

    - I commented out my additions to the hosts file
    - I turned off XAMPP (thus hopefully negating any effect of the Apache config files)
    - I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs)

    localhost still displays examplePage - even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved - thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, opened XAMPP, and restarted Apache. All references to examplePage in my .conf files that I could find had been commented out or removed; I imagine the old versions were still in effect for some reason, and manually ending the Apache processes fixed this. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce that situation. I must've stumbled upon an XAMPP bug of some sort.

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive ON
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very, very much appreciated!
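
    One thing worth measuring before retuning: with PHP_FCGI_CHILDREN=8 and roughly 70 MB per child, the single PHP class already accounts for around 560 MB, and each idle-timeout line usually means every child was busy (or dead) for the full 20 seconds. A quick sketch for watching the pool's real footprint on Linux:

        # per-process resident size of the php-cgi children
        ps -C php-cgi -o pid=,rss=,comm=

        # rough total in MB (rss is reported in KB)
        ps -C php-cgi -o rss= | awk '{ total += $1 } END { printf "total: %.0f MB\n", total / 1024 }'

    If the total approaches the machine's free RAM, the box will swap, and everything (including Apache's worker threads) stalls in a way that looks exactly like hitting MaxClients.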

  • How to send T.38 from a Mac?

    - by Brian Postow
    I'm trying to set up a fax server on a Macintosh. I have HylaFAX, and we're going to use an internet FoIP fax provider (haven't decided who yet; that may be another question). The problem is how to get from HylaFAX to T.38. I know of two options, but I'm not sure how to decide between them:

    T38modem
    - Advantages: It's only one extra program, and I know that I can compile it for the Mac (well, at least I can get the H.323 version working on a Mac).
    - Disadvantages: It is mostly undocumented and seems to be supported only by one guy in Russia.

    IAXmodem/Asterisk
    - Advantages: It's well known and well supported; we can pay for support. It presumably does T.38 over SIP correctly, so we don't have to worry about it.
    - Disadvantages: It's two separate programs. While I know how to get Asterisk on a Mac, I'm not sure about IAXmodem. (It's SourceForge, and Linux, but compiling things for a Mac isn't always straightforward...) It's also mostly undocumented.

    Do these seem like an accurate listing of the pros/cons? Anyone have any other suggestions? Thanks.

  • How do I connect a 2008 server to a 2003 server active directory?

    - by Matt
    Our DC is running Windows Server 2003. I've just set up Windows Server 2008 and have Terminal Server running on it. When setting the Terminal Server permissions, it was able to allow a group name that was read from the domain. In the DC, the new terminal server shows up as a computer in the domain. I can also log in as a domain user even though that user doesn't exist locally on the new server.

    However, when I go to set sharing permissions on the new machine, it doesn't show my domain as a location. Instead it is only looking at location "machinename" and not allowing the domain to be seen or added. Is there something I'm missing?

    OK, lots of errors in the event log. We have this:

        The winlogon notification subscriber is taking long time to handle the notification event (Logon).

    Followed by this:

        The winlogon notification subscriber took 121 second(s) to handle the notification event (Logon).

    Followed by:

        The processing of Group Policy failed because of lack of network connectivity to a domain controller. This may be a transient condition. A success message would be generated once the machine gets connected to the domain controller and Group Policy has succesfully processed. If you do not see a success message for several hours, then contact your administrator.

    I think this might be the same problem described here: http://serverfault.com/questions/24420/primary-domain-controller-slow

    Solved. The issue was that I had changed from DHCP to static and put in the wrong DNS server IP - i.e., the firewall's address instead of the DC/DNS server's.

  • VMware Fusion won't boot my Boot Camp partition

    - by Sean
    I have a Boot Camp partition on my MacBook that I would ultimately like to convert to a VMware virtual machine image. I've installed VMware Fusion and tried to start up my Boot Camp install using the Boot Camp button on the initial welcome screen. It brings up the "VMware Fusion is preparing your Boot Camp partition to run as a virtual machine" dialog, but afterward it shows an error dialog with the following message:

        Boot Camp partition preprocessing failed. You may not be able to boot
        your Boot Camp partition as a virtual machine.

    It then tries to boot the new VM, but it blue-screens during the boot process. The info on the blue screen doesn't provide much in the way of help, though, and running chkdsk has no effect.

    After searching around, some people recommended using VMware's stand-alone converter utility from within Windows to create an image, but the utility said it couldn't create an image because my disk uses a GUID Partition Table (GPT). I'm wondering if this is also why Fusion can't boot my Boot Camp partition. Has anyone else run into this and found a fix?
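
    A quick way to see exactly what layout the tools are objecting to (a sketch using macOS built-ins; disk0 is assumed to be the internal drive):

        diskutil list                  # partition scheme and volumes per disk
        sudo gpt -r show /dev/disk0    # read-only dump of the GPT entries
        sudo fdisk /dev/disk0          # the MBR side of the hybrid layout
                                       # that Boot Camp normally sets up

    Boot Camp usually leaves the disk as a hybrid GPT/MBR, so a converter that flatly refuses "GPT disks" may only be reading the GPT side; that's worth confirming before repartitioning anything.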

  • Software Diagnostics Tool recommendations for Debugging a Windows 8 freeze

    - by Stuart
    I've had my HP Pavilion dv6 laptop since last November, and it has had 8 GB RAM and a 256 GB Crucial M4 SSD installed from the start. I use it for software development, and it's had a Windows 8 RTM installation since early September.

    Yesterday I had to give a presentation at a customer site, so I used PowerPoint for the first time since installing Win8. Since that point my machine has 'frozen' every 2 hours or so after startup. There doesn't seem to be any easily visible reason for the freeze - the system just locks up, even if I have left it idle with just the desktop showing. My immediate suspicion is that the SSD is the most likely cause of the problem.

    I've looked at some of the questions on here - e.g. How do I troubleshoot hardware issues related to a computer freeze/crash? - but don't really want to start taking my laptop apart. Another suspicion is that this might be related to the WiFi adapter (Broadcom 802.11n), since I have noticed that this doesn't seem to play perfectly with things like Hyper-V in Win8.

    Can anyone recommend any software diagnostic tools that I can run in order to evaluate the health of the SSD or of other parts of the system?

    Thanks
    Stuart

    P.S. I doubt PowerPoint is the cause of this, but I may use it as an excuse never to use it again... More realistically, perhaps something got damaged during travel to the customer site?

  • Talk on multiple IRC channels at once?

    - by TwoPixelGrid
    I seem to remember, back in '91 or so, that the console-based ircII implementation on the Solaris box that first got me on the net would let me /join multiple channels on a given network such that, as new channels were joined, they would all start scrolling in the single console view. Let's call it the 'interleaved conversation' chat paradigm.

    Am I remembering this correctly? More importantly, is there a modern way of doing this in any of the GUI-based clients? I'm surprised this isn't a common desire/feature, because I think it would greatly improve the experience, especially on channels with high SNR. For example, if I'm working on a project I may connect to Freenode and join #Qt, #OpenGL and #C++. As it is now, with mIRC or XChat, I have to manually flip between pages just to see what's being said and to reply.

    What I envision would go more like this (using only 2 channels for simplicity):

        /join #QT #OpenGL
        < [QT] QtChannelUser: Hello TwoPixelGrid.
        < [OpenGL] OpenGLChannelUser: Hi there TwoPixelGrid.
        @QT: Hi QtChannelUser
        @OpenGL: Hello again OpenGLChannelUser

    And this message is going out to all my channels. Do I have to write a new client, or is this already out there?
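
    For what it's worth, a couple of modern console clients can reproduce the interleaved view today (a hedged sketch from memory - check your client's /help, since flags vary by version):

        # irssi: join the second channel into the *current* window
        # instead of opening a new one
        /join #Qt
        /join -window #OpenGL

        # WeeChat: merge two buffers so both channels display interleaved
        /buffer merge 2

    Once more than one channel shares the view, replies have to be addressed explicitly (e.g. /msg #OpenGL hello), which is also roughly how the old ircII behaviour worked.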

  • Photoshop CS6 Corrupted File recovery

    - by Ben Franchuk
    Last night I was working on a client application mock-up in Photoshop, but was going to take a break from my work, so I saved the .PSD file on my internal HDD and put my computer into stand-by mode once the file had finished saving. Unfortunately my computer crashed while it was entering stand-by and shut itself down (Photoshop was still open). I did not boot it again to make sure all my files were OK, because they had already been saved, but today once I opened up the file again it was extremely corrupted and also completely un-editable (screenshot below).

    So what I'm asking is: is there any way to recover my work, or at least some of it? I have put in a good few days' work on this project and would hate to have to restart it. The size of the file is 3070 KB, even though it reads as 712 KB in Photoshop. I don't know if these file sizes are larger or smaller than the original non-corrupted file's size, but considering all the layers in the file I suspect it was larger before it corrupted.

    I'm using Windows XP Professional 32-bit SP3. Both my OS and said .PSD file are located on the same internal HDD (74.4 GB). I do have an external HDD (1.5 TB), but I primarily only use it for movies, music and TV shows. I don't know if it was plugged in at the time I last edited the document, though, if it means anything. I have tried many image and PSD recovery programs, but none have returned any results that might help recover my work.

    Edit: I tried using a photo recovery program (Odboso Photorecovery) that actually seems to recover the corrupted file in question, judging by the size of the file, but I cannot recover it because of the licence fee. Knowing that the file is still likely on my HDD, where might it be located?
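
    Since a signature-based scanner apparently still finds an older copy on the disk, a free equivalent may be worth trying before paying for anything: PhotoRec (bundled with TestDisk) carves lost files by signature and understands the PSD format. A sketch (the recovery directory is illustrative - it must live on a *different* drive, e.g. the external HDD, to avoid overwriting the very sectors being recovered):

        photorec /d E:\recovered\
        # then, in the menus: pick the 74.4 GB disk, choose the partition,
        # and under [File Opt] deselect everything except the psd family

    The earlier you stop writing to that drive (including installing software onto it), the better the odds that the old sectors haven't been reused.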

  • Volume licenced copy of MS Office 2007 shows "Non Commercial Use" in title bar

    - by Linker3000
    I have just removed the demo copy of Office 2007 preinstalled on a new laptop and replaced it with an install of the full Professional edition downloaded from the MS Volume Licensing site, and installed one of our volume licence keys. Yet the apps (Word etc.) show "Non Commercial Use" in the title bar, which is what usually happens in the Home and Student edition. I have tried:

    - Deleting the Office registration keys in the registry and using one of our other Office 2007 volume licence keys (we have 7) when prompted to re-register
    - Uninstalling Office completely and reinstalling it from a newly-downloaded ISO burned to CD, and also from a compressed file that installs from hard disk/USB stick (both from Microsoft - no dodgy stuff)

    Yet the non-commercial message persists. Although it's a cosmetic issue, the laptop is going to be used for customer presentations, and so the salesperson is rightly concerned about the image this portrays. I presume there may be something floating around in the registry or in a file somewhere, but I can't find it. Articles I have found elsewhere just refer to the message being related to the use of a Home and Student licence key, which is 100% not the case here. Any thoughts? Thanks.

  • Apache: rewrite port 80 and 443 - multiple SSL vhosts setup

    - by Benjamin Jung
    SETUP:

    - multiple SSL domains are configured on a single IP, by using vhosts with different port numbers (on which Apache listens)
    - Apache 2.2.8 on Windows 2003 (no comments on this, please)
    - too many Windows XP users, so SNI isn't an option yet

    There may be reasons why it's wrong to use this approach, but it works for now. vhosts setup:

        # secure domain 1
        <VirtualHost IP:443>
            SSL stuff specifying certificate etc.
            ServerName domain1.org
        </VirtualHost>

        # secure domain 2
        <VirtualHost IP:81>
            SSL stuff for domain2.org
            ServerName domain2.org
        </VirtualHost>

    GOAL: Some folders inside the domain2.org docroot need to be secure. I used a .htaccess file to rewrite the URL to https on port 81:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^81$
        RewriteRule (.*) https://%{HTTP_HOST}:81%{REQUEST_URI} [R]

    Suppose I put the .htaccess in the folder 'secfolder'. When accessing http://domain2.org/secfolder this gets successfully rewritten to https://domain2.org:81/secfolder.

    ISSUE: When accessing https://domain2.org/secfolder (without port 81), the certificate from the first vhost (domain1.org) is used and the browser complains that the site is insecure because the certificate is not valid for domain2.org. I thought that RewriteCond %{SERVER_PORT} !^81$ would also rewrite https://domain2.org to https://domain2.org:81, but it doesn't. It seems that the .htaccess file is not being used at all in this case.

    At this point I am not sure how to apply a RewriteRule to https://domain2.org. I tried creating an additional vhost for domain2 on port 443 before the one for domain1.org, but Apache seems to choke on that. I hope one of you has an idea how to approach this. TIA.

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup:

    - We have 2x Smart-UPS RT 6000 XL units with network management cards
    - We are running PowerChute from a network server
    - PowerChute is connected to the management cards of both UPSs
    - UPSs are set to do a graceful shutdown via PowerChute when the battery duration is under 20 minutes
    - We also have a command file that runs with PowerChute
    - Although our setup is redundant, we do not have an equal load on each server, due to APC switches for single-power devices

    The problem is that, since we do not have an equal load on each server, the batteries drain at different rates. This means that the UPSs reach the specified low battery duration at completely different times. The problem here is that UPS 1 may have run down to 5 minutes and be in desperate need of initiating a PowerChute shutdown, while UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well!

    What we need to happen is one of two things:

    1. PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low battery duration setting, rather than waiting for both.
    2. The UPS with the heavier load expends its entire battery but does not shut down both UPSs, letting the load be switched across to the UPS that still has runtime remaining. That way, when the UPS that still has runtime reaches its low battery duration, it can proceed with the graceful shutdown via PowerChute.

    Hope that makes sense; any help is greatly appreciated!

  • Using 1and1.com Servers, SMTP Mail is Limited - Local XAMPP Server Works As Expected

    - by nicorellius
    I'm starting to not like 1and1.com that much. I've used them for years, but mainly for simple sites without much need for configuration. I know there are better hosting companies out there, and I may go seeking them.

    The problem here is that on my local XAMPP server (sitting on a network with Comcast as the ISP), I have a PHP script that uses PEAR::Mail to send mail using MIME. The script works fine locally with either smtp.1and1.com or smtp.gmail.com, with the corresponding credentials and using the appropriate ports, etc. 1and1 tells me that I have to change the MX record on the domain where this script runs in order to make this work. This doesn't make sense to me.

    Now, I'm pretty new to all this, but how is it that my local server works just fine, out of the box, but their servers don't? I have asked them these questions, but they are very vague and I cannot get any good answers from them.

    Versions:
    PEAR Version: 1.5.0
    PHP Version: 4.4.9
    Zend Engine Version: 1.3.0

    My apologies in advance for my ignorance. Thanks for the help in advance.
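
    For what it's worth, MX records only control where *inbound* mail for a domain is delivered; they have no effect on a script authenticating to an SMTP relay, so the advice does sound off. The likelier difference is that shared hosts commonly restrict outbound SMTP connections from their web servers to their own relay. A couple of quick checks to run from the 1and1 server (a sketch, assuming shell access; substitute your domain):

        dig MX yourdomain.com +short     # inbound routing only - unrelated to the script
        nc -vz smtp.1and1.com 587        # can the web server even reach their relay?
        nc -vz smtp.gmail.com 587        # external relays are often blocked entirely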

  • CIFS mounted drive setting "stick-bit" on all files, cannot change permissions or modify files

    - by mattmcmanus
    I have a folder mounted on an Ubuntu 8.10 server through cifs whose permissions I simply cannot change once it is mounted. Here is a breakdown of what's going on:

    - All files within the mounted folder automatically have their permissions set to -rwxrwSrwx, regardless of whether the file is created on the Windows server or on the Linux machine.
    - I have the same directory mounted on two other Linux servers (both running 9.10 instead of 8.10) with no problems at all. They all use the same fstab options and the same credentials:

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino 0 0

    - I've run chmod a million different ways, all of which report success; however, the permissions don't actually change.
    - The issue began after I upgraded from 8.04 to 8.10.

    Any idea why this may be happening on one machine? Since it started after an upgrade, I'm not sure what the best thing to do is. Any help you could give would be great - none of my automated backup scripts are working because of this!
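
    One workaround while tracking down the regression: when the server doesn't speak the CIFS UNIX extensions, the client synthesizes the mode bits anyway, and they can simply be pinned at mount time so chmod becomes irrelevant. A hedged sketch of an fstab line (the 0644/0755 modes are illustrative):

        //server/folder /media/backups cifs credentials=/etc/samba/.arcadia_cred,noexec,noserverino,nounix,file_mode=0644,dir_mode=0755 0 0

    file_mode and dir_mode force fixed permissions on every file and directory in the mount, and nounix disables the UNIX extensions outright, which takes the 8.10 client's extension handling (a plausible suspect for the stray S bit) out of the equation.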

  • Enable FTP on OS X 10.8 Mountain Lion Server

    - by Oleg Trakhman
    There is a LAN comprising several Mac machines (iMac, Mac Pro, MacBook etc.), an AirPort Express router and a Mac mini server running OS X Server 10.8 (Mountain Lion Server). I need to share a folder on the Mac mini server by FTP. What did I try so far?

    - Made a special partition for FTP access, called "Reports", so the shared folder is "/Volumes/Reports"
    - Gave access to every user and group in the system, and also enabled guest access
    - Checked the POSIX ACL, which is "rwxrwxrwx"
    - Checked the sharing settings in "Preferences.app" and "Server.app"
    - Checked that users have access to the FTP service
    - Enabled FTP in Server.app

    I tried accessing the shared folder (by FTP) via Cyberduck, via Finder, and via the shell: ftp server.local

    And what I got:

        $ ftp [email protected]
        Trying 10.0.2.2...
        Connected to server.local.
        220 10.0.2.2 FTP server (tnftpd 20100324+GSSAPI) ready.
        331 User ftpuser accepted, provide password.
        Password:
        530 User ftpuser may not use FTP.

    and

        $ ftp [email protected]
        Trying 10.0.2.2...
        Connected to server.local.
        220 10.0.2.2 FTP server (tnftpd 20100324+GSSAPI) ready.
        331 User admin accepted, provide password.
        Password:
        530 User admin denied by SACL.
        ftp: Login failed
        ftp>

    (admin is an administrator account; ftpuser is a special user account made to access FTP)

    What am I doing wrong? Getting really tired of this...
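
    The "denied by SACL" line points at the service access control list rather than filesystem permissions: on OS X Server, FTP access is gated by membership in a group named com.apple.access_ftp. A sketch of inspecting and fixing it from the shell (the group name is the standard one, but verify it exists on your build first):

        # who is currently allowed to use the FTP service?
        dseditgroup -o read com.apple.access_ftp

        # add ftpuser to the FTP service ACL
        sudo dseditgroup -o edit -a ftpuser -t user com.apple.access_ftp

    The per-user service-access settings in Server.app manage the same list; the shell route just makes the state explicit.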

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003 and are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010, and when working with these worksheets I see this warning:

        "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats."

        "Significant loss of functionality"

        "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved."

    When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4").

    This makes no sense to me whatsoever. One, the workbook was created in Excel 2003, so obviously we couldn't have used a feature that doesn't exist. Two, the modifications we're making don't involve changing the validation rules at all. Three, the complaint Excel is making is incorrect: all of the rules are on the same worksheet as their targets.

    As if the story wasn't bizarre enough, I went ahead and saved the worksheet with Excel 2010. I then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched!

    My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue when working in compatibility mode?
