Search Results

Search found 15248 results on 610 pages for 'slashdot configuration'.


  • Flash Backed Write Cache (FBWC) without capacitor pack

    - by Martyn
    I bought an HP Smart Array P410 controller, and it is installed and working fine in an HP ProLiant MicroServer with 4 drives in two RAID 1 arrays. I didn't realise, however, that it came without any cache, so it could only write straight through to the disks, and the performance was horrible. So I then bought the 512MB Flash Backed Write Cache (FBWC) memory module, as I was under the impression that with FBWC I would not need a battery. I got this idea from a forum post:

    "What do you guys think of the choice between 'BBWC' (battery backed write cache) and 'FBWC' (flash backed write cache)? The flash-based ones use non-volatile memory so need no battery."

    After installing the cache module, however, the server pretty much won't boot. The P410 has a flashing amber light on it, and from the manual that doesn't sound good. I've managed to get to the on-board BIOS once, and even managed to boot to the HP Array Configuration Utility (ACU) CD once, but every other time the server continually reboots once it gets to the POST screen and reads ARRAY INITIALIZING %%%. The one time I reached the ACU, it reported a problem with the cache module. To me, it seems like the cache module is faulty. The supplier, however, tells me: "Do you have an FBWC battery pack, p/n 587324-001? That is required for the cache to work. If you have it, please complete an RMA form and we'll send a replacement / credit." Does this sound right to you? I've been ordering the parts from the US; I don't want to spend $77 + $40 p&p on a battery and wait a week for shipping only to find the card is faulty, but I also don't want to send back a working card.
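    If the controller stays up long enough to be queried, HP's command-line tool can report the cache module's status directly. A diagnostic sketch, assuming the hpacucli package is installed (the slot number is an assumption; substitute your controller's actual slot):

        hpacucli ctrl all show config detail
        hpacucli ctrl slot=1 show status

    The detailed output includes a cache status line, which should confirm whether the controller itself considers the module failed.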


  • .htaccess, mod_rewrite Issue

    - by Shoaibi
    What I want:

        - Force www [works]
        - Restrict access to .inc.php [works]
        - Force redirection of abc.php to /abc/
        - Removal of the extension from the URL
        - Add a trailing slash if needed

    Old .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        ### Force www
        RewriteCond %{HTTP_HOST} ^example\.net$
        RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
        ### Restrict access
        RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
        RewriteRule .* - [F,L]
        #### Remove extension:
        RewriteRule ^(.*)/$ /$1.php [L,R=301]
        ######### Trailing slash:
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !(.*)/$
        RewriteRule ^(.*)$ http://www.example.net/$1/ [R=301,L]
        </IfModule>

    New .htaccess:

        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        ### Force www
        RewriteCond %{HTTP_HOST} ^example\.net$
        RewriteRule ^(.*)$ http://www\.example\.net/$1 [L,R=301]
        ### Restrict access
        RewriteCond %{REQUEST_URI} ^/(.*)\.inc\.php$ [NC]
        RewriteRule .* - [F,L]
        #### Remove extension:
        RewriteCond %{REQUEST_FILENAME} \.php$
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule (.*)\.php$ /$1/ [L,R=301]
        #### Map pseudo-directory to PHP file
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule (.*) /$1.php [L]
        ######### Trailing slash:
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteCond %{REQUEST_FILENAME} !/$
        RewriteRule (.*) $1/ [L,R=301]
        </IfModule>

    Error log:

        Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.example.net/

    Rewrite log: http://pastebin.com/x5PKeJHB
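    The loop most likely comes from the "remove extension" block and the "map pseudo-directory" block feeding each other: after /abc/ is internally rewritten to /abc.php, the extension-removal conditions match again and redirect back to /abc/, and so on. A sketch of one common fix (untested against this exact setup) is to key the external redirect off %{THE_REQUEST}, which always reflects the original request line and is unaffected by internal rewrites, and to test against %{DOCUMENT_ROOT} when mapping the pseudo-directory back:

        #### Remove extension, only when the client literally asked for a .php URL:
        RewriteCond %{THE_REQUEST} \s/+([^\s?]+)\.php[\s?]
        RewriteRule ^ /%1/ [L,R=301]

        #### Map pseudo-directory to PHP file, stripping any trailing slash:
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.*?)/?$ /$1.php [L]

    Because the second block rewrites internally without touching THE_REQUEST, the first block no longer fires on the rewritten URL.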


  • OpenSSL error while running punjab

    - by Hunt
    I ran punjab - the BOSH connection manager - using the twistd -y punjab.tac command on my CentOS box, but I am getting the following error:

        Unhandled Error
        Traceback (most recent call last):
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 652, in run
            runApp(config)
          File "/usr/local/lib/python2.7/site-packages/twisted/scripts/twistd.py", line 23, in runApp
            _SomeApplicationRunner(config).run()
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 386, in run
            self.application = self.createOrGetApplication()
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 451, in createOrGetApplication
            application = getApplication(self.config, passphrase)
        --- <exception caught here> ---
          File "/usr/local/lib/python2.7/site-packages/twisted/application/app.py", line 462, in getApplication
            application = service.loadApplication(filename, style, passphrase)
          File "/usr/local/lib/python2.7/site-packages/twisted/application/service.py", line 405, in loadApplication
            application = sob.loadValueFromFile(filename, 'application', passphrase)
          File "/usr/local/lib/python2.7/site-packages/twisted/persisted/sob.py", line 210, in loadValueFromFile
            exec fileObj in d, d
          File "punjab.tac", line 39, in <module>
            '/etc/pki/tls/cert.pem',
          File "/usr/local/lib/python2.7/site-packages/twisted/internet/ssl.py", line 68, in __init__
            self.cacheContext()
          File "/usr/local/lib/python2.7/site-packages/twisted/internet/ssl.py", line 78, in cacheContext
            ctx.use_privatekey_file(self.privateKeyFileName)
        OpenSSL.SSL.Error: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]

        Failed to load application: [('x509 certificate routines', 'X509_check_private_key', 'key values mismatch')]

    The SSL section of my punjab configuration file is:

        sslContext = ssl.DefaultOpenSSLContextFactory(
            '/etc/pki/tls/private/ca.key',
            '/etc/pki/tls/cert.pem',
        )

    How can I resolve this error?
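    The "key values mismatch" error means OpenSSL checked the private key against the certificate and found they do not belong together - note that the file passed as the key is named ca.key, which suggests it may be the CA's key rather than the key cert.pem was issued for. You can verify the pairing with standard OpenSSL commands; the two digests must be identical:

        openssl x509 -noout -modulus -in /etc/pki/tls/cert.pem | openssl md5
        openssl rsa -noout -modulus -in /etc/pki/tls/private/ca.key | openssl md5

    If they differ, point DefaultOpenSSLContextFactory at the private key that was actually used to generate the certificate request for cert.pem.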


  • Error when mounting the database in Exchange 2010 SP1

    - by user64060
    My company has two Exchange 2010 SP1 servers in a DAG configuration, running Windows Server 2008 R2, in a testing environment. Today I wanted to test our backups, so I restored the backup data to a location other than the original one. I dismounted the database, deleted all the files under the database location, and finally copied the files back from the backup location to the database location. When I tried to mount the database, the following error came up:

        Microsoft Exchange Error
        Failed to mount database 'mail2'.
        mail2FailedError: Couldn't mount the database that you specified. Specified database: mail2; Error code: An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn].
        An Active Manager operation failed. Error: The database action failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Database: mail2, Server: mail2.e0594.cn]
        An Active Manager operation failed. Error: Operation failed with message: MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011) [Server: mail2.e0594.cn]
        MapiExceptionCallFailed: Unable to mount database. (hr=0x80004005, ec=1011)

    Any suggestions? Thanks!
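    A common cause when database files are copied back by hand is that the .edb file is left in a dirty-shutdown state, or the log files it needs to replay are missing. A diagnostic sketch using the standard Exchange tools (the paths and the E00 log prefix are assumptions; substitute your database's actual location and log generation prefix):

        REM check the database header; look for "State: Clean Shutdown" vs "Dirty Shutdown"
        eseutil /mh "D:\DB\mail2\mail2.edb"

        REM if it reports Dirty Shutdown, replay the transaction logs (soft recovery)
        eseutil /R E00 /l "D:\DB\mail2" /d "D:\DB\mail2"

    Once the header shows a clean shutdown, try Mount-Database -Identity mail2 again from the Exchange Management Shell.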


  • What is the reason for this DNSSEC validation failure of dnsviz.net?

    - by grifferz
    On trying to resolve dnsviz.net from a host using an Unbound resolver that is configured to use DNSSEC validation, the result is "no servers could be reached":

        $ dig -t soa dnsviz.net

        ; <<>> DiG 9.6-ESV-R4 <<>> -t soa dnsviz.net
        ;; global options: +cmd
        ;; connection timed out; no servers could be reached

    Nothing is logged by Unbound to suggest why this is the case. Here is the /etc/unbound/unbound.conf:

        server:
            verbosity: 1
            interface: 192.168.0.8
            interface: 127.0.0.1
            interface: ::0
            access-control: 0.0.0.0/0 refuse
            access-control: ::0/0 refuse
            access-control: 127.0.0.0/8 allow_snoop
            access-control: 192.168.0.0/16 allow_snoop
            chroot: ""
            auto-trust-anchor-file: "/etc/unbound/root.key"
            val-log-level: 2

        python:

        remote-control:
            control-enable: yes

    If I add:

        module-config: "iterator"

    (thus disabling DNSSEC validation), then I am able to resolve this host normally. The domain and its DNSSEC check out fine according to http://dnscheck.iis.se/, so there must be something wrong with my resolver configuration. What is it, and how do I go about debugging it?
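    A frequent cause of exactly this symptom is not unbound.conf at all, but a middlebox dropping the large or fragmented UDP answers that DNSSEC queries provoke. A debugging sketch (unbound-host and the edns-buffer-size option are standard parts of Unbound; the 1232-byte value is an assumption, chosen to stay below typical fragmentation limits):

        # resolve with Unbound's own library, using your config, with debug output
        unbound-host -C /etc/unbound/unbound.conf -d -v dnsviz.net

        # test whether large DNSSEC answers survive the path at all
        dig +dnssec +bufsize=4096 dnskey dnsviz.net

    If the large query times out while small ones work, try limiting the advertised EDNS buffer in the server: section:

        edns-buffer-size: 1232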


  • Capabilities of business and SOHO routers

    - by Q8Y
    I'm currently studying for the CCNA certifications (especially for Cisco routers and their configuration). I know that business routers provide more features than SOHO routers, and come with more processing power and RAM. Assume I need to connect a number of users through a network (internet access, file and printer sharing, ...). I have a high-speed connection to the internet and I have already applied QoS. How can I find out how many users a single (SOHO) router could handle? In my case I'd attach multiple switches to it until I have the number of ports needed. Would everything work well and smoothly with 50 users? What about 300? At which point would I need a business router instead? If I implemented VLANs here, would it make any difference to the performance? And when do I really need to use more than one router (SOHO or business)? My thinking is that I'd only need additional routers to increase performance (rather than replacing the existing one), or if I have multiple locations - in that situation I'd need multiple routers, right? Put differently: is there any need for another router if my business is all in one place?


  • logrotate: neither rotate nor compress empty files

    - by Andrew Tobey
    I have just set up an (r)syslog server to receive the logs of various clients, which works fine. Only logrotate is still not behaving as intended: I want logrotate to create a new logfile for each day, but only to keep and compress non-empty files. My logrotate config currently looks like this:

        # sample configuration for logrotate being a remote server for multiple clients
        /var/log/syslog {
            rotate 3
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
            postrotate
                reload rsyslog >/dev/null 2>&1 || true
            endscript
        }

        # local, i.e. the system's very own logs: keep logs for a whole month
        /var/log/kern.log
        /var/log/kernel-info
        /var/log/auth.log
        /var/log/auth-info
        /var/log/cron.log
        /var/log/cron-info
        /var/log/daemon.log
        /var/log/daemon-info
        /var/log/mail.log
        /var/log/rsyslog
        /var/log/rsyslog-info
        {
            rotate 31
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
            sharedscripts
            postrotate
                reload rsyslog >/dev/null 2>&1 || true
            endscript
        }

        # received, i.e. logs from the clients
        /var/log/path-to-logs/*/* {
            rotate 31
            daily
            missingok
            notifempty
            delaycompress
            compress
            dateext
            nomail
        }

    What I end up with is some sort of "summarized" files such as filename-datestampDay-Day and corresponding .gz files. What I also have are empty files, which eventually get zipped. Is the notifempty directive in fact responsible for these DayX-DayY files, covering days on which nothing was logged? And what would be an efficient way to drop both the empty log files and their .gz files, so that I only keep logs/compressed files that truly contain data?
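    On the pruning question: logrotate itself only skips empty files at rotation time (notifempty); it never deletes rotated files that turned out empty. One way is a cleanup hook in the client-logs block - a sketch, assuming an empty gzipped file stays under ~40 bytes (an assumption; check against your archive before adding -delete):

        lastaction
            # drop rotated-but-empty plain files
            find /var/log/path-to-logs -type f -empty -delete
            # drop .gz archives that contain no data
            find /var/log/path-to-logs -type f -name '*.gz' -size -40c -delete
        endscript

    lastaction/endscript runs once after all logs matching the block's pattern have been handled, so the sweep happens at most once per rotation.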


  • How can I change the binding order of network adapters in Windows 7?

    - by Chris Farmer
    The end goal here is that I am trying to install an Oracle 10g server on my Windows 7 x64 dev box. I use DHCP, and the Oracle installer is throwing up this warning:

        Checking Network Configuration requirements ...
        Check complete. The overall result of this check is: Failed <<<<

        Problem: The install has detected that the primary IP address of the system is DHCP-assigned.
        Recommendation: Oracle supports installations on systems with DHCP-assigned IP addresses; However, before you can do this, you must configure the Microsoft LoopBack Adapter to be the primary network adapter on the system. See the Installation Guide for more details on installing the software on systems configured with DHCP.

    I have installed the loopback adapter, but I am not sure how to make it the primary network adapter. I see this Microsoft KB article on the subject, but it's Windows XP-oriented, and I can't seem to find a comparable one for Windows 7. Some of the options it talks about don't seem to be present in the views of the adapters that I see. So, how can I make the loopback adapter become the primary adapter?
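    The XP-era "Adapters and Bindings" dialog still exists in Windows 7; it is just hidden. In Network Connections, press Alt to reveal the menu bar, then choose Advanced -> Advanced Settings and move the loopback adapter to the top of the connection list. Alternatively, the ordering can be nudged with interface metrics - a sketch from an elevated prompt (the adapter name is an assumption; substitute whatever your loopback connection is called):

        netsh interface ipv4 set interface "Local Area Connection 2" metric=1

    A lower metric makes Windows prefer that interface, which is usually enough for installers that only look at the "primary" adapter.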


  • Sending mails via Mutt and Gmail: Duplicates

    - by Chris
    I'm trying to set up Mutt with Gmail for the first time. It seems to work pretty well; however, when I send a mail from Mutt, it appears twice in Gmail's Sent folder. (I assume it's also sent twice - I'm trying to verify that.) My configuration (stripped of coloring):

        # A basic .muttrc for use with Gmail
        # Change the following six lines to match your Gmail account details
        set imap_user = "XX"
        set smtp_url = "[email protected]@smtp.gmail.com:587/"
        set from = "XX"
        set realname = "XX"

        # Change the following line to a different editor you prefer.
        set editor = "vim"

        # Basic config, you can leave this as is
        set folder = "imaps://imap.gmail.com:993"
        set spoolfile = "+INBOX"
        set imap_check_subscribed
        set hostname = gmail.com
        set mail_check = 120
        set timeout = 300
        set imap_keepalive = 300
        set postponed = "+[Gmail]/Drafts"
        set record = "+[Gmail]/Sent Mail"
        set header_cache=~/.mutt/cache/headers
        set message_cachedir=~/.mutt/cache/bodies
        set certificate_file=~/.mutt/certificates
        set move = no
        set include
        set sort = 'threads'
        set sort_aux = 'reverse-last-date-received'
        set auto_tag = yes
        hdr_order Date From To Cc
        auto_view text/html
        bind editor <Tab> complete-query
        bind editor ^T complete
        bind editor <space> noop

        # Gmail-style keyboard shortcuts
        macro index,pager y "<enter-command>unset trash\n <delete-message>" "Gmail archive message"
        macro index,pager d "<enter-command>set trash=\"imaps://imap.googlemail.com/[Gmail]/Bin\"\n <delete-message>" "Gmail delete message"
        macro index,pager gl "<change-folder>"
        macro index,pager gi "<change-folder>=INBOX<enter>" "Go to inbox"
        macro index,pager ga "<change-folder>=[Gmail]/All Mail<enter>" "Go to all mail"
        macro index,pager gs "<change-folder>=[Gmail]/Starred<enter>" "Go to starred messages"
        macro index,pager gd "<change-folder>=[Gmail]/Drafts<enter>" "Go to drafts"
        macro index,pager gt "<change-folder>=[Gmail]/Sent Mail<enter>" "Go to sent mail"

        # Don't prompt on exit
        set quit=yes

        ## =================
        # Color definitions
        ## =================
        set pgp_autosign
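    The duplicate is most likely not a double send: Gmail's SMTP server automatically places a copy of every message it accepts into [Gmail]/Sent Mail, and the config above additionally tells Mutt to upload its own copy via set record. The usual fix is to let Gmail keep the only copy:

        # Gmail already saves outgoing mail; stop mutt from saving a second copy
        set record = ""

    (unset record achieves the same thing.)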


  • Password-less login into localhost

    - by Brad
    I am trying to set up password-less login into my localhost because it's required for a tutorial. I went through the normal steps of generating an RSA key and appending the public key to authorized_keys, but I am still prompted for a password. I've also enabled RSAAuthentication and PubKeyAuthentication in /etc/ssh_config. Following other suggestions I've seen, I tried:

        chmod go-w ~/
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    But the problem persists. Here is the output from ssh -v localhost:

        (tutorial)bnels21-2:tutorial bnels21$ ssh -v localhost
        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to localhost [::1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/bnels21/.ssh/id_rsa type 1
        debug1: identity file /Users/bnels21/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9
        debug1: match: OpenSSH_5.9 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 1c:31:0e:56:93:45:dc:f0:77:6c:bd:90:27:3b:c6:43
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/bnels21/.ssh/known_hosts:11
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/bnels21/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering RSA public key: id_rsa3
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/bnels21/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:

    Any suggestions? I'm running OS X 10.8.
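    Two things stand out in the debug output. First, /etc/ssh_config only configures the ssh client; the server reads sshd_config (on OS X 10.8 that would be /etc/sshd_config), so PubkeyAuthentication needs to be enabled there. Second, the server refuses both offered keys (id_rsa and id_rsa3), which suggests the entry in authorized_keys does not match the private key being offered. A sketch for regenerating the entry from the private key actually in use (standard OpenSSH tools):

        # derive the public key from the private key and append it
        ssh-keygen -y -f ~/.ssh/id_rsa >> ~/.ssh/authorized_keys
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # confirm the server allows public-key auth
        grep -Ei 'pubkeyauthentication|authorizedkeysfile' /etc/sshd_config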


  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as:

        Information on the machine which a web server is located is sometimes included in the header of a web page. Under certain circumstances that information may include local information from behind a firewall or proxy server such as the local IP address.

    It looks like Nginx is responding with:

        Service: https
        Received: HTTP/1.1 302 Found
        Cache-Control: no-cache
        Content-Type: text/html; charset=utf-8
        Location: http://ip-10-194-73-254/
        Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
        X-Runtime: 0
        Content-Length: 90
        Connection: Close

        <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I've gathered the problem is that the Location header is returning http://ip-10-194-73-254/, which is a private address, when it should be returning our domain name (which is ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer and not a server admin, so I have no idea what to do... Any help would be greatly appreciated! Also, might I add that we're running more than one server, so the configuration would need to be transferable to any server with any private address.
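    The Rack/Rails application builds that redirect from whatever Host header the request carried, so a scan that connects by IP (or by the internal ip-10-194-73-254 name) gets that name echoed back. One transferable approach is a catch-all server block, so requests that don't use your real domain never reach the app - a sketch, assuming ravn.com is the only legitimate name:

        # default vhost for requests addressed by IP or unknown Host headers
        server {
            listen 80 default_server;
            server_name _;
            return 301 http://ravn.com$request_uri;
            # or simply close the connection instead: return 444;
        }

    Since the block never hardcodes the machine's own address, the same snippet can be dropped onto every server.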


  • Reusing slot numbers in Linux software RAID arrays

    - by thkala
    When a hard disk drive in one of my Linux machines failed, I took the opportunity to migrate from RAID5 to a 6-disk software RAID6 array. At the time of the migration I did not have all 6 drives - more specifically, the fourth and fifth (slots 3 and 4) drives were already in use in the originating array - so I created the RAID6 array with a couple of missing devices. I now need to add those drives in those empty slots. Using mdadm --add does result in a proper RAID6 configuration, with one glitch: the new drives are placed in new slots, which results in this /proc/mdstat snippet:

        ...
        md0 : active raid6 sde1[7] sdd1[6] sda1[0] sdf1[5] sdc1[2] sdb1[1]
              25185536 blocks super 1.0 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
        ...

    mdadm -E verifies that the actual slot numbers in the device superblocks are correct, yet the numbers shown in /proc/mdstat are still weird. I would like to fix this glitch, both to satisfy my inner perfectionist and to avoid any potential source of future confusion in a crisis. Is there a way to specify which slot a new device should occupy in a RAID array?

    UPDATE: I have verified that the slot number persists in the component device superblock. For the version 1.0 superblocks that I am using, that would be the dev_number field as defined in include/linux/raid/md_p.h of the Linux kernel source. I am now considering direct modification of said field to change the slot number - I don't suppose there is some standard way to manipulate the RAID superblock?
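    One thing worth confirming before editing superblocks: the bracketed numbers in /proc/mdstat are mdadm's internal device numbers, while the on-disk role is what actually determines data layout. If the roles are right, the odd mdstat numbering is purely cosmetic. A check with standard mdadm commands:

        # the RaidDevice column shows the role (slot) of each member
        mdadm --detail /dev/md0

        # per-member view of the role stored in the superblock
        mdadm --examine /dev/sde1

    If --detail lists RaidDevice 0 through 5 correctly, the array will behave correctly in a crisis regardless of what /proc/mdstat prints.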


  • Conditionally set an Apache environment variable

    - by Tom McCarthy
    I would like to conditionally set the value of an Apache2 environment variable and assign a default value if none of the conditions is met. This example is a simplification of what I'm trying to do, but in effect: if the subdomain portion of the host name is hr, finance, or marketing, I want to set an environment variable named REQUEST_TYPE to 2, 3, or 4 respectively. Otherwise it should be 1. I tried the following configuration in httpd.conf:

        <VirtualHost *:80>
            ServerName foo.com
            ServerAlias *.foo.com
            DocumentRoot /var/www/html
            SetEnv REQUEST_TYPE 1
            SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2
            SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3
            SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
        </VirtualHost>

    However, the variable is always assigned a value of 1. The only way I have so far been able to get it to work is to replace:

        SetEnv REQUEST_TYPE 1

    with a regular expression containing a negative lookahead:

        SetEnvIfNoCase Host ^(?!hr.|finance.|marketing.) REQUEST_TYPE=1

    Is there a better way to assign the default value of 1? As I add more subdomain conditions, the regular expression could get ugly. Also, if I want to allow another request attribute to affect REQUEST_TYPE (e.g. if Remote_Addr = 192.168.1.[100-150] then REQUEST_TYPE = 5), then my current method of assigning a default value (i.e. using the regular expression with a negative lookahead) probably won't work.
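    The reason the default always wins is evaluation order: SetEnv belongs to mod_env and runs late (during the fixup phase), while SetEnvIf runs much earlier, so SetEnv REQUEST_TYPE 1 silently overwrites whatever SetEnvIf set, regardless of where it appears in the file. Within mod_setenvif itself, directives run in configuration order, so a catch-all first line makes a clean default. A sketch (the Remote_Addr rule illustrates the hypothetical 192.168.1.100-150 case):

        # default: matches every request; later SetEnvIf lines override it
        SetEnvIfNoCase Host ^ REQUEST_TYPE=1
        SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2
        SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3
        SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
        SetEnvIf Remote_Addr ^192\.168\.1\.(1[0-4][0-9]|150)$ REQUEST_TYPE=5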


  • Slow RDP after server joins domain

    - by Chris Grove
    We're having RDP issues with Amazon cloud servers that we recently joined to an Active Directory domain. The setup is:

        - A local office network
        - A virtual private cloud in Amazon
        - An IPSec tunnel between the two networks
        - A number of Windows 2008 R2 servers on both networks
        - An AD domain (call it abc.net), with one domain controller in each network; the domain controllers are both new, fresh installs

    Before we had the domain set up, we had local accounts on the cloud computers which were used for RDP access. Our idea was to get all of the servers onto the domain so we could use domain logins instead of per-server local logins. Before the cloud servers were in the domain, RDP (from the office network or through a VPN to the cloud) worked great. After we joined the cloud servers to the domain, RDP from the office became very slow - a few minutes to log in, long frequent pauses when the interface is unresponsive, generally just a slow and frustrating experience. This is a problem regardless of whether a domain or local login is used for RDP. Oddly, when outside of the office network and connecting to the cloud directly with the VPN, RDP is still very responsive. Any idea why RDP from office to cloud is suddenly very slow after the cloud servers join the domain? What can I look at in our configuration to address this? Any help is greatly appreciated.
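    Symptoms that appear only after a domain join usually point at name resolution or at the DC locator choosing a controller on the far side of the tunnel. A few standard diagnostics, run on an affected cloud server and on an office client (a sketch; substitute real host names):

        :: which DC did this machine bind to, and which AD site does it think it is in?
        nltest /dsgetdc:abc.net
        nltest /dsgetsite

        :: does the name used in the RDP client resolve to the tunnel/VPC address?
        nslookup cloudserver.abc.net

    If the cloud servers are binding to the office DC (or vice versa), defining the two subnets under separate sites in Active Directory Sites and Services, so each network prefers its local DC, is the usual remedy.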


  • samba access from win98

    - by SimonSalman
    The admin installed a new file server at our institute: OpenSuse 11.1 with Samba 3.2.7-11.3.2-2154-SUSE-CODE11. They copied the smb.conf from the old machine (running Samba 3.0.0) to the new one. Everything works as before, except that one Windows 98 machine can see but not access the file server. It prompts for user authentication but will not accept any user-password combination. There is a lot of discussion of this problem on the net, but none of it provides a clear answer.

    EDIT:

        1. I changed the Win98 registry to enable plain-text passwords, and alternatively changed the server's smb.conf and /etc/smbpasswd to accept encrypted passwords.
        2. I also provided a profile on the Win98 machine with a user-password combination matching one of the Samba users.
        3. I changed smb.conf so that the Samba server is the Local Master Browser.

    None of these changes is necessary when using the older Samba server, so I conclude that a configuration problem on the server side is likely. If you need any further information, I will post it here. Best regards, Simon
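    One server-side change between those versions is plausible here: newer Samba releases turned off the old LanManager authentication that Windows 9x clients rely on, which 3.0.0 still accepted by default. A hedged smb.conf sketch to test (these are real Samba options; whether they are the culprit on this particular box is an assumption):

        [global]
            # Win9x clients only speak LanMan/NTLMv1
            encrypt passwords = yes
            lanman auth = yes
            ntlm auth = yes

    After changing them, restart Samba and retry from the Win98 machine before loosening anything else.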


  • Issues connecting to HP ProCurve switches

    - by BriGuy
    We are having a very strange issue trying to connect to our infrastructure switches via SSH. When you first try connecting to them, the switches will prompt for the password - and then just sit there after it is entered. If you create a second SSH session to the switch (while letting the first one remain open, just sitting there), it will let you log right in. The switches behave the same way with both RADIUS and local authentication. The other strange part to all of this is that about 10 switches started doing it at the same time. As far as the actual configuration of the switches goes, nothing has changed. Occasionally, one switch will start working normally, but then stop again. These are all HP ProCurve managed switches, but all different models/firmware. Some switches that are not working are using the same firmware as others that are working.

    UPDATE (2013-03-12): I am also seeing this same behavior when trying to use telnet. The first telnet session just hangs there, and the second telnet session will let me log in. Rebooting the switches seems to get them working, but I still have 5 production switches that cannot easily be rebooted because of their production roles. Is anyone aware of anything else that can be switched on/off that may reset the logon for remote management, or something like that?


  • Having two IP Routes/Gateways of last Resort on an HP Switch

    - by SteadH
    We have an HP Layer 3 switch that is doing IP routing between VLANs. The general setup is that the switch has an IP address on each VLAN and IP routing is enabled. On our servers' VLAN, we have a firewall that has a connection to the outside world. To set an IP route on the HP switch, we use the command ip route 0.0.0.0 0.0.0.0 192.168.2.1, where 192.168.2.1 is the address of our firewall and the zeros essentially mean "route all traffic the switch doesn't otherwise know what to do with to the firewall as a gateway". We're in the middle of an ISP and firewall change. I set up the new firewall and ran ip route 0.0.0.0 0.0.0.0 192.168.2.254 (the address of the new firewall). Things started working nicely. When I reviewed the configuration of the switch, though, I noticed that it had not replaced the previous ip route command, but just added another route. Now, I know how to remove the old firewall route (no ip route 0.0.0.0 0.0.0.0 192.168.2.1), but what is the effect of having these two 0.0.0.0 routes? Is it switch implosion? Will a server just respond back over the route it receives the request from? I've read elsewhere that having two default gateways is an impossibility by definition, but I'm curious about this situation that our switch allowed. Thanks!
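    To see what the switch actually installed, the routing table can be inspected directly (a ProCurve-style command; output format varies by model):

        show ip route

    If both 0.0.0.0/0 entries appear with equal metrics, the switch holds two candidate default gateways and traffic may leave via either one; nothing implodes, but routing becomes unpredictable until the stale entry is removed with the no ip route form above.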


  • Remote access to phpmyadmin from computer belongs to same LAN

    - by Charles
    OK... I solved it. The cause was that I had not configured httpd.conf to make the CentOS box listen on ports 80 and 8080:

        Listen 80
        Listen 8080

    I set up phpMyAdmin on my CentOS 6.4 box recently. I can access and log in to phpMyAdmin on localhost. However, when I type http://[hostipaddr]/phpmyadmin on my other computer in the same LAN as the CentOS box, the browser simply cannot access the page. Below is some of the current configuration. Can anyone help?

    config.inc.php:

        $i++;
        /* Authentication type */
        $cfg['Servers'][$i]['auth_type'] = 'http';
        /* Server parameters */
        $cfg['Servers'][$i]['host'] = 'localhost';
        $cfg['Servers'][$i]['connect_type'] = 'tcp';
        $cfg['Servers'][$i]['compress'] = false;
        /* Select mysql if your server does not have mysqli */
        $cfg['Servers'][$i]['extension'] = 'mysql';
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    phpmyadmin.conf:

        <Directory /var/www/html/phpmyadmin/>
            order allow,deny
            allow from all
        </Directory>

    Furthermore, I can access web pages stored on the CentOS box from my other computer without problems. After using Wireshark and tcpdump, I found that the server keeps resetting the connection (192.168.1.106 is my other computer, 192.168.1.101 is the CentOS box):

        23:29:42.281473 IP 192.168.1.106.55999 > 192.168.1.101.webcache: Flags [S], seq 2559409090, win 65535, options [mss 1460,nop,wscale 8,nop,nop,sackOK], length 0
        23:29:42.281504 IP 192.168.1.101.webcache > 192.168.1.106.55999: Flags [R.], seq 0, ack 2559409091, win 0, length 0

    I have disabled the iptables service on the CentOS box already.
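    For anyone verifying the same fix: the RST responses in the capture are to port 8080 (tcpdump's "webcache"), which matches Apache simply not listening there. After adding the Listen directives and restarting httpd, a quick check with standard tools confirms both ports are bound:

        netstat -tlnp | grep httpd
        # or from the other machine:
        curl -I http://192.168.1.101:8080/phpmyadmin/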


  • Windows Explorer and UAC: run elevated

    - by syneticon-dj
    I am profoundly annoyed by UAC and switch it off for my admin user wherever I can. Yet there are situations where I can't - especially on machines not under my continuous administration. In those cases, I am always faced with the task of traversing directories, using my administrative user via Windows Explorer, where regular users do not have "read" permissions. The two possible approaches to this problem so far:

        1. Change the ACLs on the directory in question to include my user (Windows conveniently offers the Continue button in the "You don't currently have permissions to access this folder" dialog). This obviously sucks, since more often than not I do not want to change ACLs, just look into the folder's contents.
        2. Use an elevated cmd.exe prompt along with a bunch of command-line utilities - this usually takes a lot of time when browsing through large and/or complex directory structures.

    What I would love to see is a way to run Windows Explorer in elevated mode. I have yet to find out how to do so. Other suggestions solving this problem in an unobtrusive way, without changing the entire system's configuration (and preferably without the need for downloading or installing anything), are very welcome too. I have seen this post with a suggestion for altering HKCR - interesting, but it changes the behavior for all users, which I am not allowed to do in most situations. Also, some folks have suggested using UNC paths to access the folders - unfortunately this does not work when accessing the same machine (i.e. \\localhost\c$\path), as the "Administrators" group membership is still stripped from the token, and re-authentication (and thus the creation of a new token) does not happen when accessing localhost.


  • How to manage unprivileged administration of system services using Debian?

    - by ypnos
    At our lab, we have several services handled by different PhD students (like myself). Turnover is high, and people do the job next to their research duties. Until now, services were running on different machines with different OS setups, which can quickly turn into administration hell. We want to consolidate our service setup. Our main idea is that the people responsible for the services should not have to meddle with the underlying system anymore. Apart from core systems like NFS and Kerberos, a typical service is able to run as non-root already. I'm talking about Apache, MySQL, Subversion, mail with Open-Xchange, and so on. Redirecting privileged ports is also no issue (source). What is left is the configuration of the service and its payload. One scenario we envisioned is that every service has its own user and home directory, accessible to the corresponding admins. Backup and fallback of the service would be easy, as everything needed for the service to run is found in one place. Are there established ways to create such a setup (see the sketch below)? Is there a reasonably standard method to make services find their files somewhere other than the system directories while still using the corresponding Debian packages? Are there any catches in our idea that we may have overlooked? Would you maybe claim that virtualization is the answer to our problem? (In our view, it wouldn't help us keep the system setup strictly separated from the service setup.) Thank you for any advice!
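    A minimal sketch of the per-service-user idea using stock Debian tools (the service name svn-svc and the paths are made up for illustration; authbind is a separate package):

        # dedicated system user whose home directory holds the service's config and data
        adduser --system --group --home /srv/svn-svc svn-svc

        # let that unprivileged user bind a privileged port without root
        apt-get install authbind
        touch /etc/authbind/byport/80
        chown svn-svc /etc/authbind/byport/80
        chmod 500 /etc/authbind/byport/80

    The service is then started as that user (e.g. via authbind --deep under su or a sudoers rule), so the students administering it never need system-level rights, and backing it up means archiving /srv/svn-svc.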


  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

        - A PC running Windows XP with an internal WiFi adapter
        - A base station with both WiFi and a wireless modem (WiModem)
        - A mobile device with both WiFi and WiModem

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth stuff like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel takes up double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes through the secondary IP address, so that return traffic from the mobile also goes through the WiModem (see the route sketch below). Here's what that looks like:

        PC
            WiFi 1: 192.168.2.10/24
            WiFi 2: 192.168.3.10/24
            Default route: 192.168.2.1

        Base Station
            WiFi 1: 192.168.2.1/24
            WiFi 2: 192.168.3.1/24
            WiModem: 192.168.4.1/24

        Mobile
            WiFi: 192.168.3.20/24
            WiModem: 192.168.4.20/24

    We'd like to move to having the base station automatically configure the mobile and PC, as the manual setup becomes problematic once you have multiple mobiles and PCs. This means that the PC can only have one IP address and needs to be treated as being pretty simple. Is it possible to have a setup driven by DHCP on the base station that is efficient with bandwidth?
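    For reference, the "return traffic must use the WiModem" trick in the manual setup can be made explicit on the XP machine with a persistent static route (a sketch using the addresses above; the PC's default route via 192.168.2.1 already behaves this way implicitly):

        route -p add 192.168.4.0 mask 255.255.255.0 192.168.2.1

    Because packets to the mobile's WiModem address then leave from the PC's 192.168.2.10 address, which the mobile can only reach back through the modem, replies cannot shortcut over the shared WiFi subnet.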


  • Backup data from RAID 1 disk out of its server

    - by Doomsday
    I'm facing what is, in my opinion, a pretty easy problem. I've extracted a working disk from a RAID 1 array, and I'm looking to copy only the data (the FS and RAID configuration don't matter) to another location (another FS). My problem is that I'm not able to mount this disk properly on another Linux system. I first looked at the partition table:

        # fdisk -l /dev/sdc

        Disk /dev/sdc: 640.1 GB, 640135028736 bytes
        255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1              63  1249535699   624767818+  fd  Linux raid autodetect
        /dev/sdc2      1249535700  1250017649      240975   fd  Linux raid autodetect
        /dev/sdc3      1250017650  1250258624      120487+  82  Linux swap / Solaris

    I understood I should use the mdraid tools. Once they were installed:

        # cat /proc/mdstat
        Personalities :
        md0 : inactive sdc1[1](S)
              624767744 blocks

        unused devices: <none>

    And some other information:

        # mdadm --examine /dev/sdc1
        /dev/sdc1:
                  Magic : a92b4efc
                Version : 0.90.00
                   UUID : 8f292f54:7e5aef72:7e5ab5fd:b348fd05
          Creation Time : Mon Jun  2 03:39:41 2008
             Raid Level : raid1
          Used Dev Size : 624767744 (595.82 GiB 639.76 GB)
             Array Size : 624767744 (595.82 GiB 639.76 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0

            Update Time : Tue Feb  7 22:34:59 2012
                  State : clean
         Active Devices : 2
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 0
               Checksum : a505b324 - correct
                 Events : 15148

              Number   Major   Minor   RaidDevice State
        this     1       8        1        1      active sync   /dev/sda1

           0     0       8       17        0      active sync   /dev/sdb1
           1     1       8        1        1      active sync   /dev/sda1

    From here, I tried to mount it, but I'm not comfortable with the md tools and how they work:

        # mount /dev/sdc1 /mnt/sdc1
        mount: unknown filesystem type 'linux_raid_member'
        # mount /dev/md0 /mnt/sdc1
        mount: /dev/md0: can't read superblock

    I've seen some options to alter the RAID array with mdadm, but I only want to copy the data off its filesystem before wiping it... Does anyone have a clue?
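    The mdstat output explains why both mounts fail: the array was detected but left inactive, with the member flagged as a spare (S), so /dev/md0 has no readable superblock and /dev/sdc1 itself is typed linux_raid_member. One standard, read-only way out with mdadm (device names as in the question):

        # stop the half-assembled array, then force-start it degraded from the single member
        mdadm --stop /dev/md0
        mdadm --assemble --run /dev/md0 /dev/sdc1

        # mount read-only and copy the data off
        mount -o ro /dev/md0 /mnt/sdc1

    --run tells mdadm to start the array even though its mirror partner is missing; mounting read-only keeps the member unchanged in case anything goes wrong.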


  • Windows Not Honoring DHCP Scope

    - by jerhinesmith
    Please bear with me, as I'm not a networking person by trade. Our current configuration at work includes two Windows servers acting as DHCP/Active Directory servers (if that makes sense) - one replicating from the other. On both machines, the DNS server list is set up as:

        1. Main Windows box (10.* address)
        2. Public IP address (for Verizon)
        3. Public IP address (secondary Verizon)
        4. Secondary Windows box (10.* address)

    Assuming our domain is foo.com, we maintain the foo.com website on a hosted VPS with its own IP address. The problem is that even though bar.foo.com is an internal server and is defined in DNS on the primary Windows machine, when I ping bar or even bar.foo.com, it resolves to the hosted IP address instead of the 10.* address. I tried taking both of the public IP addresses out of the DHCP scope, and that seemed to work, but it completely slowed down access to external sites, so that wasn't acceptable. I also tried adding the two Windows machines as the DNS servers on my desktop. That too worked, but I'd rather not have everyone enter their DNS servers by hand, as the above setup should theoretically be working. Is there anything I could check to see why pinging bar.foo.com isn't resolving to the DNS entry on the Windows machines? Here's a summary of the ping results, if they help:

        - Pinging from servers with a static IP: bar.foo.com resolves to the correct IP address
        - Pinging from Linux machines not joined to the domain: bar.foo.com resolves to the correct IP address
        - Pinging from users' desktop machines, joined to the domain but with dynamic IPs: bar.foo.com resolves to the incorrect IP address

    This is driving me crazy!
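    Windows clients do not treat the DNS list as a strict priority order for long: when a server is slow or unreachable, they fail over and keep using whichever server responded, so clients handed the public Verizon resolvers via DHCP will sometimes get the hosted address for bar.foo.com. You can see which resolver answers what by querying each one explicitly (a sketch; substitute the actual addresses):

        nslookup bar.foo.com <internal-dc-address>
        nslookup bar.foo.com <verizon-resolver-address>

    The usual configuration is to hand out only the internal DNS servers in the DHCP scope (option 006) and let those servers reach the internet through forwarders, which preserves external resolution speed without bypassing the internal zone.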


  • Display stretches 4:3 ratios; Adds scrolling to other ratios

    - by Matt
    I have a dual-monitor setup. Normally, both displays run at 1680x1050, and they have been set up this way for about a year. I'm using Windows XP Professional x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time; in fact, all I had done was close a window (maybe a browser). But the resolution is still partially preserved, in that the screen scrolls when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine. I tried uninstalling/reinstalling the drivers, to no avail. System Restore doesn't help either. I'm unsure of the exact ATI card I'm using; Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself... it forces you to install another driver, which breaks both of my displays, forcing me to go into Safe Mode and run System Restore again. Any ideas? Thanks

    EDIT: After playing around more, I discovered that the "scrolling" behavior is only present for aspect ratios that are not 4:3. For 4:3 ratios, it just stretches out to fit the wide screen. My monitor's native ratio is 16:9... what could be causing it to think it needs to scroll?


  • Trouble getting started with the STEALTH monitoring package

    - by dlanced
    Is anyone here familiar with the Linux-based STEALTH package (for monitoring the FS integrity of client systems)? I'm trying to get started with a very simple configuration, but I'm running into trouble (this is running under Ubuntu 14.04):

        Config line `USE BASE/root/stealth/10.0.0.79' invalid
        STEALTH (2.11.02) started at Fri, 30 May 2014 15:25:00 +0000
        Program terminated due to non-zero exit value for
        -type f -exec /usr/bin/sha1sum {} \; (EOC Fri May 30 15:25:00 2014 127)

    Stealth is creating a binary tmp file in the Stealth server root and generating a "report" file in the start directory, but not much else. Regarding the "USE BASE...invalid" error, and just to be sure, I manually created the directories in /root, but it didn't help. And, by the way, I am running stealth with sudo. Everything seems to be configured correctly: I'm able to ssh into root@client from the stealth machine without a password. Here's my "policy" file (I've removed the email directives just for simplicity):

        DEFINE SSHCMD /usr/bin/ssh [email protected] -T -q exec /bin/bash --noprofile
        DEFINE EXECSHA1 -xdev -perm +u+s,g+s ( -user root -or -group root ) \
            -type f -exec /usr/bin/sha1sum {} \;

        USE BASE/root/stealth/10.0.0.79
        USE SSH ${SSHCMD}
        USE DD /bin/dd
        USE DIFF /usr/bin/diff
        USE PIDFILE /var/run/stealth-
        USE REPORT report
        USE SH /bin/sh

        GET /usr/bin/sha1sum /root/tmp

        LABEL \nchecking the client's /usr/bin/find program
        CHECK LOG = remote/binfind /usr/bin/sha1sum /usr/bin/find

        LABEL \nsuid/sgid/executable files uid or gid root on the / partition
        CHECK LOG = remote/setuidgid /usr/bin/find / ${EXECSHA1}

        LABEL \nconfiguration files under /etc
        CHECK LOG = remote/etcfiles \
            /usr/bin/find /etc -type f -not -perm /6111 \
            -not -regex "/etc/(adjtime\|mtab)" \
            -exec /usr/bin/sha1sum {} \;

    Any ideas? Thanks,
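    Two observations, both hedged guesses from the output rather than a tested fix. First, the rejected config line appears to be missing a space, so the parser sees BASE/root/... as a single token; as reproduced in the policy above, the line should presumably read:

        USE BASE /root/stealth/10.0.0.79

    Second, the exit value 127 at the end of the run is the shell's "command not found" code, and the fragment it reports (-type f -exec /usr/bin/sha1sum {} \;) is exactly the continuation line of DEFINE EXECSHA1 - which suggests the backslash continuation is not being honored and the second line is being executed on its own. Joining that DEFINE onto one line would test this theory.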

