Search Results

Search found 12283 results on 492 pages for 'tcp port'.

  • How do I install OpenStack on a single Ubuntu 12.04 node?

    - by Sam Edwards
    I'm having trouble installing OpenStack on a single Ubuntu 12.04 node, for several reasons: The official Ubuntu website recommends Juju and MAAS. However, this is a single node I am trying to get OpenStack installed on, and MAAS requires "two or more nodes" according to the docs. Additionally, I don't have any experience with MAAS and Juju and would rather stick to technologies I am more familiar with, so that I can debug problems as they arise. I have tried StackGeek, but this fails because the node has only a single Ethernet port. (The node does, however, have the second hard drive required for the Nova storage.) I have tried DevStack, but I cannot log into the dashboard: the login form appears fine, but as soon as I submit the page, my browser begins loading indefinitely. I have tried installing straight from packages, but I get an Internal Server Error in the dashboard upon trying to log in, with no helpful logs anywhere in sight to aid me in debugging the issue. Each of these attempts was with a fresh Ubuntu 12.04 LTS setup; I find it really strange that no matter what I try, I cannot get OpenStack installed. Is this even a stable/mature project? Why am I encountering so many bugs?
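
    For the DevStack route, the installer is driven by a localrc file in the devstack checkout; a minimal single-node sketch looks like the following (every value below is a placeholder, not something from the question):

      # localrc -- minimal single-node DevStack sketch; all values are placeholders
      HOST_IP=192.168.1.50
      ADMIN_PASSWORD=secret
      MYSQL_PASSWORD=secret
      RABBIT_PASSWORD=secret
      SERVICE_PASSWORD=secret
      SERVICE_TOKEN=token
      # then run ./stack.sh and watch its output for the first failure

    A hung dashboard login is often easier to diagnose from the stack.sh output and the web server logs on the node than from the browser side.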

  • XAMPP - Apache service stops running after a few seconds

    - by Fábio Antunes
    Hello. I have a big problem with my XAMPP server: the Apache service stops running a few seconds after it has been started, I have no idea what the problem is, and the error logs don't say much about it:

      [Fri May 07 01:09:32 2010] [notice] Digest: generating secret for digest authentication ...
      [Fri May 07 01:09:32 2010] [notice] Digest: done
      [Fri May 07 01:09:33 2010] [notice] Apache/2.2.14 (Win32) DAV/2 mod_ssl/2.2.14 OpenSSL/0.9.8l mod_autoindex_color PHP/5.3.1 mod_apreq2-20090110/2.7.1 mod_perl/2.0.4 Perl/v5.10.1 configured -- resuming normal operations
      [Fri May 07 01:09:33 2010] [notice] Server built: Nov 11 2009 14:29:03
      [Fri May 07 01:09:33 2010] [crit] (22)Invalid argument: Parent: Failed to create the child process.
      [Fri May 07 01:09:33 2010] [crit] (OS 6)O identificador é inválido. : master_main: create child process failed. Exiting.
      [Fri May 07 01:09:33 2010] [notice] Parent: Forcing termination of child process 36

    ("O identificador é inválido" (pt_PT) = "The handle is invalid", which is Windows error 6.) Note: no other application is using the Apache port. I have made some changes to the httpd.conf file (added some virtual hosts, enabled Xdebug), but it has worked well with them for a long time. Has this happened to anyone who could tell me what the problem is? Thanks for your time.
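
    A useful first step (a sketch, assuming the default XAMPP layout under C:\xampp) is to run Apache from a console, so startup errors print to the screen instead of disappearing behind the service wrapper, and to syntax-check the modified httpd.conf:

      cd C:\xampp\apache\bin
      REM syntax-check the edited httpd.conf
      httpd.exe -t
      REM run Apache in the foreground with verbose startup logging
      httpd.exe -e debug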

  • Best Firewall product for a hosting/housing environment?

    - by Raffael Luthiger
    I am searching for a firewall product (appliance or software) for a hosting/housing environment. The biggest problem is that the rules get very complex as more customers sit behind the firewall. Some have only one server, others have a whole subnet. Some need NAT, some a VPN endpoint. Some customers want to allow only HTTP, others SSH as well. So the device needs to support VLANs, and it should be possible to group the rules per customer. Speed is another important point, as is being able to manage redundant devices easily. I am searching for something that doesn't have all the extras like spam filtering etc. I searched a lot on the net, but the candidates either had all those extras as well (and, with them, an overloaded configuration interface) or were missing features I need (e.g. VLANs). The VPN endpoint is not an important criterion; we were thinking about a separate machine for it.

  • Ubuntu 10.04: Unable to Start RabbitMQ Server Post-Installation

    - by Garland W. Binns
    After installing RabbitMQ on Ubuntu 10.04 I receive a failure message saying the service was unable to start. Any insight into the issue would be greatly appreciated! Below are the contents of startup_log and startup_err.

    startup_log:

      {error_logger,{{2012,7,7},{15,50,31}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,etimedout}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]}
      {error_logger,{{2012,7,7},{15,50,31}},crash_report,[[{initial_call,{net_kernel,init,['Argument__1']}},{pid,<0.20.0>},{registered_name,[]},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{ancestors,[net_sup,kernel_sup,<0.9.0>]},{messages,[]},{links,[#Port<0.100>,<0.17.0>]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,24},{reductions,512}],[]]}
      {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[[rabbitmqprelaunch877,shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]}
      {error_logger,{{2012,7,7},{15,50,31}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]}
      {error_logger,{{2012,7,7},{15,50,31}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]}
      {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"}

    startup_err:

      Crash dump was written to: erl_crash.dump
      Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
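
    The {badmatch,{error,etimedout}} coming from inet_tcp_dist,listen means the Erlang node could not bring up distribution, which is very often a hostname-resolution problem rather than anything RabbitMQ-specific. A quick check (a sketch, not a guaranteed fix):

      hostname -s                       # the short name the Erlang node will register
      grep "$(hostname -s)" /etc/hosts  # should print a line; if it doesn't, add one, e.g.:
      # 127.0.1.1   myhost              # hypothetical entry -- use the real hostname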

  • Cachefilesd (cachefiles): everything seems to be set up, but still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with a network folder shared over NFS. I seemingly have everything set up, and cachefilesd starts normally, but caching isn't functioning. Here is the output of the commands, run in this order:

      1. sudo mount
      ...
      cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
      ...

      2. lsmod | grep cachefiles
      cachefiles 40555 1
      fscache 57430 4 nfs,cifs,cachefiles,nfsv4

      3. [edited - deleted]

      4. uname -r
      3.8.0-34-generic

      5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic
      CONFIG_NFS_FSCACHE=y

      6. lsb_release -a
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description: Ubuntu 13.04
      Release: 13.04
      Codename: raring

      7. sudo service cachefilesd restart
      * Restarting FilesCache daemon cachefilesd [ OK ]

      8. dmesg
      [6211206.141781] FS-Cache: Withdrawing cache "mycache"
      [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
      [6211210.135242] CacheFiles: File cache on sdb1 registered
      [6214644.348929] CacheFiles: File cache on sdb1 unregistering
      [6214644.348935] FS-Cache: Withdrawing cache "mycache"
      [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
      [6214654.575915] CacheFiles: File cache on sdb1 registered

      9. ps aux | grep cachefilesd
      root 65399 0.0 0.0 4460 540 ? Ss 23:14 0:00 /sbin/cachefilesd
      1000 65464 0.0 0.0 8160 916 pts/0 S+ 23:16 0:00 grep --color=auto cachefilesd

    Finally, the biggest problem:

      10. cat /proc/fs/nfsfs/volumes
      NV SERVER PORT DEV FSID FSC
      v3 64476645 801 0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option plus cachefilesd don't seem to work.
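
    Since /proc/fs/nfsfs/volumes still shows FSC = no, one experiment worth trying (a sketch; the paths are taken from the mount output above) is a clean unmount and an explicit remount with the fsc option, then re-checking the flag:

      sudo umount /mnt/nfsshare
      sudo mount -t nfs -o vers=3,fsc cache-1:/mnt/datashared /mnt/nfsshare
      cat /proc/fs/nfsfs/volumes   # the FSC column should read "yes" once fscache is engaged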

  • Router slowing my connection?

    - by Roberto
    I have a Linksys WRT54G and I pay for a 12Mbps connection. I've been testing my connection with speedtest.net for many days and always get 8Mbps. I called support and they told me to bypass the router and test. I did, and got 16Mbps (much more than I pay for), so I thought "this guy just changed my speed so he could blame my router", and blame it he did. But to my surprise, every time I bypass the router I get 16Mbps, and when I use the router I get 8Mbps. Is this guy trolling me somehow (configuring the VOIP-modem thing to different profiles depending on the MAC address connecting to it), or is my router a POS? How can I find out? I don't know exactly what the router connects to; it's a kind of VOIP adapter (the link is this one, but unfortunately I don't think you'll understand it because it's in Portuguese). I know they can connect to it remotely; that's the origin of my conspiracy theory :) I just tested wired to the router and got 10Mbps (and still 8Mbps on wifi and 16Mbps without the router) O_o I'm 5cm away from my router, so no obstacles to interfere, right?

    UPDATE: It's a WRT54G V8 on firmware v8.00.7 (I will install 8.00.8 tomorrow, but I saw that it's only a minor fix for a UPnP denial-of-service vulnerability). Results: iperf LAN-LAN: 80Mbps; iperf LAN-WLAN: 19Mbps (therefore we can ignore wireless issues/settings). I wasn't able to do the (W)LAN-WAN NAT-enabled test with iperf; I get a connection refused error. I'm not sure I did it right: I ran iperf in server mode, configured the router to forward that port to my IP, and tried to connect to the internet IP I got from this site. I don't think there is a way to disable NAT using this firmware. Question: let's suppose it's an underpowered-hardware issue. Is it right to assume that a custom firmware could resolve the issue, since it is possibly better implemented and would make better use of the router's resources? I couldn't find any references pointing to wired performance improvements from custom firmware.
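
    For the NAT throughput test, the usual pattern (a sketch, assuming classic iperf2 and that TCP port 5001 is forwarded on the router to the LAN host) is:

      iperf -s -p 5001                        # on the LAN host behind the router
      iperf -c YOUR_PUBLIC_IP -p 5001 -t 30   # from a host outside the LAN

    "Connection refused" when testing from inside the LAN against the public IP is expected on many routers, since they don't support NAT loopback; the client really has to sit outside the LAN for this test.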

  • SCVMM 2008 R2 problems migrating VM from VS2005 to Hyper-V host

    - by Scott Ivey
    I have System Center Virtual Machine Manager 2008 R2 installed, with a Hyper-V R2 host and a Virtual Server 2005 host. I'm trying to migrate my machines from the VS2005 host to the Hyper-V host, and keep getting the following error:

      VMM is unable to complete the requested file transfer. The connection to the HTTP server myserver.mydomain.local could not be established. (Unknown error (0x80072efd))
      Recommended Action: Ensure that the HTTP service and/or the agent on the machine myserver.mydomain.local are installed and running and that a firewall is not blocking HTTPS traffic.

    (Note: migrations between Hyper-V hosts managed by the VMM server work fine; my problem is just going from VS2005 to Hyper-V hosts.) I have no firewalls turned on on either of the servers, and no firewalls in the middle. I've looked all over for answers to this problem and am getting nowhere. All the articles I find when searching talk about either V2V or P2V, and I'm just trying to do a straight VM migration. I've tried rebooting the boxes, changing the BITS SSL port number, restarting services, triple-checking firewalls, etc. Does anyone have any good suggestions as to how I can resolve this problem?

  • How to get html/css/jpg pages served by both apache & tomcat with mod_jk

    - by user53864
    I have apache2 and tomcat6 both running on port 80 with mod_jk set up on Ubuntu servers. I had to set up a 503 error document (ErrorDocument 503 /maintenance.html) in the Apache configuration, and I managed to get it to work: the error page is served by Apache when Tomcat is stopped. Developers created a good-looking error page (an html page which pulls in css and jpg), and I'm asked to get this page served by Apache when Tomcat is down. When I tried JkUnMount /*.css in the virtual host, the actual Tomcat jsp pages didn't work properly (they lost their formatting), as the Tomcat applications use jsp, css, js, jpg and so on. So I'm trying to see whether .css and .jpg can be served by both Apache and Tomcat, so that when Tomcat is down the css and jpg are served by Apache and the proper error document appears. Does anyone have a technique? Here is my apache2 configuration (/etc/apache2/apache2.conf):

      Alias / /var/www/
      ErrorDocument 503 /maintenance.html
      ErrorDocument 404 /maintenance.html
      JkMount / myworker
      JkMount /* myworker
      JkMount /*.jsp myworker
      JkUnMount /*.html myworker

      <VirtualHost *:80>
        ServerName station1.mydomain.com
        DocumentRoot /usr/share/tomcat/webapps/myapps1
        JkMount /* myworker
        JkUnMount /*.html myworker
      </VirtualHost>

      <VirtualHost *:80>
        ServerName station2.mydomain.com
        DocumentRoot /usr/share/tomcat/webapps/myapps2
        JkMount /* myworker
        JkMount /*.html myworker
      </VirtualHost>
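
    One approach worth sketching (an assumption, not a tested fix): since each DocumentRoot already points at the Tomcat webapp directory, Apache can serve the very same static files from disk if they are unmounted from the worker, so both servers can deliver them:

      <VirtualHost *:80>
        ServerName station1.mydomain.com
        DocumentRoot /usr/share/tomcat/webapps/myapps1
        JkMount /* myworker
        JkUnMount /*.html myworker
        # hypothetical additions: Apache serves these from DocumentRoot
        JkUnMount /*.css myworker
        JkUnMount /*.js myworker
        JkUnMount /*.jpg myworker
      </VirtualHost>

    If pages lost their formatting before, it may be because the global "Alias / /var/www/" maps those URLs away from the webapp directory, so checking that the alias doesn't shadow the vhost paths is part of the experiment.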

  • How do multiple displays work on an AMD 785G / ATI HD 4200 motherboard?

    - by aireq
    I just ordered an ASUS M4A785TD-V EVO, which has the AMD 785G chipset and HD4200 integrated graphics. The board has VGA, DVI, and HDMI outputs. I'm wondering how many outputs I can run at once, and from which connectors. My guess is that I can use the VGA plus either the DVI or the HDMI in a dual setup, but not the HDMI and the DVI at the same time. Is this correct? If I have devices plugged into both the HDMI and the DVI ports, is there a way to choose which port I want to use? I have a dual 19" monitor setup as well as an LCD TV. I'd like to run the VGA and the DVI to my two monitors, and then the HDMI to my TV. Then, when I want to watch something on the TV, I'd like to be able to switch over from the DVI to the HDMI. Is this possible without crawling under my desk and unplugging/plugging things in? Update: I found the following in the manual on ASUS's website, which confirms my original suspicion that HDMI and DVI can't be used at the same time. But I'd still like to know whether it's possible to switch between the HDMI and DVI through software.

  • 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have an IIS FTP server set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer, pulling log files from S1 to S2. The FTP client we're using is the built-in FTP.exe of Windows 2000 on S2. Recently we replaced S1 with a new server, but we kept the IP address; there are multiple IP addresses on the new S1. Ever since the new S1 was in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely:

      mget access*.log
      200 Type set to A.
      200 PORT command successful.
      150 Opening ASCII mode data connection for access02232010.log(205777167 bytes).
      426 Connection closed; transfer aborted.
      ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec.

    The firewall monitor suggested that the connection was set up in passive mode; however, I've been told that MS FTP.exe doesn't support passive mode, though I can see the 'entering passive mode' response from the server when typing 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to enable active mode on the client side. It's getting really frustrating. I hope someone here with the right knowledge/experience can shed some light. Cheers.

  • NRPE unable to read output, but why?

    - by ticktockhouse
    I have this problem with NRPE; all the stuff I've found so far on the net seems to point me at things I've already tried.

      # /usr/local/nagios/plugins/check_nrpe -H nrpeclient

    gives NRPE v2.12 as expected. Running the command by hand, as defined in nrpe.cfg on "nrpeclient", gives the expected response. From nrpe.cfg:

      command[check_openmanage]=/usr/lib/nagios/plugins/additional/check_openmanage -s -e -b ctrl_driver=0 bat_charge
      "Expected response"

    But if I try to run the command from the Nagios server I get the following:

      # /usr/local/nagios/plugins/check_nrpe -H comxps -c check_openmanage
      NRPE: Unable to read output

    Can anyone think of anywhere else I might have made a mistake with this? I've done the same thing on multiple other servers with no problem. The only difference I can think of is that this box is RHEL 5 based, whereas the others are RHEL 4 based. The two bits above that I've tested are what most people seem to suggest when others have had this problem. I should mention that I get a weird error in the logs when I restart nrpe:

      nrpe[14534]: Unable to open config file '/usr/local/nagios/etc/nrpe.cfg' for reading
      nrpe[14534]: Continuing with errors...
      nrpe[14535]: Starting up daemon
      nrpe[14535]: Warning: Daemon is configured to accept command arguments from clients!
      nrpe[14535]: Listening for connections on port 5666
      nrpe[14535]: Allowing connections from: bodbck,combck,nam-bck

    even though it's plainly reading that /usr/local/nagios/etc/nrpe.cfg file to get the stuff it's reporting further down.
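
    One RHEL 5 vs RHEL 4 difference worth ruling out (an assumption on my part, not a confirmed cause) is SELinux, which on RHEL 5 commonly blocks NRPE from reading its config or executing plugins even when file permissions look right:

      getenforce                              # Enforcing / Permissive / Disabled
      ls -lZ /usr/local/nagios/etc/nrpe.cfg   # inspect the SELinux context
      sudo setenforce 0                       # temporarily permissive, then re-test
      grep -i nrpe /var/log/audit/audit.log   # look for denials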

  • Tomcat 6 Windows Server 64 Redirect Connector Fails

    - by Rafe
    So is there some problem with running the Tomcat connectors under a 64-bit Windows OS? Here's my configuration:

      Windows Server 2003 64 bit, Intel Xeon
      Tomcat 6.0.26
      JVM 1.6.0 (64 bit)
      ISAPI Redirect Connector 1.2.30.0 (64 bit)

    Calling the IP address of the site with :8080 brings up the Tomcat page, so I know it's running, and the examples all work, so it's obviously not having a problem with the JVM. Calling the site IP on port 80, however, gives me error 324; the application log on Windows shows "Could not load all ISAPI filters for site/service. Therefore startup aborted". The ISAPI filter page under the web site properties shows the status of this filter as down, with a red arrow. The ISAPI filter name is jakarta, and there is a corresponding virtual directory set up in the root of the site pointing to the same directory as the filter. The jakarta web service extension also points to the required dll (c:\program files\apache software foundation\jakarta isapi redirector\bin\isapi_redirect.dll). Incidentally, this same problem occurs when trying to use Tomcat 5.5. I've also tried swapping out various redirect versions. It's really odd, because I got it to work once with a version of the redirector that came with Plesk, but I've since uninstalled everything to do with Plesk, and even trying to use the Plesk-compiled dll doesn't work now. I am pulling my hair out on this; any ideas?
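
    When the filter shows a red arrow, the redirector usually failed to initialize before IIS could finish loading it. One thing to double-check (a sketch with assumed paths) is the isapi_redirect.properties file that can sit next to the DLL, since a bad path in it kills the filter at startup, and its log_file records the actual reason:

      # isapi_redirect.properties -- lives beside isapi_redirect.dll (all paths assumed)
      extension_uri=/jakarta/isapi_redirect.dll
      log_file=c:\logs\isapi_redirect.log
      log_level=debug
      worker_file=c:\tomcat\conf\workers.properties
      worker_mount_file=c:\tomcat\conf\uriworkermap.properties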

  • Apache, Tomcat 5 and problem with HTTP basic auth

    - by Juha Syrjälä
    I have set up Tomcat with a webapp that uses HTTP basic auth in some of its URLs. There is an Apache server in front of the Tomcat. I have set up Apache as a proxy like this (all traffic should go directly to Tomcat), in /etc/httpd/conf.d/proxy_ajp.conf:

      LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
      ProxyPass / ajp://localhost:8009/
      ProxyPassReverse / ajp://localhost:8009/

    There is a webapp installed at the root of Tomcat (ROOT.war), so I should be able to use http://localhost/ to access my webapp. But it is not working with HTTP basic auth: everything works until I try to access a URL that is protected by basic auth. URLs without authentication work just fine. When accessing a protected URL via Apache I get an error message from Apache; if I access the same URL directly from Tomcat, everything works just fine. I am getting this in the Apache error log:

      [Wed Sep 01 21:34:01 2010] [error] proxy: dialog to [::1]:8009 (localhost) failed

    and the access log looks like this:

      ::1 - - [01/Sep/2010:21:34:01 +0300] "GET /protected_path/ HTTP/1.0" 503 360 "-" "w3m/0.5.2"

    I am using: Fedora release 13 (Goddard), httpd-2.2.16-1.fc13.x86_64, tomcat5-5.5.27-7.4.fc12.noarch. The basic auth is implemented in the webapp (not in Apache or Tomcat); the webapp is actually implemented in Scala/Lift, but that shouldn't matter. The auth works if I access Tomcat directly. The error message I am getting from Apache is below; it is curious that the title is Unauthorized and not Internal Error:

      Unauthorized
      The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
      Apache/2.2.16 (Fedora) Server at my.server.name.com Port 80

    It could be that Apache is seeing something other than a 200 OK response and thinks it is an error, when it actually should pass the received 401 Unauthorized response directly to the browser. If this is the problem, how do I fix it?
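
    The [::1]:8009 in the error log shows Apache resolving localhost to the IPv6 loopback, while Tomcat's AJP connector may only be listening on IPv4. A simple experiment (a sketch, not a confirmed fix) is to pin the proxy to 127.0.0.1:

      ProxyPass / ajp://127.0.0.1:8009/
      ProxyPassReverse / ajp://127.0.0.1:8009/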

  • Unable to access Windows share

    - by mbnoimi
    I've installed Alfresco 4.2.d under Ubuntu 12.04 LTS. Everything went fine, except that I can't access it from a Windows share, although I got the link from Alfresco Explorer, which is:

      file:///%5C%5CECSA%5CAlfresco%5CSites%5Cswsdp%5CdocumentLibrary%5CAgency%20Files%5CImages%5Ccoins.JPG

    I tried to access it from \\ECSA but that failed too, so I pinged the server (192.168.0.70 is the server IP):

      C:\Users\user>ping 192.168.0.70
      Pinging 192.168.0.70 with 32 bytes of data:
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Ping statistics for 192.168.0.70:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

      C:\Users\user>ping ECSA
      Ping request could not find host ECSA. Please check the name and try again.

    Some logs of what's going on:

      C:\Users\user>net view ECSA
      System error 1707 has occurred.
      The network address is invalid.

      C:\Users\user>nbtstat -a 192.168.0.70
      Local Area Connection:
      Node IpAddress: [192.168.0.84] Scope Id: []
      NetBIOS Remote Machine Name Table
      Name        Type     Status
      ---------------------------------------------
      ECSA        <20>     UNIQUE   Registered
      ECSA        <00>     UNIQUE   Registered
      WORKGROUP   <00>     GROUP    Registered
      MAC Address = 00-00-00-00-00-00

    CIFS server configuration in file-servers.properties:

      ### CIFS Server Configuration - file-servers.properties ###
      cifs.enabled=true
      cifs.serverName=${localname}A
      cifs.domain=
      cifs.broadcast=255.255.255.255
      cifs.bindto=192.168.0.70
      cifs.ipv6.enabled=false
      cifs.hostannounce=true
      cifs.disableNIO=false
      cifs.disableNativeCode=false
      cifs.sessionTimeout=900
      cifs.maximumVirtualCircuitsPerSession=16
      cifs.tcpipSMB.port=445
      cifs.netBIOSSMB.sessionPort=139
      cifs.netBIOSSMB.namePort=137
      cifs.netBIOSSMB.datagramPort=138
      cifs.WINS.autoDetectEnabled=true
      cifs.WINS.primary=192.168.0.70
      cifs.WINS.secondary=192.168.0.1
      cifs.sessionDebug=
      cifs.pseudoFiles.enabled=true
      cifs.pseudoFiles.explorerURL.enabled=true
      cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
      cifs.pseudoFiles.shareURL.enabled=false
      cifs.pseudoFiles.shareURL.fileName=__Share.url

    How can I fix this issue?
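
    Since ping by IP works but the name ECSA doesn't resolve, this looks like NetBIOS name resolution rather than the CIFS server itself. Two quick experiments (a sketch; the name mapping below is an assumption):

      net view \\192.168.0.70
      net use Z: \\192.168.0.70\Alfresco
      REM or hard-map the name in C:\Windows\System32\drivers\etc\lmhosts:
      REM 192.168.0.70    ECSA    #PRE

    If the share works by IP, the fix is purely on the name-resolution side (lmhosts, DNS, or WINS), not in file-servers.properties.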

  • nginx + wordpress in /wordpress subdir

    - by nkr1pt
    I installed nginx and would like to set up WordPress as a final step. I followed many howtos but am unable to get it working. The setup is fairly straightforward: the root dir of the webserver is /data/Sites/nkr1pt.homelinux.net. In that root dir I created a symlink to the wordpress folder in /usr/local/wordpress, so in fact all wordpress files can be accessed at /data/Sites/nkr1pt.homelinux.net/wordpress. Permissions are OK. The plan is to get wordpress working at http://sirius/wordpress; the server's name is sirius. spawn-fcgi is running and listening on port 7777. Here is the relevant config:

      server {
          listen 80;
          listen 8080;
          server_name sirius;
          root /data/Sites/nkr1pt.homelinux.net;
          passenger_enabled on;
          passenger_base_uri /redmine;
          #charset koi8-r;
          #access_log logs/access.log main;

          location ^~ /data {
              root /data/Sites/nkr1pt.homelinux.net;
              autoindex on;
              auth_basic "Restricted";
              auth_basic_user_file htpasswd;
          }

          location ^~ /dump {
              root /data/Sites/nkr1pt.homelinux.net;
              autoindex on;
          }

          location ^~ /wordpress {
              try_files $uri $uri/ /wordpress/index.php;
          }

          # pass the PHP scripts to FastCGI server listening on 127.0.0.1:7777
          location ~ \.php$ {
              #fastcgi_split_path_info ^(/wordpress)(/.*)$;
              fastcgi_pass localhost:7777;
              #fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              include fastcgi_params;
              #index index.php;
          }
      }

    Please note that redmine and the locations dump and data are working perfectly; it is only wordpress that I cannot get to work. Can you please help me to the correct wordpress configuration in nginx? All help is much appreciated!
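
    One concrete issue in the config above: the ^~ modifier on "location ^~ /wordpress" suppresses regex matching, so requests for /wordpress/index.php never reach the \.php$ block and PHP ends up served as a plain file. A sketch of the usual subdirectory shape (the ?$args part is what WordPress permalinks need; $request_filename also resolves the symlink):

      location /wordpress {
          try_files $uri $uri/ /wordpress/index.php?$args;
      }

      location ~ \.php$ {
          include fastcgi_params;
          fastcgi_param SCRIPT_FILENAME $request_filename;
          fastcgi_pass 127.0.0.1:7777;
      }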

  • When I log on to my company desktop, I log on to a domain. How is this domain name installed?

    - by learnerforever
    Hi. When I work on my machine in the company, I have noticed that I log on to a domain (named after the company) and not really to that computer. From what I understand, this has a few advantages, the primary one being that I need just one password for the domain and can work from any of the machines in the company. My questions are: What software on the desktop/network has to be installed so that the desktop recognizes and gives me the option of logging into a domain? I would guess that software can be installed on the desktop and configured there with the address of the company's domain server, which handles authentication. Is this correct? This takes me to another question: how is software installed on the end machines in a company? Going to each machine physically and installing it looks very unwieldy from an administrator's point of view. An obvious solution would be to install software (and updates) over the network. My questions on this are: What protocols and keywords come into the picture when an administrator installs the OS, software, and updates from his administrator machine onto end machines through the network? Thanks.
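
    On Windows, the "domain" piece is Active Directory: the machine is joined to the domain once, after which the built-in logon screen offers domain accounts and the domain controller handles authentication. The join can be done from the GUI or scripted; a sketch with a hypothetical domain name:

      REM join this machine to a domain (CORP.EXAMPLE.COM and the account are placeholders)
      netdom join %COMPUTERNAME% /domain:CORP.EXAMPLE.COM /userd:CORP\admin /passwordd:*

    For the deployment half of the question, common keywords to search are Group Policy software installation (MSI packages), WSUS for updates, and PXE network boot for OS installs.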

  • 2 Server FC SAN Configuration

    - by BSte
    I have 2 identical servers:

    - 48GB RAM
    - 8 GigE NICs
    - 2 FC NICs
    - 2x72GB RAID1 hard drives
    - Server 2008 R2 host

    I also have a Fibre Channel SAN:

    - 16x146GB RAID10 hard drives
    - 2 dual-port FC controllers (controllers A and B both have ports 1 and 2)
    - Server 1 has fiber to ports A1 and B1
    - Server 2 has fiber to ports A2 and B2
    - I kept the default config with 1 virtual disk and 1 volume
    - The default mappings show ports A1, A2, B1, B2 on LUN 0 with read-write

    My goal is: 2 VMs with IIS and guest-level failover; 2 VMs with SQL 2008 Enterprise using a single DB and guest-level failover; and 1 VM as an application server, preferably with host failover (from what I read, this will also need AD for clustering to work). I need at least 1 VM always running for IIS and the SQL DB, including through hardware failover and application maintenance (e.g. rebooting a VM for critical updates). I was told I could install the VMs and run them from the SAN, and this is what I've tried: installed MPIO and Hyper-V on servers 1 and 2; added the SAN as disk E: on both servers, made it GPT and formatted it NTFS; configured Hyper-V on both servers to use E:\VD and E:\VHD. On server 1, I was able to install 3 VMs on the SAN and all worked well. On server 2, I would start installing the other 2 VMs, but at some point the VMs would always get a corrupt .VHD message (on either server). Everything I found about the message typically related to antivirus, so I removed all antivirus on both host servers (now only running 2008 R2). I reformatted drive E: (SAN), recreated the VHD and VD directories, installed 3 VMs on server 1, and then had the same issue when installing VMs on server 2. Obviously something is wrong, but I'm not certain what exactly. My questions: 1) Are my goals possible with this hardware setup? I've read that 2008 R2 supports FC SANs, but a lot of articles seem to give examples only with iSCSI setups. 2) What would be the suggested route for setting up the SAN (disks, volumes, LUNs)? I've worked with Hyper-V on a single machine before and never had issues; actual experience working with SANs and clustering is new to me. Any suggestions or recommendations to get me in the right direction would be much appreciated.

  • Looking for a cheap wireless router with a USB port for an attached disk drive

    - by geoffc
    I have an 802.11b router at home (DLink DI-614+, B rev) and it is working perfectly well for me. I want to replace it, though, since it is out of updates and now several years old, and heck, I want a new toy to configure! I was trying to decide what to get. I couldn't care less about 802.11g or n support, since B is fast enough, but every device in my house is now B/G, so G would be fine for me. N buys me little to nothing (the house is small enough that range is a non-issue). The feature I realized I want is a USB port for sharing a USB hard drive. I would like a central device I can store files on, and I do not want to waste the power of an always-running PC to do it, so a router seems like the place to go. I would love it if it could support Vonage VOIP as well; then I could ditch the power brick of a second device (I have the small DLink Vonage VOIP box). All the current examples of this kind of router (with a USB drive port; I have yet to find one with VOIP too!) are in the $100+ range and are N models, which seems silly when a B/G is in the $30 range around here.

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This creates an insane number of lines in /var/adm/wtmpx and eventually makes it so big (2.5G+) that we can no longer run the last command, and the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this:

      /var/adm/wtmpx: Value too large for defined data type

    I have found a way to clean this accounting log with a cron job (using /usr/lib/acct/fwtmp), and it works; that is not the issue. I was wondering if there is a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while but couldn't find an answer. NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time, so suggesting that we change it is not pertinent to this question. If you want to read more about the SiteScope monitoring we employ for those servers, see its documentation here and look for Port Monitor and Connecting to remote UNIX servers, which explain how it works.

  • OpenVPN via DD-WRT

    - by user140491
    I am using DD-WRT with my Buffalo G300NH. I notice this in my log files:

      Wed Oct 10 01:08:25 2012 us=343000 Cannot open /tmp/openvpn/dh.pem for DH parameters: error:02001003:system library:fopen:No such process: error:2006D080:BIO routines:BIO_new_file:no such file

    I have looked at other answers regarding this error and tried them, to no avail. Permissions on /tmp/openvpn are 755. At this point I cannot connect from outside my LAN via OpenVPN. My server config looks like this:

      #mode server
      #tls-server
      push "route 192.168.11.1 255.255.255.0"
      push "dhcp-option DNS 10.8.0.1"
      server 10.8.0.0 255.255.255.0
      port 1194
      proto udp
      dev tun0
      ifconfig 10.8.0.1 10.8.0.2
      #secret /tmp/static.key
      ca /tmp/openvpn/ca.crt
      cert /tmp/openvpn/cert.pem
      key /tmp/openvpn/key.pem
      dh /tmp/openvpn/dh.pem
      keepalive 10 120
      comp-lzo
      persist-key
      persist-tun
      verb 5
      management localhost 5001

    Can someone knowledgeable about this error kindly help? I have been at it for several days trying to sort it out. I like all-nighters, though!!
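
    The error is OpenSSL failing to open the file at all, which usually just means /tmp/openvpn/dh.pem doesn't exist; note also that on DD-WRT /tmp is typically a RAM filesystem that is emptied at reboot, so anything placed there has to be regenerated or restored at startup (an assumption worth verifying on this build). Generating the DH parameters is one command:

      # create DH parameters at the path the config expects (1024-bit as a sketch)
      openssl dhparam -out /tmp/openvpn/dh.pem 1024

    If the router lacks an openssl binary, the file can be generated on a PC and copied over; storing the key material under /jffs (if enabled) is a common DD-WRT way to survive reboots.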

  • How to write files in a specific order?

    - by Bernie
    Okay, here's a weird problem. My wife just bought a 2014 Nissan Altima, so I took her iTunes library and converted the .m4a files to .mp3, since the car audio system only supports .mp3 and .wma. So far so good. Then I copied the files to a FAT-32 formatted USB thumb drive and connected the drive to the car's USB port, only to find all of the tracks were out of sequence. All tracks begin with a two-digit numeric prefix, i.e., 01, 02, 03, etc., so you would think they would be in order. So I called Nissan Connect support, and the rep told me that there is a known problem with reading files in the correct order: the files are read in the same order they are written. So I manually copied a few albums with the tracks in a predetermined order, and sure enough he was correct. Next I copied about 6 albums for testing, changed to the top-level directory, and did a find . > music.txt. Then I passed this file to rsync like this:

      rsync -av --files-from=music.txt . ../Marys\ Music\ Sequenced/

    The files looked like they were copied in order, but when I listed the files by modification time, they were in the same sequence as the original files:

      ../Marys Music Sequenced/Air Supply/Air Supply Greatest Hits
      ls -1rt
      01 Lost In Love.mp3
      04 Every Woman In The World.mp3
      03 Chances.mp3
      02 All Out Of Love.mp3
      06 Here I Am (Just When I Thought I Was Over You).mp3
      05 The One That You Love.mp3
      08 I Want To Give It All.mp3
      07 Sweet Dreams.mp3
      11 Young Love.mp3

    So the question is: how can I copy the files listed in music.txt to a destination and ensure they are written, and time-stamped, in the same sequence as they are listed?
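
    A sketch of one approach: rsync -a preserves the source modification times, which is why the listing above mirrors the originals. Copying one file at a time in list order with cp creates both the directory entries and the mtimes in sequence, since cp stamps each file as it is written (paths below are assumptions based on the question):

      #!/bin/sh
      # copy the files in the exact order music.txt lists them (a sketch)
      while IFS= read -r f; do
          [ -f "$f" ] || continue                  # skip the directory entries find emits
          mkdir -p "../Marys Music Sequenced/$(dirname "$f")"
          cp "$f" "../Marys Music Sequenced/$f"    # cp sets a fresh mtime at write time
      done < music.txt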

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up, and when that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB), but I'm a bit unsure how best to make the transition. For a start, I don't have an external monitor; if I need one I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. One solution would be to buy two FW800 2.5" cases, boot from the 64 GB SSD, and clone it to the 128 GB SSD. Seems a bit excessive, but it might be my best option. I do have more than one SATA port on the server, but they're all currently in use by storage drives. Those don't mount by default, so I could unplug them, have just the two SSDs attached, and do the whole thing via SSH. This is another option I'm considering. My main concern with either is how best to make sure everything goes across; I want a carbon copy of the first drive on the second. This is especially important because I have a ZFS volume (my storage) and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the new SSD, but that seems like extra trouble I don't need. So any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04; the iMac has 10.8.1.
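
    For the two-SSDs-on-SATA route, a raw block copy is the simplest "carbon copy" (a sketch; the device names are assumptions, so verify them with lsblk first, and ideally run this from a rescue/live boot so the source filesystem isn't changing underneath the copy):

      sudo dd if=/dev/sda of=/dev/sdb bs=1M conv=noerror,sync   # old SSD -> new SSD (names assumed)
      # afterwards, grow the last partition and filesystem to claim the extra space

    If the ZFS pool lives entirely on the storage drives, it is untouched by the boot-disk clone; zpool import should find it unchanged after booting from the new disk.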

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security; I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily, but I don't think this handles security updates only. CentOS does not support yum-plugin-security at all. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something that can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can: 1) automatically (programmatically, via cron) inform us if there are known vulnerabilities in the currently installed RPMs, and 2) optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the command line? I have considered using yum-plugin-changelog to print out the changelog for each package and then parse the output for certain strings. Are there any tools which do this already?
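
    Where the security metadata does exist (RHEL proper, or Fedora 16+ as noted above), the plugin makes the cron side trivial; a sketch of a daily job, assuming yum-plugin-security is installed and mailx is available:

      #!/bin/sh
      # /etc/cron.daily/security-check (a sketch)
      out=$(yum --security check-update 2>&1)
      # yum check-update exits 100 when updates are available
      [ $? -eq 100 ] && echo "$out" | mail -s "security updates on $(hostname)" root

    On stock CentOS this reports nothing useful, since the repos don't carry the updateinfo metadata, which is exactly the gap the question describes.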

  • Vista ICS issue

    - by Bill Grey
    I have a strange problem with Internet Connection Sharing on a laptop running Vista Business. This laptop is connected to the internet via the ethernet port, which goes to an ADSL modem; it is automatically assigned the IP address 192.168.1.50, and the modem/gateway is 192.168.1.1. My friend's laptop is running Vista Home. Previously, I would create an ad hoc wireless network, enable ICS, and everything would be perfect: my friend would have internet access via this. However, something has now mysteriously broken. If I enable ICS on the wireless connection, it resets my Local Area Connection, assigning it the manual IP address 192.168.0.1, which means my connection to the internet is destroyed. Both wireless adapters on the ad hoc network are assigned autoconfiguration addresses in the 169.254 range. They can see each other fine, but my friend's laptop cannot access the internet via mine, even after I have restored the Local Area Connection settings. I understand the computer with ICS enabled must have the IP 192.168.0.1, but previously, before whatever went wrong, my wireless adapter would be 192.168.0.1 and my friend's computer would get an IP via DHCP. I have also tried setting static IP addresses and making a bridge, none of which works. How can I fix this problem and prevent enabling ICS from touching my Local Area Connection? Both machines have no firewall, have appropriate settings, etc...

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when: a backup is performed (it hangs on a specific table); selecting * or count(1) from that specific table; or viewing data related to the table (FKs, etc). We can browse the table to a certain point (using IBExpert), but after about 2900 records the machine just spikes and hangs. Performing a gfix -m does not work, and validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:

      Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)   {which is apparently just a warning}
      Unable to complete network request to host "MHPLZA1". Error reading data from the connection. INET/inet_error: read errno = 10054
      SERVER/process_packet: broken port, server exiting
      Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)   {if we leave the backup to run while hanging, it eventually logs this error message}

    The setup: the table in question has about 7000 records; Firebird 2.5 Classic Server x64; Windows Server 2008; and it's a virtual machine (VMware) running on a massive server (does anyone have issues with VMs and Firebird?). We have the same setup running fine on other servers, although they are not virtual machines. Is there any way to pinpoint the issue and/or its cause?
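
    The usual mend-and-rebuild cycle for record-level corruption (a sketch; file names and credentials are placeholders, and it should be run against a copy of the database file with all connections closed) is a full validation, a mend pass, then a gbak backup/restore to rebuild the on-disk structures:

      gfix -v -full corrupt.fdb -user SYSDBA -password masterkey
      gfix -mend -full -ignore corrupt.fdb -user SYSDBA -password masterkey
      gbak -backup -v -ignore -g corrupt.fdb rescue.fbk -user SYSDBA -password masterkey
      gbak -create -v rescue.fbk repaired.fdb -user SYSDBA -password masterkey

    The -ignore and -g switches tell gbak to skip checksum errors and garbage collection during the backup, which is what lets it get past the damaged back-versions the log complains about.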
