Search Results

Search found 42646 results on 1706 pages for 'vbox question'.


  • Am I able to forward traffic from an external subdomain to a specific local host?

    - by George Bowman
    I apologise in advance if the question doesn't make sense; please let me know. I've got a small LAN (~10 virtual servers) using Windows Server 2008 as a DNS server. This sits behind a Smoothwall Express 3.0 firewall with ports forwarded for specific services. I have a domain (123-reg) whose nameservers are those of afraid.org (dynamic DNS), with subdomains pointed at my (dynamic) IP address, e.g. subdomain1.example.com -> 123.456.789.101. I think that adequately explains my setup. My question is: can I have each subdomain, e.g. subdomain1.example.com, point only to a specific local host? Like so:

        subdomain1.example.com:80 -> firewall (external facing) -> server1.example.com:80
        subdomain2.example.com:80 -> firewall (external facing) -> server2.example.com:80

    I don't necessarily want to use port 80 (otherwise I would just use VirtualHosts on Apache); it is just an example port. Currently I can use either subdomain1.example.com or subdomain2.example.com and they will both point to server1.example.com:80. I don't have to stay with Windows Server 2008 for DNS; I am more than happy to move over to BIND if need be - it was just easier to use the Windows DNS. I don't know if this is even possible (I have a feeling it isn't, as I've only got one external IP address), but any information is useful!
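    Because every subdomain resolves to the same single external IP, the firewall can only tell the streams apart by destination port (or by a protocol-aware reverse proxy, for HTTP). A minimal sketch of the port-based approach, assuming a Linux/iptables gateway rather than Smoothwall's own UI, with illustrative internal addresses:

        # forward external port 8081 to server1 and 8082 to server2 (hypothetical LAN IPs)
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8081 -j DNAT --to-destination 192.168.0.11:80
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8082 -j DNAT --to-destination 192.168.0.12:80
        # let the forwarded traffic through the FORWARD chain
        iptables -A FORWARD -p tcp -d 192.168.0.11 --dport 80 -j ACCEPT
        iptables -A FORWARD -p tcp -d 192.168.0.12 --dport 80 -j ACCEPT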


  • DRBD stacked resources: recovering from failure

    - by Marcus Downing
    We're running a stacked four-node DRBD setup like this:

        A --> B
        |     |
        v     v
        C     D

    This means three DRBD resources running across these four servers. Servers A and B are Xen hosts running VMs, while servers C and D are for backups. A is in the same datacentre as C.

        From server A to server C, in the first datacentre, using protocol B
        From server B to server D, in the second datacentre, using protocol B
        From server A to server B, different datacentres, stacked resource using protocol A

    First question: booting a stacked resource. We haven't got any vital data running on this setup yet - we're still making sure it works first. This means simulating power cuts, network outages etc. and seeing what steps we need to recover. When we pull the power out of server A, both resources go down; it attempts to bring them back up at next boot. However, it only succeeds at bringing up the lower-level resource, A-C. The stacked resource A-B doesn't even try to connect, presumably because it can't find the device until it's a connected primary on the lower level. So if anything goes wrong we need to manually log in and bring that resource up, then start the virtual machine on top of it.

    Second question: setting the primary of a stacked resource. Our lower-level resources are configured so that the right one is considered primary:

        resource test-AC {
            on A { ... }
            on C { ... }
            startup { become-primary-on A; }
        }

    But I don't see any way to do the same with a stacked resource, as the following isn't a valid config:

        resource test-AB {
            stacked-on-top-of test-AC { ... }
            stacked-on-top-of test-BD { ... }
            startup { become-primary-on test-AC; }
        }

    This too means that recovering from a failure requires manual intervention. Is there no way to set the automatic primary for a stacked resource?
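    For reference, the manual recovery described above typically amounts to a drbdadm sequence along these lines (a sketch assuming DRBD 8.x and the resource names from the config above; verify against your own setup):

        # on A, bring up the lower-level resource and promote it
        drbdadm up test-AC
        drbdadm primary test-AC
        # only now does the stacked resource's backing device exist
        drbdadm --stacked up test-AB
        drbdadm --stacked primary test-AB
        # ...then start the Xen VM on top of the stacked device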


  • ImageMagick convert with resample option

    - by coneybeare
    I am creating thumbnails from much larger images and have been using this command successfully for some time:

        convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip OUTFILE

    I also do some other processing that is not relevant to the question. I realized that this does not adjust the resolution at all, so if I use a 300dpi image, it ends up displaying really small on some devices. I want to resample it to 72x72, so I have been trying this command:

        convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip -resample 72x72 OUTFILE

    I expected the 64x64 image at 300dpi to be resampled to a 64x64 image at 72dpi, but instead I am getting a very funny size and density. Here is "identify" output for the original and post-processed file WITHOUT the resample:

        coneybeare $ convert "aa.jpg" -crop "64x64+0+16" +repage -strip "aa.png"
        coneybeare $ for image in `find . -type f`; do identify $image; identify -verbose $image | egrep "^  Resolution"; done
        ./aa.jpg JPEG 1130x1695 1130x1695+0+0 8-bit DirectClass 1.492MiB 0.000u 0:00.000
          Resolution: 300x300
        ./aa.png PNG 64x64 64x64+0+0 8-bit DirectClass 7.46KiB 0.000u 0:00.000
          Resolution: 118.11x118.11

    And here is the "identify" output for the command WITH the resample:

        coneybeare $ convert "aa.jpg" -crop "64x64+0+16" +repage -strip -resample 72x72 "aa.png"
        coneybeare $ for image in `find . -type f`; do identify $image; identify -verbose $image | egrep "^  Resolution"; done
        ./aa.jpg JPEG 1130x1695 1130x1695+0+0 8-bit DirectClass 1.492MiB 0.000u 0:00.000
          Resolution: 300x300
        ./aa.png PNG 15x15 15x15+0+0 8-bit DirectClass 901b 0.000u 0:00.000
          Resolution: 28.34x28.34

    So, the question is: what am I doing wrong, and how can I fix it so the end result is a 64x64 cropped thumbnail at 72dpi?
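    Worth noting: -resample preserves the physical print size and rescales the pixel dimensions to match the new density, so 64px at an effective 300dpi becomes 64 x 72/300 ≈ 15px - exactly the 15x15 seen above (identify reports PNG density per centimetre, hence 118.11 = 300dpi and 28.34 = 72dpi). If the intent is to keep 64x64 pixels and only stamp 72dpi metadata, one likely fix - a sketch, untested here - is to set the density instead of resampling:

        convert FILE -resize "64x" -crop "64x64+0+16" +repage -strip -units PixelsPerInch -density 72 OUTFILE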


  • What is the max supported number of SATA devices (using cable adapters) on a Dell SAS 6/iR adapter?

    - by Zac B
    I've got a Dell SAS 6/iR PCI-E adapter. I don't have a multiplier backplane, and I'm planning on connecting SATA (non-SAS) drives. If I buy cable adapters only (ones that split a SAS connector on the card into a certain number of SATA cables), how many drives can I connect to this card?

    The way I see it, there are two limitations: a limitation imposed by the theoretical maximum number of devices supported by the card (which I've dug through the specs to find, but haven't seen yet), and a limitation imposed by the number of SAS plugs on the card multiplied by the number of SATA cables that come out of the highest-multiplying splitter I can buy. The answer to my question would be the minimum of those two limitations. I've seen 4x SATA coming out of some splitters; are there any that have more?

    Alternatively, if this is an RTFM question, does anyone have a good link to a "this is how SAS works, this is how you figure out the max number of devices, and this is how the concepts of 'ports', 'lanes', 'endpoint devices', and 'connectors' all relate in SAS-land" document? I've looked around in the Dell docs, but haven't found anything that explains this to someone at my level of understanding of SAN/enterprise storage technologies. Cheers!


  • Servers / RAM for a social network - how many?

    - by Marty
    I am launching my social network soon and am looking into hosting. The question I am stuck on is: do I need separate servers for web vs. database vs. image handling, since there is photo sharing? Or does one server handle it all? Also, is more RAM better - if I get 50GB of RAM, is that better than having 8GB? EDIT: It is PHP (CodeIgniter) and MySQL for now (switching to a NoSQL DB later if demand calls for it), and I will be using memcache also. Concept-wise it is similar to Yelp: geographic-based, with lots of user content and image sharing, plus live feeds and privacy levels. The user plan is an open question; without testing demand I can't give a number. But the concept is unique - no one out there has the set of features I am releasing - so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes beyond that, I will upgrade.


  • Two hosts on same subnet can't see each other

    - by Joey Hewitt
    I've got two routers with two separate public IP addresses on the same subnet, but I can't get them to talk to each other. Both are connected to the internet (ISP-provided gateway) via Ethernet ports provided by the landlord, but I don't have access to, or knowledge of, how those are physically connected or the protocols used to get back to the ISP. I can ping either from the outside, but they can't ping each other. Traceroutes in and out look the same, and they receive the same gateway over DHCP. I can ping other IPs on the subnet, so I assume this is not any sort of intentional isolation for security/privacy. Since I'm in a setup where my landlord provides internet and we don't have contact with the ISP, I can't really ask the ISP for help (and I doubt the landlord would know much either).

    The situation is similar to the diagram in this question, except that instead of the two servers, there's another router coming off the (presumed) switch, and I don't have access to the switch. I've tried giving them static routes to each other with the ISP internet gateway as the gateway, but that's not working. One is a Linksys WRT54GL running DD-WRT, the other is a Netgear WGR614v7, although I could get something more capable if necessary. I'd like to keep them each connected directly to the ISP on their WAN ports, but I can run an Ethernet cable between them if necessary - I'm wondering if there's a way without that, and if there isn't, I'd appreciate advice on how to get that working. Sorry this is so nitpicky; there are reasons for all the constraints, but they don't apply to the real question, so I left them out. ;) Thank you!


  • Pages load fine in the browser, but a 404 Not Found is reported for the GET on all pages except the index

    - by user885983
    I believe this question is more suited to Server Fault (please correct me if not). This issue appears very similar to this question (except there are no 301 Moved Permanently responses for any pages). The domain is yorkshirebadges.co.uk. For example, loading yorkshirebadges.co.uk or yorkshirebadges.co.uk/index.php reports no 404s during network inspection, but every other page (/contact.php, /products.php) reports a not found. mod_rewrite is being used on the site; I checked it out but didn't see any obvious errors. It's included below for reference:

        RewriteEngine on
        RewriteRule ^store/material/([^/\.]+)/price/?([^/\.]+)?$ products.php?prodType=$1&price=$2
        RewriteRule ^store/price/?([^/\.]+)?$ products.php?price=$1;
        RewriteRule ^store/material/?([^/\.]+)?$ products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?$ products.php?prodCat=$1
        RewriteRule ^store/([^/\.]+)/price/([^/\.]+)$ products.php?prodCat=$1&price=$2
        RewriteRule ^store/Type/?([^/\.]+) products.php?prodType=$1
        RewriteRule ^store/([^/\.]+)/?([^/\.]+)?$ view-product-details.php?cat=$1&prodName=$2
        RewriteRule ^store/([^/\.]+)/material/?([^/\.]+)?$ products.php?prodCat=$1&prodType=$2
        RewriteRule analytics http://www.google.com/analytics

        <IfModule mod_suphp.c>
            suPHP_ConfigPath /home/yorkshir
            <Files php.ini>
                order allow,deny
                deny from all
            </Files>
        </IfModule>

    Chrome network inspection (and Firebug on Firefox) report 404s on all pages except the index; the server is Apache 2. Really scratching my head on this one!
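    One quick way to see the raw status codes outside any browser cache or extension (a diagnostic sketch; run from any shell):

        curl -sI http://yorkshirebadges.co.uk/index.php    | head -n 1
        curl -sI http://yorkshirebadges.co.uk/contact.php  | head -n 1
        curl -sI http://yorkshirebadges.co.uk/products.php | head -n 1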


  • How can I implement ansible with per-host passwords, securely?

    - by supervacuo
    I would like to use Ansible to manage a group of existing servers. I have created an ansible_hosts file, and tested successfully (with the -K option) with commands that only target a single host:

        ansible -i ansible_hosts host1 --sudo -K # + commands ...

    My problem now is that the user passwords on each host are different, but I can't find a way of handling this in Ansible. Using -K, I am only prompted for a single sudo password up-front, which then seems to be tried on all subsequent hosts without prompting:

        host1 | ...
        host2 | FAILED => Incorrect sudo password
        host3 | FAILED => Incorrect sudo password
        host4 | FAILED => Incorrect sudo password
        host5 | FAILED => Incorrect sudo password

    Research so far:

    - a Stack Overflow question with one incorrect answer ("use -K") and one response by the author saying "Found out I needed passwordless sudo"
    - the Ansible docs, which say "Use of passwordless sudo makes things easier to automate, but it's not required." (emphasis mine)
    - a Security Stack Exchange question which takes it as read that NOPASSWD is required
    - the article "Scalable and Understandable Provisioning...", which says: "running sudo may require typing a password, which is a sure way of blocking Ansible forever. A simple fix is to run visudo on the target host, and make sure that the user Ansible will use to login does not have to type a password"
    - the article "Basic Ansible Playbooks", which says: "Ansible could log into the target server as root and avoid the need for sudo, or let the ansible user have sudo without a password, but the thought of doing either makes my spleen threaten to leap up my gullet and block my windpipe, so I don't" - my thoughts exactly, but then how do you extend beyond a single server?
    - Ansible issue #1227, "Ansible should ask for sudo password for all users in a playbook", which was closed a year ago by mpdehaan with the comment "Haven't seen much demand for this, I think most people are sudoing from only one user account or using keys most of the time."

    So... how are people using Ansible in situations like these? Setting NOPASSWD in /etc/sudoers, reusing one password across hosts, and enabling root SSH login all seem like rather drastic reductions in security.
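    One approach worth noting - assuming an Ansible recent enough to support per-host connection variables, and ansible-vault if the files must not hold clear-text passwords - is to set ansible_sudo_pass in a host_vars file per machine:

        # host_vars/host1.yml  (encrypt with: ansible-vault encrypt host_vars/host1.yml)
        ansible_sudo_pass: "host1-sudo-password"   # hypothetical value

        # host_vars/host2.yml
        ansible_sudo_pass: "host2-sudo-password"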


  • Apache Bench reports different results for the same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up.

    1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns "Time per request: 41ms", but "Document Length: 0 bytes" and "HTML transferred: 0 bytes", with a transfer rate of 7.9KB/s.

    2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns "Time per request: 83ms" along with the accurate document length and HTML-transferred totals.

    The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test sites and all logs are clean, etc. DocumentRoot is set to index.php, so I would expect both of these test runs to produce similar results. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted (without question), but frankly, from my own experience and that of others, accurate testing is hard to come by, even without goofy issues like this.
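    A document length of 0 bytes usually means the responses for that URL carried no body (a redirect, for example), which would also explain the faster per-request time. A quick diagnostic sketch (not from the post) to compare what each URL actually returns:

        curl -sI http://mysite.com/index.php   # status line + headers for the explicit file
        curl -sI http://mysite.com/            # same for the bare document root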


  • How to calibrate ASUS k52f battery on Ubuntu?

    - by cutalion
    I'm not sure if the problem is in software or my battery is dying; I'll move my question to another forum if it's not a SO question. I have a problem with the battery on my laptop, an ASUS K52F: it shows incorrect information about capacity. When I unplug the charger it can work for some time, but then it will power off without any warning; sometimes it will power off right after I unplug the charger. Here is some info I could get:

        > uname -a
        Linux alligator 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        > acpi -i
        Battery 0: Charging, 99%, 18:25:15 until charged
        Battery 0: design capacity 5235 mAh, last full capacity 69964 mAh = 100%

        > cat /sys/class/power_supply/BAT0/uevent
        POWER_SUPPLY_NAME=BAT0
        POWER_SUPPLY_STATUS=Charging
        POWER_SUPPLY_PRESENT=1
        POWER_SUPPLY_TECHNOLOGY=Li-ion
        POWER_SUPPLY_CYCLE_COUNT=0
        POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000
        POWER_SUPPLY_VOLTAGE_NOW=9246000
        POWER_SUPPLY_POWER_NOW=176000
        POWER_SUPPLY_ENERGY_FULL_DESIGN=48400000
        POWER_SUPPLY_ENERGY_FULL=646822000
        POWER_SUPPLY_ENERGY_NOW=643588000
        POWER_SUPPLY_MODEL_NAME=K52F-44
        POWER_SUPPLY_MANUFACTURER=ASUSTek
        POWER_SUPPLY_SERIAL_NUMBER=

    I noticed that POWER_SUPPLY_ENERGY_NOW and POWER_SUPPLY_ENERGY_FULL are greater than POWER_SUPPLY_ENERGY_FULL_DESIGN. I don't think that's OK :) I can run any additional commands.


  • Linux Live CD only works when Windows is in Legacy mode?

    - by Vee
    I have asked a similar question before and no one was able to help me, but I think that was because I wasn't phrasing it properly; this is a better restatement of the question. I have Windows 8 and Linux Mint dual-booted on my PC. When I try to boot Linux from a CD-ROM, it gives me the following error:

        error: failure reading sector 0x0 from 'hd1'
        error: you need to load the kernel first.
        Press any key to continue...

    Linux Mint works fine otherwise, but it gives this error when I try to boot from CD. Booting Linux from CD only worked when I switched Windows to Legacy mode in the BIOS settings; when I changed it back to UEFI, it gave the same error. Why is this? How can I fix it? I am somewhat new, so is there anything else I should know about all of this? NOTE: I changed the Linux install to UEFI mode using boot-repair, but that still did not solve the problem when trying to boot from CD-ROM.


  • How to get data out of a Maxtor Shared Storage II that fails to boot?

    - by Jonik
    I've got a Maxtor Shared Storage II (RAID1 mode) which has developed some hardware failure, apparently: it fails to boot properly and is unreachable via network. When powering it on, it keeps making clunking/chirping disk noise, then sort of resets itself (with a flash of orange light in the usually-green LEDs), and repeats this as if stuck in a loop. In fact, even the power button does nothing now - the only way I can affect the device at all is to plug in or pull out the power cord!

    (To be clear, I've come to regard this piece of garbage (which cost about 460 €) as my worst tech purchase ever. Even before this failure I had encountered many annoyances with the drive: 1) the software to manage it is rather crappy; 2) it is way noisier than what this type of device should be; 3) when your Mac comes out of sleep, Maxtor's "EasyManage" cannot re-mount the drive automatically.)

    Anyway, the question at hand is: how do I get my data out of it? As a very concrete first step, is there a way to open this thing without breaking the plastic casing into pieces? It is far from obvious to me how to get beyond this stage; it opens a little from one end but not from the other. If I somehow got the disks out, I could try mounting them on one of the Macs or Linux boxes I have available (although I don't know yet if I'd need some adapters for that). (NB: for the purposes of this question, never mind any warranty or replacement issues - that's secondary to recovering the data.)


  • Routerless, house-wired network using multiple powerline adapters

    - by Cliff Arnell
    Related to the 'old days' of one Ethernet cable tapped with Ts for each monitor... my question might be very simple, or not. I have an over-the-air internet provider, with a dish with a powered transceiver and Cat5 cable out of the provider-supplied modem. I presently connect the output of the modem to my wireless router, which sends the internet signal all over the house - standard stuff, I believe. My question: can I instead connect the output of the modem to a single powerline adapter, and tie all my equipment (computer, printer, laptop, TiVo recorder, etc.) to local powerline adapters near each device, resulting in a 'house-wired' network and no router? I'm bothered by the idea that my over-the-air provider might be using something in my router to establish and keep my IP connection alive; I did have to configure the router for my IP, and in my proposed scenario that router would no longer exist. Thank you for your help.


  • Map a URL bought with Dreamhost to Amazon EC2 (AWS)

    - by Edan Maor
    I have several URLs I purchased through Dreamhost. I'm starting to use Amazon's AWS, and I'd like to map the URLs to Amazon. This is something of a silly question, and I've already done the same thing several times with other services (mapping from Dreamhost to WebFaction). But when I tried to find the proper way to do the same mapping to Amazon, I found a lot of detailed writing about whether I should be using CNAME or A records, etc. So I wanted to ask in the simplest possible terms and hopefully get a simple, concrete answer: I bought a URL from Dreamhost, and I have an EC2 server running on AWS (to which I have already mapped an Elastic IP address). How do I make the URL map to AWS? And if there are several options, which one should I effectively be using?

    P.S. Meta-question: why are things so much more difficult with AWS? When I search Google for "move from Dreamhost to WebFaction", I get very simple answers on how to do the mapping. In what way is AWS different?
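    With an Elastic IP already allocated, the mapping is normally just DNS records at Dreamhost pointing at that IP - a sketch in zone-file notation, with hypothetical names and addresses (in Dreamhost's panel these are entered as custom DNS records on the domain):

        example.com.        A       203.0.113.10    ; the Elastic IP
        www.example.com.    CNAME   example.com.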


  • Does a VPN requirement kill the concept of having a Web Application in the Cloud?

    - by Christian
    Recently I posted a question on SO, but so far I have got no answers; I wonder if I'm asking the wrong question. This is the problem: we need to design an application which offers a public HTTP web service, but at the same time it must consume some services through a VPN connection from another existing company. There is no alternative but to use a VPN connection to access those services. We want to host our application on cloud infrastructure like Heroku or Amazon EC2, but there is no direct way to access the other company's VPN services from there. The solution I'm thinking of, but don't like, is to have a separate server expose the services from that VPN - but this requires setting up another server, which I'd prefer to avoid. If that is the solution, can I use an Amazon EC2 instance to connect to a VPN? Is that correct? I don't have experience with VPNs, tunnels, or that kind of networking. I would really appreciate an alternative solution, or just a comment.
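    One point that may unblock the decision: unlike Heroku's sandboxed dynos, an EC2 instance is a full virtual machine, so it can run an ordinary VPN client itself while also serving the public web app. A rough sketch, assuming the partner company's endpoint is OpenVPN (all names and files hypothetical; IPsec or proprietary VPNs would need different tooling):

        # on the EC2 instance (Amazon Linux / CentOS style)
        sudo yum install openvpn
        # place the client config + credentials supplied by the partner at /etc/openvpn/partner.conf
        sudo openvpn --config /etc/openvpn/partner.conf --daemon
        # once tun0 is up, the application reaches the partner services through the tunnel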


  • Hard drive degradation from heavy memory usage and paging files?

    - by Stephen R
    I've had a question (or several) about computer degradation going through my head for a while, and haven't found many good resources for researching it.

    1) First off, when is the virtual RAM/paging file on a hard drive used by Windows? Is it used when the RAM is full? Or is the paging file used as intermediate caching between the RAM and actual hard drive space all the time?

    2) If I were to run many applications on my computer at the same time, and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the paging file than if I were to have fewer programs running? Just to note, the RAM never fills up on my computer, but it is used heavily.

    3) By extension of question 2, if the paging file is used more heavily, would that result in rapid hard drive degradation?

    I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs, among other programs, which will typically eat up 40% of my memory. Over time my computer slows down, browsers start crashing, programs start seizing up or crashing themselves, and eventually the computer becomes essentially unusable. I have been trying to rack my mind to come up with a solution other than purchasing a new PC, only to have it die on me in the next couple of years as well. The only thought that has come to mind that might be a simple hardware fix is Windows ReadyBoost... maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.


  • Printing special / extended characters on web page - Firefox or printer issue?

    - by edmicman
    My dad brought this to my attention and I'm looking into it, but thought I'd post here and see if anyone has any ideas... He's running Windows 7, using Firefox, and printing to a wirelessly connected Brother HL-2170W printer. He's got a web page (http://cornandsoybeandigest.com/inputs/fertilizer/applying-nitrogen-after-planting-0512/index.html) that has extended/special characters in it - the accented "a" in Fernandez. The characters show correctly in Firefox on the page. He printed it, and the extended "a" printed as a diamond with a question mark; he says it shows that way in the print preview, too. I pulled up the same page in Firefox on Ubuntu, and it displayed in my print preview and physically printed everything correctly. I just checked on my wife's Windows 7 PC in Firefox, and the print preview looked correct on her system too; we have a Brother 2040 here. Soooo, my question is: is this possibly a problem in the browser somehow, or in the printer driver? I'm leaning towards the printer driver now, but I can't say I've run into this before. Is it a setting somewhere? I just installed this printer for him the other day, using the CD that came with it; I could try updating the driver from Brother's website, I guess. Is there anything else I should look at or check? Thanks!


  • When a server's IP changes, do existing TCP connections (e.g. HTTP/MySQL) keep running?

    - by Luke Cousins
    We have some PHP-FPM servers; when they need a database connection, they connect to an HAProxy server, which selects a database server for them and the connection opens. When we want to perform maintenance on the HAProxy servers (such as config changes requiring an HAProxy restart), the process is as follows:

    1. Shut down Keepalived on the HAProxy server
    2. Wait for the floating IP to transfer to the backup HAProxy server (also running Keepalived)
    3. Wait until HAProxy stats report just one connection (us checking how many connections there are)
    4. Restart HAProxy
    5. Restart Keepalived

    As step 2 occurs, what happens to the open MySQL connections at that point? According to this "TCP Sessions and IP Changes" question, the connections will be dropped. Is this really the case? If so, what, if anything, can be done to prevent it? Can the connections somehow be forced to use the main (non-floating) IP of the server? We also have a similar setup with two Nginx servers running Keepalived, and we were planning on doing the equivalent there. If we do, the same question applies: what happens to existing HTTP connections when the IP moves to the other server? I appreciate your help.


  • HTTP header 304 and caching?

    - by Royi Namir
    Our company uses these settings (don't ask me why): for every request, they want a new request to go to the server. This is an intranet system which uses only IE, and the behaviour is defined in the IE caching settings (screenshot not included). We also have Windows authentication (NTLM) in IIS 7. I have two questions, please.

    Question #1: when the browser makes a request for a CSS file (leave the 401 response for now - this is how NTLM works), it requests it with an If-Modified-Since header. Why is it adding this header? How can I configure that? Why doesn't it honour the IE settings and try to download the file each time, as shown in the first picture (not included)?

    Question #2: the response (after NTLM negotiation) was a 304 Not Modified, and I assume that's because we sent the request with the If-Modified-Since header. But there is a problem: the server is effectively telling the browser to use its cache, while I told it explicitly in the IE settings not to load from cache. What am I missing here? Thanks a lot.
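    For illustration, the conditional exchange described here looks roughly like this on the wire (a sketch only; headers abbreviated, names hypothetical):

        GET /styles/site.css HTTP/1.1
        Host: intranet.example.com
        If-Modified-Since: Tue, 15 Nov 2012 08:12:31 GMT

        HTTP/1.1 304 Not Modified

    A 304 carries no body: it is the server's confirmation that the cached copy is still valid. The IE "every time I visit the webpage" setting governs when to revalidate, not whether a successfully revalidated copy may be reused - which is why the file still comes out of the local cache.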


  • Equivalent of scp -l bandwidth_cap for .ssh/config?

    - by Mark Bennett
    Short form: you can limit the bandwidth scp uses with the -l switch, passing a number in kbit/s. I'd rather set this in my .ssh/config file for certain named machines. What's the equivalent named setting for -l? I haven't been able to find it.

    Follow-up question: generally, I'm not sure how to map back and forth between ssh command-line options and config names, short of doing Google searches or manually comparing man pages case by case. Is there a table that directly equates the two?

    Longer form of the first question, with context: I've started using ssh config quite a bit, especially now that I need to go through a proxy and do lots of port mappings; I even define the same machine more than once depending on what type of tunneling I need. However, when uploading a large file, it's difficult to do anything else on my machine. Even though I have more download bandwidth than upload, I think scp saturates the link, so even my small requests can't reach the Internet. There's a fix for this, using the -l bandwidth command-line switch for scp:

        scp -l 1000 bigfile.zip titan:

    I'd like to use this in my config instead, so I'd create an additional named entry called "titan-upload" and use that as the target whenever I upload. So instead of:

        scp bigfile.zip titan:

    I'd say:

        scp bigfile.zip titan-upload

    Or even set different caps depending on where I am:

        scp bigfile.zip titan-upload-from-home

    vs.

        scp bigfile.zip titan-upload-from-work

    I'm generally on Mac and Linux.
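    As far as I know, ssh_config has no bandwidth-cap keyword at all - scp's -l is implemented inside scp itself, not passed down to ssh - so a per-host entry can only approximate it with an external throttle. One workaround sometimes suggested (a sketch only, assuming pv and nc are installed; untested here) pushes upload traffic through a rate limiter in a ProxyCommand:

        # ~/.ssh/config (hypothetical host names)
        Host titan titan-upload
            HostName titan.example.com

        Host titan-upload
            # limit what we send to ~125 kB/s (~1000 kbit/s); replies come back through nc unthrottled
            ProxyCommand sh -c 'pv -q -L 125k | nc %h %p'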


  • What is the oldest hardware still in production use? How is it kept running?

    - by sleske
    In the spirit of the question "What is your oldest hardware that still works?", I'd like to ask: what is the oldest hardware you know of that is still in production use? And what challenges did you (or someone else) face in keeping it running (scarce documentation, no support, no spare parts available...)?

    Most organizations will retire or upgrade software and hardware after 5-10 years, but sometimes old software is kept running on old boxes because it "just works". I once worked at a client site that was running a critical piece of (in-house developed) business software on a single server running HP-UX. The server was old (ca. 12-13 years), but fortunately still running without problems; however, getting spares would have been very difficult, and since the software installation was undocumented, any significant system change or even new hardware might have caused significant downtime and data loss. We eventually managed to replace it, but this is not always possible. I have also read that many organizations still run decade-old mainframe hardware, particularly for highly customized systems controlling industrial machines or power plants.

    Which old hardware have you encountered? How did you manage these challenges? Related question: http://serverfault.com/questions/82467/should-old-servers-be-retired


  • Why does dd finish instantly when piping to cat?

    - by agsamek
    I start bash on Cygwin and type:

        dd if=/dev/zero | cat /dev/null

    It finishes instantly. When I type:

        dd if=/dev/zero > /dev/null

    it runs as expected, and I can issue killall -USR1 dd to see the progress. Why does the former invocation finish instantly? Is it the same on a Linux box?

    An explanation of why I asked such a stupid question, and possibly a not-so-stupid question: I was compressing HDD images and some were compressed incorrectly. I ended up with the following script showing the problem:

        while sleep 1 ; do killall -v -USR1 dd ; done &
        dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c

    wc should write 1000000000 at the end. The problem is that it does not on my machine:

        bash-3.2$ dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c
        13+0 records in
        12+0 records out
        60000000 bytes (60 MB) copied, 0.834 s, 71.9 MB/s
        27+0 records in
        26+0 records out
        130000000 bytes (130 MB) copied, 1.822 s, 71.4 MB/s
        200+0 records in
        200+0 records out
        1000000000 bytes (1.0 GB) copied, 13.231 s, 75.6 MB/s
        1005856128

    Is it a bug, or am I doing something wrong once again?


  • How to connect computers to a network printer behind a router?

    - by kokbira
    General question: how do I connect computers to an IP printer behind a router? Particular question: how do I connect C-1 and C-2 to PRI?

    What? Where?

        [ISP]
          |
          |  -> IPs: 200.X.X.X / other configs: DC
          |
        [R-1]
          |
          |  -> IPs: 10.1.X.X locked by MAC, M: 255.0.0.0, G: 10.1.0.1
          |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯|
          |                    |
        [PRI]                [R-2]
        IP: 10.1.7.7         IP: 10.1.0.1, MAC: A
                               |
                               |  -> IPs: 192.168.1.X, M: 255.255.255.0, G: 192.168.1.1
                               |¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯|
                               |                      |
                             [C-1]                  [C-2]
                             IP: 192.168.1.2        IP: 192.168.1.3, MAC: A

    Glossary and details:

    - IP: IP.
    - IPs: some IP range.
    - M: mask.
    - G: gateway.
    - MAC: A: a MAC address that I will not inform you :)
    - DC: don't care.
    - ISP: Internet Service Provider (not many details about it in this case).
    - R-1: a real router (or several concatenated), so the IP range below that block is 10.1.X.X and above it is the ISP. The provided IPs are assigned by MAC; as all available addresses are in use, you must clone an existing one to join a new device (and disconnect the cloned one).
    - PRI: a network printer (some people here call it an IP printer).
    - R-2: a TP-LINK TL-WR340G, my wireless router (since my computer does not have an Ethernet input, it is my Ethernet-WiFi adapter :), admin access, MAC address cloned from C-2 (MAC: A). I had to configure 10.0.1.1 and 10.0.1.2 as DNS addresses, otherwise I cannot connect C-1 and C-2 to the Internet.
    - C-1: my computer, a CCE XLE-425 (remember: no Ethernet input), with Windows 7, admin access.
    - C-2: another computer, with better specs than mine, MAC: A, Windows XP.

    Requirements: I want to print, to access the Internet, and to do it myself (no need to call the network admin men in black). Pay attention to the MAC clones and the DNS info. A sketch of one likely approach follows.
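    Since R-2 NATs outbound traffic, C-1 and C-2 can usually reach 10.1.7.7 directly; printing then only needs a standard TCP/IP printer port on each machine. A sketch for the Windows 7 box (prnport.vbs ships with Windows 7, but the path and raw-port assumption should be verified; the XP box uses the same idea with a different script path):

        rem create a raw TCP/IP port pointing at the upstream printer
        cscript %WINDIR%\System32\Printing_Admin_Scripts\en-US\prnport.vbs -a -r IP_10.1.7.7 -h 10.1.7.7 -o raw -n 9100
        rem then add the printer via Control Panel using port IP_10.1.7.7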


  • How to set up port forwarding from my web server (Apache) to my database server (MySQL)

    - by karman888
    Hello again guys, and thank you for your help so far. Here is my problem: I have two remote dedicated servers - one web server that runs Apache, and one DB server that runs MySQL. The Apache server is visible on the internet, of course, but the second server is only visible to the Apache server, because they are connected by LAN. I need to connect to the remote MySQL server over the internet from my home PC, but only the Apache server is visible to my home PC. How can I set up port forwarding from my Apache server to the MySQL server so I will be able to "see" the MySQL server from my home PC? This question is a follow-up to my first question, "Connect to remote mysql server from my application. Problem is that Mysql server is on LAN", in which you helped me a lot by telling me to do port forwarding. I looked all over the internet and I can't find a good how-to for this. I'm an experienced programmer, but have little experience with hardware and networks; I can understand what must be done, so I just need a little help to sort things out :) I hope you can help me. Thank you in advance. P.S. The machine Apache runs on is CentOS; the MySQL server is also CentOS. P.S.2: The web server runs WebHost Manager; I don't know if that makes any difference or whether it can be done easily through it, I just mention it :)
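    One common way to get this - a sketch with hypothetical addresses, assuming SSH access to the web server and that the DB server is reachable from it as, say, 192.168.0.2 - is an SSH tunnel rather than a firewall rule:

        # from the home PC: forward local port 3306 through the web server to the DB server
        ssh -N -L 3306:192.168.0.2:3306 user@webserver.example.com

        # in another shell, connect as if MySQL were running locally
        mysql -h 127.0.0.1 -P 3306 -u dbuser -p

    A persistent alternative would be an iptables DNAT rule on the web server, at the cost of exposing the MySQL port to the internet.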


  • Random and Selective ARP blindness in VMWare ESXi 4.1

    - by Peter Grace
    We have multiple VMware ESX servers spread throughout our company, doing various tasks. One particular ESXi host is exhibiting very peculiar behaviour. We detect it when our monitoring system (Orion) notifies us that it can no longer ping the box. Upon jumping on the local console of the guest in question, we see that it cannot ping any new addresses that aren't already in its ARP table. At first we thought the problem was tied to one particular guest, as it always seemed to happen to the same one, DevRedis. However, this afternoon the problem swapped and started happening on ApacheBox rather than DevRedis. When I have been fortunate enough to catch the problem, I have run tcpdump on both sides of the connection (one side being the VMware guest, the other a physical web server) and noticed the following course of events:

    1. Guest ApacheBox sends an ARP request for the physical address of server WindowsBeast.
    2. WindowsBeast tenders an ARP is-at reply back to the network indicating its physical MAC address.
    3. ApacheBox never sees the ARP is-at response.

    The ESXi host in question is running VMware ESXi 4.1.0, build 348481. The two guests (DevRedis and ApacheBox) both run CentOS 6.3, but with two different kernel versions (2.6.32-279.9.1.el6.x86_64 and 2.6.32-279.el6.x86_64), so I'm not entirely sure it's a CentOS problem. Does anyone have any thoughts on what might cause this? Has anyone run into it before?

