Search Results

Search found 7583 results on 304 pages for 'roger guess'.

  • How to find out which process is hogging the Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to ping queries from another server, and logging in over ssh took about 10 seconds. I was able to resolve the problem by guesswork - I killed one process that I thought was the culprit, which resolved the problem. I would still like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way of resolving such slowness issues. These were the conditions while the server was slow:

      # vmstat 3 3
      procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
       r  b   swpd    free    buff   cache   si   so   bi   bo    in    cs us sy id wa st
       1  1    176 6730868  285052 4899676    0    0    3    4     0     0  1  1 97  1  0
       0  0    176 6751576  285064 4899704    0    0    0  115 15307 37171  1  1 96  3  0
       0  0    176 6751948  285068 4899700    0    0    0   23 14813 39559  1  1 98  1  0

      # top
      top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44
      Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie
      Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st
      Mem:  16620824k total, 9867124k used, 6753700k free, 287424k buffers
      Swap:  8193140k total,     176k used, 8192964k free, 4898996k cached

        PID USER PR NI VIRT RES  SHR S %CPU %MEM    TIME+ COMMAND
      26258 khk  34 19 130m 47m 7088 S 11.2  0.3 385:32.42 edm
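
    A minimal first-response sketch for this kind of situation, assuming the usual procps/sysstat tools are installed (package names vary by distribution):

      top -b -n 1 | head -20        # one-shot snapshot of the busiest processes
      ps aux --sort=-%cpu | head    # top CPU consumers
      ps aux --sort=-%mem | head    # top memory consumers
      iostat -x 3 3                 # per-device I/O utilization (high %util = disk-bound)
      pidstat -d 3 3                # per-process disk I/O, if sysstat is recent enough

    Note that the vmstat above shows ~96% idle CPU but the cs (context switch) column jumping to ~37000, so the culprit here was likely not raw CPU load.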

  • RAIDs with a lot of spindles - how to safely put to use the "wasted" space

    - by kubanczyk
    I have a fairly large number of RAID arrays (server controllers as well as midrange SAN storage) that all suffer from the same problem: barely enough spindles to keep up with peak I/O performance, and tons of unused disk space. I guess it's a universal issue since vendors offer 300 GB as their smallest drive capacity, but random I/O performance hasn't really grown much since the time when the smallest drives were 36 GB. One example is a database that occupies 300 GB and needs random performance of 3200 IOPS, so it gets 16 disks (4800 GB minus 300 GB, and we have 4.5 TB of wasted space). Another common example is redo logs for an OLTP database that is sensitive in terms of response time. The redo logs get their own 300 GB mirror, but take 30 GB: 270 GB wasted. What I would like to see is a systematic approach for both Linux and Windows environments. How to set up the space so that the sysadmin team would be reminded about the risk of hindering the performance of the main db/app? Or, even better, be protected from that risk? The typical situation that comes to my mind is "oh, I have this very large zip file, where do I uncompress it? Umm, let's see the df -h and we'll figure something out in no time..." I don't put emphasis on strictness of the security (sysadmins are trusted to act in good faith), but on overall simplicity of the approach. For Linux, it would be great to have a filesystem customized to cap the I/O rate at a very low level - is this possible?
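
    Not a custom filesystem, but one possible Linux mechanism is blkio cgroup throttling over whatever filesystem sits on the leftover space - a sketch, assuming a cgroup-v1 kernel with block I/O throttling enabled (device numbers and limits are examples):

      mkdir /sys/fs/cgroup/blkio/scratch
      # cap reads and writes on /dev/sdb (major:minor 8:16) at 5 MB/s
      echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.read_bps_device
      echo "8:16 5242880" > /sys/fs/cgroup/blkio/scratch/blkio.throttle.write_bps_device
      echo $$ > /sys/fs/cgroup/blkio/scratch/tasks   # this shell and its children are now throttled

    Mounting the "wasted" space only under such a throttled context keeps a careless bulk copy from stealing IOPS from the main db/app.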

  • Does Guest WiFi on an Access Point make any sense? [migrated]

    - by Jason
    I have a Belkin WiFi Router which offers a secondary Guest Access WiFi network. Of course, the idea is that the Guest network doesn't have access to the computers/devices on the main network. I also have a Comcast-issued Cable Modem/Router device with multiple wired ports, but no WiFi capabilities. I prefer to run only one router/DHCP/NAT instead of both the Comcast Router and the Belkin Router, so I can disable the routing functions of the Belkin and let the Comcast Router handle them. But if I disable the routing functions of the Belkin device, the Guest WiFi network is still available. Is this configuration just as secure as when the Belkin acts as a router? I guess the question comes down to this: do Guest WiFi networks provide security by 1) only allowing requests to IPs found in front of the device, or do they work by 2) disallowing requests to IPs on the same subnet? 1) would mean that Guest WiFi on an access point provides no benefit; 2) would mean that the Guest WiFi functionality can work even if the device is just an access point. Or maybe something else entirely?

  • Display errors using VMWare Player and Remote Desktop on Windows XP

    - by Tim
    I've come up against a weird display issue that I can't seem to find any "fix" for. When I first boot up my computer, everything behaves normally. If I start to use VMware Player and/or Remote Desktop, my desktop starts having some odd video issues: the frames for some windows aren't drawn at all; if I move windows around rapidly, the area under where the window used to be isn't cleared (it still shows artifacts of the window's content); and in some cases the minimize/restore/maximize buttons aren't drawn (but are clickable if you can guess where they are). I've tried the usual stuff - current drivers all around, using a single monitor, etc. - and none of it seems to have any bearing. If I try to disable hardware acceleration, it tends to crash the computer. As I said earlier, it's running Windows XP, with dual monitors, an NVIDIA EN8400GS video card, and an Asus P7P55D motherboard. Not sure what other pertinent details are needed. I would appreciate any help or suggestions!

  • How to know who accessed a file, or whether a file has an 'access' monitor, in Linux

    - by J L
    I'm a noob and have some questions about seeing who accessed a file. I found there are ways to see if a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online (here: http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html), to 'watch/monitor' a file I have to set a watch with a command like:

      # auditctl -w /etc/passwd -p war -k password-file

    So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first in order to 'watch' who accessed the new file? Also, is there a way to know whether a directory is being 'watched' through the audit subsystem or inotify? How/where can I check the log for a file? Edit: from further googling, I found this page: http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html which says "The inotify API provides no information about the user or process that triggered the inotify event." So I guess this means that I can't figure out which user accessed a file with inotify? Can only the audit subsystem be used to figure out who accessed a file?
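
    A sketch of the audit-subsystem side of this (assuming auditd is running; the key name after -k is arbitrary):

      auditctl -w /path/to/file -p r -k file-access   # set a read watch on the file
      auditctl -l                                     # list active watches - answers "is it being watched?"
      ausearch -k file-access                         # query the audit log by key

    The ausearch records include uid/auid/pid/exe fields, which is exactly the who-did-it information the inotify API does not provide.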

  • Computer won't start up. Stuck on Lenovo splash screen. Help Diagnose

    - by Ace Legend
    I have a Lenovo 21" IdeaCentre (I'm not sure exactly which model). Honestly, the computer works off and on. I have had problems with it not being able to shut down, which I fixed; the fan seems to be constantly running; and there are a few other problems as well. Anyway, nobody was using it when all of a sudden it switched to a blue screen. I was in the kitchen, but when I got over to the computer I read the message. It said something about bad drivers, but that is all I saw, and then it restarted. However, when it got to the Lenovo splash screen, nothing happened. I waited there for over 10 minutes, but still nothing. I tried to turn off the computer, but the only way to do it was to pull out the power cable. I then removed all USB devices and tried again. Still nothing. It also won't respond to keyboard input when I try to use Enter to interrupt normal startup. My guess is that some piece of hardware is damaged inside the machine, but I have no idea which piece. Does anybody have any idea what could be wrong with it? Thanks.

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking them. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system. We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

      1. basic OS installation
      2. basic network configuration, enough to reach the internet and resolve DNS
      3. install Puppet and set up certname
      4. start Puppet and let it manage the whole configuration
      5. test, fix problems in config (via Puppet), re-test, and so on...
      6. manually stop Puppet
      7. set up the new network configuration for the datacenter network
      8. move the machine to the DC
      9. turn it on; Puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle. Is there any problem with this approach? Could it be improved?
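
    A minimal sketch of the certname pinning in step 3 (the FQDN is hypothetical; the file is /etc/puppet/puppet.conf on older releases, /etc/puppetlabs/puppet/puppet.conf on newer ones):

      [main]
      certname = web01.dc1.example.com

    followed by the very first agent run, whose certificate request the master then signs:

      puppet agent --test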

  • Upgrade of Ubuntu 8.10 distribution fails due to missing packages

    - by Tim
    I have a server that I've forgotten to upgrade for ages, and it is still running Intrepid (8.10). I'd like to upgrade it to a newer version of the distribution so that I can get security patches etc. I found some instructions that tell me to install the package update-manager-core. I tried the following:

      $ sudo apt-get install update-manager-core

    but this fails since some of the necessary packages can't be found:

      ...
      Err http://archive.ubuntu.com intrepid/main python-apt 0.7.7.1ubuntu4
        404 Not Found [IP: 91.189.88.40 80]
      Err http://archive.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34
        404 Not Found [IP: 91.189.88.40 80]
      Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-apt/python-apt_0.7.7.1ubuntu4_amd64.deb
        404 Not Found [IP: 91.189.88.40 80]
      Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb
        404 Not Found [IP: 91.189.88.40 80]
      ...

    I know that Intrepid is no longer supported, so I guess some of the necessary files may no longer be maintained. But this seems rather unhelpful: I can't upgrade because it's too old, and the only way to fix this would be to upgrade it. Is there a way round this? Is something else wrong?
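
    A commonly suggested workaround - hedged, so verify the paths for your release: packages for EOL releases are moved to old-releases.ubuntu.com, so point apt there first:

      sudo sed -i -e 's/archive.ubuntu.com/old-releases.ubuntu.com/g' \
                  -e 's/security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      sudo apt-get update
      sudo apt-get install update-manager-core
      sudo do-release-upgrade   # then repeat the whole dance, one release at a time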

  • Outlook 2010 IMAP account - send on behalf

    - by Master of Celebration
    So I was looking for a way to manage the mail distribution of online shops, newsfeeds, etc., and have a nice solution via distribution groups, a.k.a. alias addresses. For example, I register an account on eBay using "[email protected]" (where org.com is my company, obviously). That address is an alias and can be managed on my on-premises mail server, setting the destination to somebody's mailbox independent of logging on to eBay - in case somebody else shall do the eBay stuff, I can quickly change the destination of that alias :-) So far, so good - and now to the problem: using Microsoft Outlook 2010 and an IMAP account on our mail server, I cannot figure out how to remove the "on behalf of" string visible in the from-field when sending a message under that [email protected] address. That's quite a pity, because eBay especially doesn't accept/forward mails not coming from the registered address. Using other mail clients (e.g. Mozilla Thunderbird), the problem does not occur, so I guess it's Outlook-specific. I cannot grant "send as" permission, because that address is not a mailbox, but only an alias. Furthermore, the mail accounts are not Exchange, but IMAP! Does anybody have any other ideas to "remove" that annoying string? Consideration: we have to use Microsoft Outlook, for some reason! :-)

  • Why does Windows (file) Explorer try to connect to port 80 (http) instead of just using smb?

    - by Erik
    Background: on an almost freshly installed PC I get a message along the lines of "Windows cannot find some-file-server-name. Check the spelling and try again"... when trying to access any fileshare. Troubleshooting so far:

      - pinging works, both by IP and by name
      - the almost identical PC next to this one can access the file server
      - everyone else can access the file server
      - the PC in question cannot access other open fileshares, but it can connect to the internet

    And now for what I think is the interesting part: running Wireshark with ip.addr == local.ip.add.ress and ip.addr == server.ip.add.ress tells me that it tries to connect over http. The server replies, but after a few messages back and forth it stops. The other machine, of course, just uses smb. I guess port 80 just means it defaults to WebDAV, but I haven't been able to find anything that could cause this. Googling it, the closest thing I found was this: http://www.techrepublic.com/article/get-vista-and-samba-to-work/6353849 - but then again, this is an XP PC and I wasn't able to connect to other native Windows shares (and I tried the solution anyway and it didn't work).
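
    For reference, roughly the same capture can be reproduced from the command line with tshark (a sketch; the addresses are placeholders, and older tshark versions use -R instead of -Y):

      tshark -f "host 192.168.1.10 and host 192.168.1.20" \
             -Y "tcp.port == 80 || tcp.port == 445"       # does SMB (445) ever get tried?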

  • HP DL380 G3 2U For Basic Web Server in 2012

    - by ryandlf
    I have an opportunity to pick up a used HP DL380 G3 2U for $100. I'm looking for a basic entry-level web server that I can host a small-to-medium-size website on, and more or less learn the ins and outs of running my own web server before I bite the bullet and spend a couple grand on a server. The specs are:

      Dual (2) Intel Xeon 2.4GHz 400MHz 512KB Cache
      4GB PC2100 ECC Registered Memory
      6 x 72GB 10K U320 SCSI Hard Drives
      Smart Array 5i RAID Controller
      Redundant Power Supplies
      DVD/Floppy, Dual Intel GB NICs, USB

    Or would I be better off spending a couple hundred bucks on something like this new HP? It seems like the only major differences are SATA and a bit of storage, but I will likely be implementing a separate storage system of some sort anyway. I guess it also wouldn't hurt to mention that I plan on running a Linux server distro - would the hardware be likely to support Linux on a system that is 4 generations old? I don't mind spending a couple hundred extra dollars if it's a better solution, but as mentioned previously I am simply looking for a server to learn on, and probably use for a year or so while I put together a small-to-medium-size website.

  • Exchange 2010 mail routing with Hub Transport in multiple sites

    - by jmreicha
    I have two separate physical sites, Site A and Site B. In Site A, I have the following:

      2 CAS servers
      2 Hub Transport servers
      2 Mailbox servers
      2 Edge servers

    In Site B, I have the following:

      1 CAS
      1 Hub
      1 Mailbox
      1 Edge

    Currently everything is working out of Site A. That is, all users are housed on mailboxes that are in Site A, and all inbound mail flow points to Site A. For resiliency and redundancy purposes, I would eventually like to be able to move some of the mailboxes to Site B without causing a disruption, but I am not quite sure how to go about setting this up, or whether it is even possible. So far I have created an Edge subscription in Site B and am able to send emails out from test accounts set up with mailboxes on the Site B Mailbox server. However, I am unable to receive incoming mail messages, and am confused. So I'm thinking incoming messages are still being directed to Site A, and then they are getting stuck because there is no way to route the mail to the Site B mailboxes. Is this assumption correct? I am unfamiliar with mail flow and routing, so I am not really sure what I need to be looking at. Would I add the Site B Hub Transport to the Edge subscription in Site A? Or, I guess more specifically, how would I go about enabling communication and mail flow between mailboxes split up between Site A and Site B?

  • Same script, different behavior [migrated]

    - by Antoine_935
    I just stumbled upon an interesting bug... Still trying to figure out what exactly is happening. Maybe you can help. First, the context: I'm currently building yet another man-to-html converter (for some reasons I won't motivate here, but I need it). So, have a look at the screenshot below (see the link), more precisely at the outlined spots. See? On the upper shell, I have &lt; and &gt;, that is, escaped HTML. While on the shell below I have < and > directly. But as you can see (or do I seriously need a looking glass?), the command man 2 semget | webmanner is the same on both sides, as is the which webmanner. The two are executed roughly at the same moment, with no modification made to the script between. [Oops, cannot post pictures just yet... Here comes the link] http://aspyct.org/media/webmanner-bug.png But the shell below is older (opened about 1 hour ago). Newer shells all print out &lt;. So my first guess was that it somehow had a cached reference to the old inode of the file, or old blocks or whatever. So I modified parts of the script, at the start and then at the end, to print different messages. And, surprise, the messages showed up on both terminals. But still, the same difference between &lt; and <. I'm confused... How to explain that behavior? I'm working on OS X 10.8 (Mountain Lion). EDIT: OK, there is one big difference: the shell below uses Ruby 1.9.3, while the one above uses 1.8.7. Is there any known difference in string handling between the two versions?
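
    There is indeed a well-known string-handling change between those versions, checkable straight from each shell (a sketch; the interpreter is whatever which ruby resolves to):

      ruby -v
      ruby -e 'p "a"[0]'   # 1.8.7 prints 97 (a Fixnum); 1.9.x prints "a" (a String)

    Any escaping logic that indexes into a string and compares against character codes will behave differently under the two interpreters, which would produce exactly this kind of divergence.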

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information below; please let me know if you require additional information in order to assist. The server runs 1 single website, an online pizza-ordering platform built on Zend Framework (ver 1). Traffic stats from the last month show approx 6,000 pageloads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit aDSL line to 100/100 Mbit fiber (we had assumed the 2 Mbit line was the issue), and we still have performance issues at dinner time. The website is pretty snappy in low-load periods.

    Hardware:

      CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
      Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
      Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)

      Filesystem   Type   Size  Used  Avail  Capacity  Mounted on
      /dev/ad7s1a  ufs    4.8G  768M   3.7G       17%  /
      devfs        devfs  1.0K  1.0K     0B      100%  /dev
      /dev/ad7s1g  ufs    176G  5.2G   157G        3%  /home
      /dev/ad7s1e  ufs    4.8G  2.8M   4.5G        0%  /tmp
      /dev/ad7s1f  ufs     19G  3.5G    14G       19%  /usr
      /dev/ad7s1d  ufs    4.8G  550M   3.9G       12%  /var

    Server OS: FreeBSD 8.2-RELEASE

    Software: apache-2.2.17, php5-5.3.8, mysql-server-5.5

    Apache footprint (example, taken from # top):

      31140 www 1 45 0 377M 41588K lockf 2 0:00 0.00% httpd
      31122 www 1 44 0 375M 35416K lockf 2 0:00 0.00% httpd
      31109 www 1 44 0 375M 38188K lockf 2 0:00 0.00% httpd
      31113 www 1 44 0 375M 35188K lockf 2 0:00 0.00% httpd

    Apache is using the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (as in, it doesn't really work, and thus isn't used). There is a file containing settings for MPM modules, but as far as I can see it's not included in the httpd.conf file (the include line is commented out), so I would guess that the prefork MPM is working off default values too. Here are some other Apache conf values that I found, which are included in httpd.conf:

      Timeout 300
      KeepAlive On
      MaxKeepAliveRequests 100
      KeepAliveTimeout 5
      UseCanonicalName Off
      HostnameLookups Off
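
    Given the roughly 40 MB resident size per httpd process shown above, one hedged starting point is to re-enable that commented-out MPM include and size prefork explicitly (values below are illustrative, not recommendations; keep MaxClients x per-process RSS within the RAM you can spare):

      <IfModule mpm_prefork_module>
          StartServers          10
          MinSpareServers       10
          MaxSpareServers       30
          MaxClients           150
          MaxRequestsPerChild 1000
      </IfModule>
      KeepAliveTimeout 2

    Then validate and reload without dropping in-flight requests:

      apachectl configtest && apachectl graceful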

  • How can I disable 'natural breaks' in Workrave?

    - by Pixelastic
    I've just discovered Workrave, and was trying to use it alongside the Pomodoro technique (a 5 min break every 25 min). But Workrave's concept of 'natural breaks' seems to interfere with what I'm trying to achieve. Workrave guesses that I'm taking a natural break if I stop using my mouse and keyboard for longer than 5 s. It then stops the work timer and starts counting time as if I were on my break. Here is a typical example: I've configured a 5 min rest break every 25 min. I start working. 10 min later, I receive a phone call, or start talking with a colleague, or do some other work-related activity that needs neither keyboard nor mouse. Workrave then stops counting my time as work time and starts its rest timer. If my phone call is shorter than 5 min, Workrave will resume its timer where it stopped it, meaning that my time on the phone is not counted as work time, and so my break time is pushed a few minutes later than it should be. Even worse, if my phone call is longer than 5 min, Workrave counts it as a complete rest break, and when I resume working, it restarts its timer completely. I'm looking for either a way to disable the natural breaks, or a way to increase the 'inactivity time' from 5 s to maybe ~1 min. Or maybe another angle on natural breaks that might work with the Pomodoro technique (forced 5 min breaks every 25 min). I'm using Ubuntu 11.10.

  • Why is my new PC so slow at startup?

    - by rumtscho
    Bought a new PC this weekend, and it works really well. But I have one big problem: startup time. Its BIOS needs 62 s to load, then from Grub start to the password-entry screen it's another 26 s. I think this is a lot, because my old PC needs 34 s for the BIOS and another 8 s to the password screen. After I enter the password, the desktop is usable with practically no delay on both. The new PC is a Core i7-930 running 64-bit Lucid Lynx from an Intel Postville SSD (no internal HDs). The old PC is a Pentium 4 Celeron (I forgot the clock speed) running 32-bit Lucid Lynx from an ATA-100 hard drive. Neither PC is overclocked. The new one has boot sequence 1. DVD-ROM, 2. SSD (connected over SATA in AHCI mode), 3. removable drive. The old one boots from 1. DVD-ROM, 2. HDD, 3. floppy. Neither has a second OS installed. The new one has less software installed than the old one (I think), but the boot time difference was noticeable even before I made any installs. As far as I know, just the SSD should be enough to make a noticeable difference in boot time, and I thought that having a good mainboard on the new PC, as opposed to the basic office model on the old one, would also mean a faster-loading BIOS. If these assumptions are right, I guess I must have misconfigured something in the BIOS of the new PC. How should I configure it for a fast boot? It has an ASUS P6X58D board with an AMI BIOS; if you need the BIOS revision number I could post that too.

  • Interaction between two Clouds

    - by Snehal Masne
    I have set up Cloud-A with one [CLC+CC] machine and two [NC] computers. I have another Cloud-B with the same configuration, using the Ubuntu Enterprise Cloud. Both of them work fine individually, in the same LAN. Now if I want to add the NC of Cloud-A to the CC of Cloud-B [in case the resources of Cloud-B are exhausted], how can I make that possible? I guess this calls for the interoperability stuff... Could you please explain what happens exactly when we ask for an instance - does the direct interaction happen between the client and the NC, or does it go through the CLC and CC? What I want to say is: say there are multiple cloud providers, and a user is subscribed to one of them, say Cloud-A, for IaaS. As the requirements are dynamic, all the resources of Cloud-A may get exhausted. There may be another Cloud-B which can provide the services, but Cloud-A can't ask the client to go to Cloud-B. So is it possible to have some coordination between these two providers to share resources mutually, making the client fully unaware of what's going on in the background?

  • Mail Server using Postfix

    - by unknown (google)
    I have currently set up my web application on an Amazon EC2 server. It's a well-known fact that sending email from EC2 is problematic. As a cheap and long-lasting solution, instead of using "authsmtp", is it possible to rent a server and use it as a mail server? I am currently looking for cheap hosting that will give me root access, so that the box can be configured and used as a relayhost. I am currently using Postfix as the MTA. Has anyone implemented this before? I am curious about the feasibility of this solution. I guess the common requirements are:

      1. A dedicated IP which is not blacklisted.
      2. An open relay (open to my server only).

    Any tips for header configuration to keep the mails out of the spam folder? This is like exactly cloning authsmtp for personal use. Any suggestions for other mail server software instead of Postfix? Another problem is reverse DNS for this server: should a PTR record be present if a server is used as a relayhost?
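
    On the EC2 side, pointing Postfix at such a relay is a few main.cf settings - a sketch (the relay hostname is hypothetical, and the relay itself should accept mail only from your authenticated account or EC2 IP):

      postconf -e 'relayhost = [relay.example.com]:587'
      postconf -e 'smtp_sasl_auth_enable = yes'
      postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
      postconf -e 'smtp_tls_security_level = encrypt'
      postmap /etc/postfix/sasl_passwd   # file contains: [relay.example.com]:587 user:password
      postfix reload

    Authenticated submission like this avoids running a true open relay; and yes, the relay host should have a matching PTR record if its mail is to stay out of spam folders.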

  • Verification of downloaded package with rpm

    - by moooeeeep
    I wanted to install a package on CentOS 6 via rpm (e.g., the current epel-release). EDIT: Of course I would always prefer installation via yum, but somehow I failed to get that specific package installed using the normal approach. As such, the EPEL FAQ recommends Version 2 below. As I'm downloading the package through an insecure channel (http), I want to make sure that the integrity of the file is verified using information that is not provided with the downloaded file itself. Is this actually true for all of these approaches? I've seen various approaches to this on the internet:

    Version 1

      rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 2

      rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

    Version 3

      wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm
      rpm --import https://fedoraproject.org/static/0608B895.txt
      rpm -K epel-release-6-7.noarch.rpm
      rpm -i epel-release-6-7.noarch.rpm

    I do not know rpm very well, so I wondered how they might differ. My guess (after reading the manpage) is that the first should only be used when the package is not previously installed, the second would additionally remove previous versions of the package after installation, and the first two omit some verification steps before the actual installation that are done by rpm -K. So my main questions at this point are: Are my guesses correct, or am I missing something? Is the rpm --import ... implicitly done for the first two approaches as well, and if not, isn't it necessary to do so after all? Are the additional checks performed by rpm -K ... at all relevant? What is the best (most secure, most reliable, most maintainable, ...) way of installing packages via rpm in general?
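
    What the Version 3 check actually buys, sketched (the exact output format varies between rpm versions):

      rpm -K epel-release-6-7.noarch.rpm
      # epel-release-6-7.noarch.rpm: rsa sha1 (md5) pgp md5 OK
      # "pgp ... OK" appears only when the signature matches a key imported via
      # rpm --import; without the key you would see "(NOKEY)" instead.
      rpm -Kv epel-release-6-7.noarch.rpm   # verbose: one line per digest/signature

    Because the key is fetched over https from fedoraproject.org, the trust no longer depends on the http download itself - which is the property asked about.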

  • Setting up test and live environments - how?

    - by Sean
    I am a bit new to servers and stuff, so I had a question. I have my development team working on my website. They are in different countries, and currently they put all the work live on the test site. But the test site is open to anyone who knows the URL. It is behind a directory, but this affects my QA process because I cannot use the accurate URL structures while preventing the general public from seeing it. So what I want to do is: have my site live on the net, but only for me and my team - like an internal network. I will also need to mirror this to my live site when I put it live. So I guess this is something like setting up a staging and a live environment. So how do I do it? Are both environments on the same physical server, or do I need to buy two servers? And if I set up a staging environment, how will I and my team access it, since we are all spread out - I assume we need to log into something? What about the URL - do I need a different URL for the test site, or can I use the same live URL for the test site? I plan to get a dedicated server + CDN for my site.
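
    One common pattern, sketched (names and paths are hypothetical): run staging as a separate vhost on the same server, gated by HTTP basic auth, so its URL structure can mirror production exactly:

      htpasswd -c /etc/apache2/staging.htpasswd teammember   # create a team login

    and in the staging vhost configuration:

      <Location />
          AuthType Basic
          AuthName "Staging"
          AuthUserFile /etc/apache2/staging.htpasswd
          Require valid-user
      </Location>

    A staging subdomain (e.g. staging.example.com) keeps the live URL untouched while staying reachable to a distributed team over the plain internet.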

  • File descriptor linked to socket or pipe in proc

    - by primero
    I have a question regarding file descriptors and their linkage in the proc file system. I've observed that if I list the file descriptors of a certain process from proc:

      ls -la /proc/1234/fd

    I get the following output:

      lr-x------ 1 root root 64 Sep 13 07:12 0 -> /dev/null
      l-wx------ 1 root root 64 Sep 13 07:12 1 -> /dev/null
      l-wx------ 1 root root 64 Sep 13 07:12 2 -> /dev/null
      lr-x------ 1 root root 64 Sep 13 07:12 3 -> pipe:[2744159739]
      l-wx------ 1 root root 64 Sep 13 07:12 4 -> pipe:[2744159739]
      lrwx------ 1 root root 64 Sep 13 07:12 5 -> socket:[2744160313]
      lrwx------ 1 root root 64 Sep 13 07:12 6 -> /var/lib/log/some.log

    I get the meaning of a file descriptor, and I understand from my example the file descriptors 0, 1, 2 and 6 - they are tied to physical resources on my computer - and I guess 5 is connected to some resource on the network (because of the socket). But what I don't understand is the meaning of the numbers in the brackets. Do they point to some property of the resource? Also, why are some of the links broken? And lastly, as long as I've asked a question already :) what is a pipe?
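
    The bracketed number is the inode of the pipe or socket object inside the kernel; the link target is not a real path, which is why it shows as "broken". A sketch of mapping such an inode back to something useful (assuming lsof is installed):

      lsof | grep 2744160313                          # every process holding this socket (NODE column = inode)
      grep 2744160313 /proc/net/tcp /proc/net/unix    # locate the protocol table entry, if any

    And a pipe is a one-way, in-kernel byte channel: note that fd 3 (read end) and fd 4 (write end) above share inode 2744159739 - they are the two ends of the same pipe.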

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive, and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, but not so many that they endanger other parts of the drive from being saved. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that aren't backups, i.e. aren't backed up. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
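
    The concerns about read errors, retries, and not punishing the drive are what GNU ddrescue was built for - a sketch (the package is often named gddrescue; device and paths are examples):

      ddrescue -n /dev/sdb /mnt/new/rescue.img /mnt/new/rescue.log    # fast first pass, skip bad areas
      ddrescue -r3 /dev/sdb /mnt/new/rescue.img /mnt/new/rescue.log   # second pass, retry bad sectors 3x

    The log file makes the copy resumable, so it can be interrupted at any time (e.g. to let the drive cool down) and only the still-missing areas are retried later - the single-pass-then-retry behavior described above. The image can then be mounted read-only to fsck and pick out directories at leisure.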

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path; then configuring a new system compatible with the old one is a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is look at the corresponding configuration files and try to guess what you changed to achieve the desired effect. Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and reliant on human judgment. Another thing I've tried is to put /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories. I've also thought about performing all changes through shell scripts or makefiles, but I think this process would require a lot of effort and would be fragile. Are there better methods or tools that I'm missing?
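
    For the /etc-under-git approach specifically, etckeeper automates most of what is described here - a sketch (package names vary by distro):

      apt-get install etckeeper      # or the yum/ports equivalent
      etckeeper init                 # puts /etc under version control, preserving ownership/permissions
      etckeeper commit "baseline after installation"

    It runs as root (sidestepping the sudo issue), hooks the package manager so installs are committed automatically along with the package list, and secrets can be excluded via .gitignore. For the whole-system angle, configuration-management tools (Puppet, Chef, cfengine) are the usual answer to "shell scripts, but less fragile".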

  • Windows 8 "Upgrade Offer" eligibility when running the Consumer Preview in a VM?

    - by Dan Harris
    If I have a VM running the Windows 8 Consumer/Release Preview, am I allowed to take advantage of the Windows 8 upgrade offer and install it on that machine? I would have assumed not, as there was never a licensed version of XP SP3 through Windows 7 installed in that VM - it was a clean installation of the Consumer Preview into a VM. My confusion comes from the notes at the bottom of the download page for the upgrade offer, which state: "Offer valid from October 26, 2012 until January 31, 2013 and is for individuals and small businesses needing to upgrade up to five devices. If you are a business customer looking to upgrade more than five devices to Windows 8 Pro, contact your Microsoft partner for more information. To install Windows 8 Pro, customers must be running Windows XP SP3, Windows Vista, Windows 7, Windows 8 Consumer Preview, or Windows 8 Release Preview." I am assuming it's not possible and I'll need to purchase the System Builder edition to install within a VM? My guess is that you can use the downloaded upgrade offer only if you updated Windows 7 to the Release Preview, and therefore had a Windows 7 license on the machine. I used the serial number from the Microsoft website when downloading the Release Preview and did a clean install, so there was never a Windows 7 license on the VM. I have MSDN for development purposes, but I am looking to run the VM for personal use as well, so my MSDN license is not valid for that particular use.

  • ec2 spot instance for daily processing task

    - by chaft
    I don't have much experience as a sysadmin or with Amazon AWS, so I hope someone can explain this in simple terms, or refer me to a good guide on how to achieve the below. I have a system running on EC2 and Amazon RDS, getting data in and saving it to the DB. I need to run a script once a day (at the end of the day) to process all that data and prepare a daily report. This process will take approximately an hour to run, and it needs to run on a high-memory instance. From what I've read so far, I guess the best way to do it is to have a high-memory spot instance run every day, set up to execute the script on startup and shut down when done. Is that the right way to do it? If so, how do I do it? How do I tell the spot instance to run every day - through a cron job on the other server, or is there a better way? How do I set it up to run the script on startup - through cloud-init? Any help would be appreciated. One last thing: the job is not very time-sensitive, as long as it runs every day. Thanks.
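
    A sketch of the cron-driven variant, using the AWS CLI from the existing always-on server (price, schedule, and file names are hypothetical):

      # crontab entry: request the worker instance at 23:00 every day
      0 23 * * * aws ec2 request-spot-instances --spot-price "0.10" \
          --launch-specification file:///home/admin/spot-spec.json

      # spot-spec.json names the AMI, a high-memory instance type, and UserData -
      # a script that cloud-init runs at boot; it ends with:
      #   shutdown -h now    # spot instances terminate on shutdown

    Baking the report script into the AMI (or fetching it from S3 in the UserData script) keeps the nightly request self-contained, and since the job is not time-sensitive a below-on-demand spot bid is usually fine.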
