Search Results

Search found 9044 results on 362 pages for 'bad sector'.


  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data. Let's say date, volume, price and direction (sell/buy). Additionally there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction. Classical database referencing. Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot or whatever. To do this I need both buy and sell price in one row. My thought was to preprocess the CSV to get another CSV containing the needed information and then do the statistics. In the end I'd like a solution based on scripts and not on a spreadsheet, as the data might change often (it is exported from an online DB). My actual solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if there exists a corresponding transaction. If one is found, a new row is written to the destination CSV. If not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times. This causes long running times of 10 sec for 300 lines. As the line count might rise soon (10k lines), this is not ideal. I am aware that the script spawns a lot of subshells, which might cause the performance problems. Now my questions: Is bash/awk/sed/... a good way to do this? Should I first import all data into a "real" local database and use SQL? Is there an easy way to achieve the desired results?
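
    A two-pass awk join is one way to avoid re-reading the file for every row: the whole file is indexed by ID on the first pass and joined on the second. The sketch below is illustrative only; the column layout (ID in field 1, reference to the opening transaction in field 2, date/volume/price in fields 3-5) and the file name transactions.csv are assumptions, not taken from the real data.

        #!/usr/bin/env bash
        # Pass 1: remember every row, keyed by its transaction ID.
        # Pass 2: for rows that reference an earlier transaction, print one
        #         combined line containing both prices.
        awk -F',' -v OFS=',' '
            NR == FNR { row[$1] = $0; next }
            $2 != "" && ($2 in row) {
                split(row[$2], open, ",")
                print $3, $4, open[5], $5    # date, volume, opening price, closing price
            }
        ' transactions.csv transactions.csv > joined.csv

    This reads the file only twice and keeps the index in memory, so 10k lines should take well under a second; importing everything into SQLite and doing a self-join would be the next step up if the data outgrows that.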


  • FreeBSD jail for a small company - checklist - what shouldn't I forget

    - by cajwine
    Looking for a checklist for a "small company FreeBSD/jail server". Pretty common starting point: a FreeBSD jail (remote/headless) for the company: public web, email, ftp server, and a private (maybe in the future partially public) wiki (foswiki). 4 physical people (6 email addresses) plus one admin; the others will never use ssh. I have already done the usual hardening on the host side (pf, sshguard etc.). My major components are: dovecot, exim, apache22, proftpd, perl5.14.

    My plan: openssl self-signed certificates for exim, dovecot and proftpd (wildcard keys); an openssl self-signed certificate for apache (later I will go for a "trusted-signed" key).

    My questions are: Is it "good practice" to have one pair of wildcard SSL certificates for many programs (exim, dovecot, proftpd), or should I generate one key for each service? Should I add all 4 people as standard (unix) users, or should I go with virtual users? Asking because I have only a small number of users, and it is simpler to configure everything (exim, dovecot) for local users ($HOME/Maildir), plus the ability to set $HOME/.forward/vacation etc. Are there any (special) things I should consider? (e.g. maybe in the future we want to set up our own webmail - will this make any difference?) Any other recommendations?

    Thank you. Hoping that this question fits into the http://serverfault.com/faq under: Server and Business Workstation operating systems, hardware, software; Operations, maintenance, and monitoring. Looking for a checklist, but please explain why you're recommending it. See Good Subjective, Bad Subjective. Related: What's your suggested mail server configuration for a FreeBSD server?
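
    For the wildcard-certificate part of the plan, a single self-signed key/cert pair can be generated once and referenced from each daemon's config. The following is only a sketch - the CN, lifetime and file paths are made-up examples, not values from this setup:

        # Generate one self-signed wildcard key/certificate pair (example values):
        openssl req -x509 -nodes -newkey rsa:2048 -days 1095 \
            -subj "/CN=*.example.com" \
            -keyout /etc/ssl/private/wildcard.key \
            -out    /etc/ssl/certs/wildcard.crt
        # The daemons read the key at startup, so keep it root-readable only:
        chmod 600 /etc/ssl/private/wildcard.key

    Whether sharing one pair across exim, dovecot and proftpd is acceptable is mostly a question of exposure: a single leaked key then affects every service, which is the usual argument for per-service keys.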


  • iptables -- OK, **now** am I doing it right?

    - by Agvorth
    This is a follow-up to a previous question where I asked whether my iptables config is correct. CentOS 5.3 system. Intended result: block everything except ping, ssh, Apache, and SSL. Based on xenoterracide's advice and the other responses to the question (thanks guys), I created this script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F    # Flush all rules
        iptables -X    # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP
        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT
        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT
        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT
        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        # Block all other traffic
        iptables -A INPUT -j DROP

    Now when I list the rules I get:

        # iptables -L -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source   destination
            0     0 DROP    all  --  any any anywhere anywhere    state INVALID
            9   612 ACCEPT  all  --  any any anywhere anywhere    state RELATED,ESTABLISHED
            0     0 ACCEPT  all  --  lo  any anywhere anywhere
            0     0 ACCEPT  icmp --  any any anywhere anywhere
            0     0 ACCEPT  tcp  --  any any anywhere anywhere    tcp dpt:ssh
            0     0 ACCEPT  tcp  --  any any anywhere anywhere    tcp dpt:http
            0     0 ACCEPT  tcp  --  any any anywhere anywhere    tcp dpt:https
            0     0 DROP    all  --  any any anywhere anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in  out source   destination

        Chain OUTPUT (policy ACCEPT 5 packets, 644 bytes)
         pkts bytes target  prot opt in  out source   destination

    I ran it and I can still log in, so that's good. Anyone notice anything major out of whack?
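
    One thing the script above does not handle is persistence across a reboot. On CentOS 5 the stock iptables init script can save the running rule set; a sketch, assuming the standard iptables service package is installed:

        # Save the currently loaded rules to /etc/sysconfig/iptables
        /sbin/service iptables save
        # Make sure they are loaded again at boot
        /sbin/chkconfig iptables on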


  • Nginx/Puma RHEL unix socket permission error?

    - by Kevin Brown
    When I try to start my puma server, I get the error:

        /.rvm/gems/ruby-2.1.1/gems/puma-2.9.0/lib/puma/binder.rb:275:in `initialize': Permission denied - connect(2) for "/var/run/nvhbase.sock" (Errno::EACCES)

    My sites-available/nvhbase.conf file:

        upstream nvhbase {
            server unix:/var/run/nvhbase.sock;
        }

        server {
            listen 80 default_server;
            server_name 207.131.132.219;
            root /home/vf032500/dev/nvh/public;

            location / {
                proxy_pass http://unix:/var/run/nvhbase.sock;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection $connection_upgrade;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
            }
        }

    I don't know a lot about unix sockets, and everything works fine using TCP and puma's defaults. My rails app is in my user directory. Is that the problem? The socket is being created in /var/run -- I can start it in /tmp, but I've heard that's bad practice. And if I start the server in /tmp, I then can't access it via the server's IP -- then what? I'm happy to provide any needed info, I just don't know a whole lot about server/nginx/puma.
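
    The EACCES usually just means that the puma process, running as the unprivileged app user, is not allowed to create a socket in the root-owned /var/run. A common pattern is a dedicated, app-owned socket directory; the sketch below reuses the user and socket name from the question, but the directory name is made up:

        # As root: give the app user a directory it may create sockets in
        mkdir -p /var/run/nvh
        chown vf032500: /var/run/nvh

        # As the app user: bind puma to a socket inside that directory
        bundle exec puma -e production -b unix:///var/run/nvh/nvhbase.sock

    Note that /var/run is typically cleared at boot, so the mkdir/chown has to happen from an init script before puma starts, and the upstream/proxy_pass paths in the nginx config then need to point at the same socket path.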


  • How can I create a VLAN on my extreme switch for a separate subnet/domain?

    - by drpcken
    I'm putting together a small Active Directory implementation for a buddy of mine. I currently have 2 servers (one is the primary domain controller) and a couple of clients. I need to test and run updates on every machine on this domain, but I would have to plug them into my current LIVE domain to get them internet access. From what I've read, having two separate domains on a single subnet is a bad idea (even though it is temporary), so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my Extreme 48-port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access of course (one of the things I can't wrap my head around is routing internet traffic between subnets, since the gateway is on the production subnet). The switch is a Summit x450e-48p. My production domain is on subnet 192.168.200.0. The new domain I want to put online would go into subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!
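
    On ExtremeXOS (which the Summit x450e runs) the building blocks look roughly like the sketch below. The VLAN name, tag, port range and address are illustrative only, and internet access for the new subnet still depends on the upstream gateway having a route (or NAT) back to 192.168.10.0/24:

        create vlan "testlab"
        configure vlan testlab tag 10
        configure vlan testlab ipaddress 192.168.10.1/24
        configure vlan testlab add ports 45-48 untagged
        enable ipforwarding vlan testlab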


  • Which CPU for XEN - LAMP testbed - Budget

    - by deploymonkey
    Dear serverfault knowledgeables, I'm in a decision dilemma right now which I can't resolve due to a lack of hands-on experience. I need to build a testbed for basically virtualizing a LAMP application (OSes not yet decided), including server-side calculations. I'll opt for Xen since it seems better supported by cloud hosters at the moment. The hardware is for a proof of concept for a startup doing SaaS and might be used for a closed live alpha/beta later on. After testing, the testbed might be a) deployed as a colocated white-box server or b) used as a workstation. Requirements: a single socket is enough; we want ECC memory for reliability, which excludes most of the consumer line at Intel; if an Intel CPU, then a threaded CPU (HT) is preferred; at least 16 GB of RAM; and if justified by price, and reliability is not too bad, a high-quality desktop motherboard instead of a server board would be worth a try. It came down to the Opteron 6128 vs. the Xeon 5620 for me after a lot of research, but I don't necessarily have to be right. Which CPU is preferable concerning TCO (motherboard price, power requirements 24/7, ...): the Opteron 6128 or the Xeon 5620? Which one offers better performance in real-world applications? (Do you have any other suggestions I probably overlooked?) Thank you for your consideration.


  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstall it all and choose ext2 at partitioning time? (I'd rather not, though, since I've configured quite some stuff already.)
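
    One step the quoted instructions gloss over: e2fsck must not be run on a mounted filesystem, so for a boot/root partition the journal removal has to be done from a live CD/USB session. A sketch, with /dev/sda1 standing in for whatever the real root partition is:

        # From a live session, with the target partition unmounted:
        sudo tune2fs -O ^has_journal /dev/sda1
        sudo e2fsck -f /dev/sda1
        # Verify the journal is really gone:
        sudo dumpe2fs -h /dev/sda1 | grep -i features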


  • Automated Syslog Error Solution Finder

    - by Dru
    Are there any automated syslog solution-finding frameworks? I want my central syslog server to email a list of problems, their severity and suggested solutions. There have been several questions about centralising system logs and alternative log analysis systems, but I don't get the impression that any of them help with issue resolution. A little background: At work I am now literally doing the work of two people, and both jobs have expanded beyond their initial frameworks. It is not so bad, as I have helpers, but they are little more than smart monkeys. While one of my predecessors [I have two, that is how I know I have the jobs of two people] set up logwatch to email its results out, my monkeys don't have the skills necessary to identify unimportant data. This has caused all of them, and myself sadly, to set up email filters and ignore the whole thing until something goes "bang". It would be handy to have someone else tell them what is important, what is connected, and to suggest a few ways to resolve the issue (I could train them to research the solution first, ha!). My reading of the Splunk and Octopussy sites indicates that I still need to bring my own highly trained monkey to the party - which I am several years from having.


  • Is reliability reputation of mechanical keyboards overblown?

    - by Rarst
    A while back I worked up to finally buying a mechanical keyboard (~$100 range, "black" switches) and was initially quite content with the purchase. However, just outside the first year (read: as soon as the warranty expired) it started to develop repeat issues (press once, get a chain of the letter repeated) on multiple keys. It doesn't react to generic cleaning (up to compressed air), and searching the Internet shows a noticeable number of people with similar-to-identical issues, spanning years. This makes me severely hesitant to buy another mechanical keyboard, considering: every other keyboard I ever owned, including ultra-cheap crap, managed to last longer than that; the typing experience is nice, but not lifechanging-fan-forever nice for me; my choice of mechanical keyboards is severely limited (not many brands are represented in the local market, and primarily crazy-looking gamer models; needing a Russian - ideally Russian and Ukrainian - layout excludes international ordering); and the price tag for the meek year of use I got out of it is plain demoralizing. It is obvious mechanical keyboards have their fans, but shopping around for the "best fit" or getting into multiple-hundreds price tags is not something I am highly interested in. Considering my constraints and bad experience with reliability, is it practical for me to sink more money into mechanical keyboards again? In other words: manufacturers are beaming about how crazy reliable mechanical keyboards are. Are active long-time users of such keyboards confidently of the same opinion?


  • Rebuilding LVM after RAID recovery

    - by Xiong Chiamiov
    I have 4 disks RAID-5ed to create md0, and another 4 disks RAID-5ed to create md1. These are then combined via LVM to create one partition. There was a power outage while I was gone, and when I got back, it looked like one of the disks in md1 was out of sync - mdadm kept claiming that it only could find 3 of the 4 drives. The only thing I could do to get anything to happen was to use mdadm --create on those four disks, then let it rebuild the array. This seemed like a bad idea to me, but none of the stuff I had was critical (although it'd take a while to get it all back), and a thread somewhere claimed that this would fix things. If this trashed all of my data, then I suppose you can stop reading and just tell me that. After waiting four hours for the array to rebuild, md1 looked fine (I guess), but the lvm was complaining about not being able to find a device with the correct UUID, presumably because md1 changed UUIDs. I used the pvcreate and vgcfgrestore commands as documented here. Attempting to run an lvchange -a y on it, however, gives me a resume ioctl failed message. Is there any hope for me to recover my data, or have I completely mucked it up?
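
    For the lost-UUID part, the usual LVM recovery sequence is sketched below; parts of it were evidently already attempted. The VG name and PV UUID are placeholders (they can be read out of /etc/lvm/backup or /etc/lvm/archive on the machine), and none of this helps if the mdadm --create rebuild already scrambled the underlying data:

        # Recreate the PV label on the new md1 with the UUID LVM expects,
        # using the most recent metadata backup:
        pvcreate --uuid "<old-PV-UUID>" --restorefile /etc/lvm/backup/<vgname> /dev/md1
        # Restore the volume group metadata and activate it:
        vgcfgrestore <vgname>
        vgchange -ay <vgname>
        lvs    # the logical volume should reappear; fsck it before mounting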


  • Windows Server 2003 print spooler

    - by mikenardone
    Hello everyone at ServerFault, I am new to this website; I have been coming here to fix my own problems and I believe everyone here is great. I could not find this issue anywhere, though I am sure other people have had it. I have an IBM x3850 with 48 GB RAM, 2 TB of hard drives, four NICs and two Xeon 1.7 CPUs. It is running VMware ESX (the paid version, I believe; if not, then it is ESXi), with 7 virtual servers on it, all Windows Server 2003. On one of the servers the CPU keeps hitting 100%, and when I go into Task Manager and look at the processes, it is the print spooler. I have 30 different HP LaserJet printers and two HP copiers. I believe it is a driver issue, but I can't figure out which one is causing it. Are there any programs for Windows Server 2003 that find bad print drivers?


  • Windows 7 starts getting sluggish over a few days

    - by munrobasher
    Myself and the other developer are running Windows 7 Enterprise 64-bit with 8 GB RAM on different Gigabyte motherboards with quad-core Intel CPUs. Most of the time, it runs like a dream. We use VMware Workstation a lot (hence the 8 GB) and that works well. Except... now and then, after the PCs have been on for a few days, the whole system starts getting really sluggish doing certain tasks. The other developer's system is far worse than mine, with it taking up to a minute to launch IE. Today, mine has gone sluggish but nowhere near as bad. For example, normally when I click on a new tab in IE, it's instant; today, there's an obvious delay. Right-clicking in this window to trigger iSpell is normally instant; right now it takes about five seconds. I've got Resource Monitor open on my second monitor, and when I did that right-click there was no obvious peak in CPU, disk or memory. A reboot does fix it, so it does sound like a resource issue, but I haven't a clue what might be to blame. The two computers have similarities (same spec) but also differences (like motherboard, RAM & CPU models). So I guess the question is: any pointers on diagnosing why a PC is sluggish? What could cause such a right-click slowdown in IE, for example? It sounds like such a simple operation. NOTE: whilst typing this message alone, it was fine performance-wise. I can click around the page no problem, but right-click is still noticeably slow. Will reboot over lunch... Cheers, Rob.


  • Can Spotlight or Media Browser index metadata contained in iPhoto or Aperture in Mac OS X?

    - by jaydles
    It seems silly to go to all the trouble to assign "Face" data to thousands of photos, but not make it possible to use that data to locate them outside of that application. Is there any way to get Spotlight or Media Browser in OS X (Snow Leopard) to index and recognize metadata (Faces, Places, etc.) contained in iPhoto or Aperture? I know that metadata is stored in the "library" database for Aperture/iPhoto, rather than on the actual files (which is too bad). And I can even potentially see why it might create challenges for Spotlight to use it, since Spotlight is presumably a file index system, not a media organizer, but surely the Media Browser used across the other OS X apps is intended to use it? The Media Browser's whole purpose seems to be to let you easily locate and reference the items you organize in one of the iLife apps (iPhoto or Aperture, in this case) from the others (say, iMovie, or Mail). It's particularly vexing since the photo app on the iPhone sorts by faces by default. Additionally, the Mac-based Media Browser does access smart albums and folders, so you could establish a workaround by creating a smart album for each "face" or place or tag and access them that way, but it seems like there must be an easier way. Am I missing something?


  • What are the replacement options for an IDE hard disk for a DOS based system?

    - by dummzeuch
    I have got a few "embedded" systems running MS-DOS 6.2 which boot from and store data to IDE hard disks. Since these drives are nearing their end of life, the question arises how we can replace them. The requirements are: DOS must be able to install and boot from these drives; they must be able to sustain heavy (mostly) write access; and if possible, they should be able to survive moderate vibrations (not too bad, since the current hard disks have survived several years of that).

    I have considered the following options so far:

    Other IDE hard drives: Unfortunately modern IDE drives are too large, so DOS cannot boot from them even if I create small partitions. Older IDE drives are just that: old, so they are probably not the most reliable ones any more.

    SSDs: There are a few SSDs with an IDE interface available. I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement provided that DOS can boot from them and that writing speed does not deteriorate too much (the old hard disks are no race cars either).

    Compact Flash: There are adapters for using CF with IDE controllers and they work fine. DOS can boot from them and they have no problems at all with vibrations. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to.

    IDE-to-SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system.

    Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)


  • SQL Server Installation: Is it 32 or 64 bit?

    - by CapBBeard
    Hi, recently I was performing an OS upgrade on one of our DB servers, moving from Server 2003 to Server 2008. The DBMS is SQL Server 2005. While reinstalling SQL on the new Windows installation, I went to another of our DB servers to verify a couple of settings. Now, I always thought this second server was Server 2003 x64 + SQL 2005 x64 (from what I'd been told), but I now have my doubts about this. I now suspect that it is in fact only 32-bit SQL, and I'd like to verify this. Here are some details: the OS is definitely 64-bit; xp_msver shows Platform as NT INTEL X86; SELECT @@VERSION shows Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)...; however, sqlservr.exe is not shown with '*32' in Task Manager - does anyone know why this is the case, if it is in fact 32-bit as claimed? Despite this, it does seem to be running out of the x86 Program Files folder. If I do the same checks on a confirmed 64-bit installation, they give back the expected 64-bit readings, which can only prove that this server in question is running 32-bit.

    Now, that being the case, the question arises of how much memory this '32-bit' install can use. Task Manager reports about 3.5 GB memory usage for sqlservr.exe (the server has 16 GB physical). I suspect that AWE has not been configured at all, and therefore the server will be significantly under-utilised (remembering that the OS is 64-bit) if SQL is simply using a 32-bit address space. Is this assumption correct? I feel the server should have SQL reinstalled as 64-bit in order to fully utilise the hardware platform, but it is currently heavily in production; this will be no easy task. I suspect we may just have to configure AWE correctly and let it be for the time being (unless this is a bad idea?). I apologise that this question is a little vague/lost; I'm no SQL expert, just trying to get a handle on what's going on here.
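
    If the interim plan is to enable AWE on the 32-bit instance, the usual knobs are a few sp_configure settings plus the "Lock Pages in Memory" privilege for the SQL Server service account. A sketch only - the memory figure is an example, not a recommendation for this server:

        sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
        sqlcmd -S . -E -Q "EXEC sp_configure 'awe enabled', 1; RECONFIGURE;"
        sqlcmd -S . -E -Q "EXEC sp_configure 'max server memory', 12288; RECONFIGURE;"

    The instance has to be restarted before 'awe enabled' takes effect, and AWE memory on a 32-bit instance is used for the buffer pool (data cache) only.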


  • Anonymous Login attempts from IPs all over Asia, how do I stop them from being able to do this?

    - by Ryan
    We had a successful hack attempt from Russia and one of our servers was used as a staging ground for further attacks; somehow they managed to get access to a Windows account called 'services'. I took that server offline, as it was our SMTP server and is no longer needed (a 3rd-party system is in place now). Now some of our other servers are showing these ANONYMOUS LOGON attempts in the Event Viewer, with IP addresses coming from China, Romania, Italy (I guess there's some Europe in there too)... I don't know what these people want, but they just keep hitting the servers. How can I prevent this? I don't want our servers compromised again; last time our host took our entire hardware node off the network because it was attacking other systems, causing our services to go down, which is really bad. How can I prevent these strange IP addresses from trying to access my servers? They are Windows Server 2003 R2 Enterprise 'containers' (virtual machines) running on a Parallels Virtuozzo HW node, if that makes a difference. I can configure each machine individually as if it were its own server, of course...

    UPDATE: New login attempts are still happening; now these ones trace back to Ukraine... WTF. Here is the event:

        Successful Network Logon:
            User Name:
            Domain:
            Logon ID:               (0x0,0xB4FEB30C)
            Logon Type:             3
            Logon Process:          NtLmSsp
            Authentication Package: NTLM
            Workstation Name:       REANIMAT-328817
            Logon GUID:             -
            Caller User Name:       -
            Caller Domain:          -
            Caller Logon ID:        -
            Caller Process ID:      -
            Transited Services:     -
            Source Network Address: 94.179.189.117
            Source Port:            0

    Here is one from France I found too:

        Event Type:     Success Audit
        Event Source:   Security
        Event Category: Logon/Logoff
        Event ID:       540
        Date:           1/20/2011
        Time:           11:09:50 AM
        User:           NT AUTHORITY\ANONYMOUS LOGON
        Computer:       QA
        Description:
        Successful Network Logon:
            User Name:
            Domain:
            Logon ID:               (0x0,0xB35D8539)
            Logon Type:             3
            Logon Process:          NtLmSsp
            Authentication Package: NTLM
            Workstation Name:       COMPUTER
            Logon GUID:             -
            Caller User Name:       -
            Caller Domain:          -
            Caller Logon ID:        -
            Caller Process ID:      -
            Transited Services:     -
            Source Network Address: 82.238.39.154
            Source Port:            0


  • How do I determine how future-proof and stable a router is?

    - by Aarthi
    I mentioned in my last question that my wireless router had a bad habit of crashing. After consulting with the Super User chatroom, as well as my sysadmin, I've decided I may as well purchase a new router. However, I'm unsure how to evaluate all these tech specs that get touted about. The two things that seem to be the most important to me are: (1) keeping my router future-proof (as standards evolve and change), and (2) ensuring its stability. Unfortunately, I'm not sure what, exactly, I should be looking for in the tech specs or the item description that can give me a good idea of how stable or future-proof my device will be. What should I look for? Can I determine stability without having to try the device out myself? Please note: I'm not a battle-hardened power user by any means, so I'll likely be reliant on the stock firmware for my router. My last router lasted me about four years, so I mostly just want something that'll cover a 500 sqft apartment in New York with minimal crashing, so that I can watch Hulu in peace. And make Skype calls. If it helps, the router models that I'm currently deciding between are this ASUS one and this Linksys one.


  • SWATCH - what am I doing wrong?

    - by Brian Dunbar
    What I want/need/desire is to log when a user logs into my FTP server. Problem: I can't make swatch work the way I should be able to. This data is logged to a file - but of course these logs are not kept very long. I can't keep the logs around forever, but I can extract data from them, analyze it, and store the results elsewhere. If there is a better way to do this than the following, I'm all ears.

    Versions: Swatch 3.2.3, Perl 5.12, FTP: vsftpd, OS (test): OS X 10.6.8, OS (production): Solaris.

    From man I see I can pass contents to a command, so I should be able to echo those values to a file and do a sed/cut/uniq thing on them for stats:

        $ man swatch
        (snip)
        exec command
            Execute command.  The command may contain variables which are substituted
            with fields from the matched line.  A $N will be replaced by the Nth field
            in the line.  A $0 or $* will be replaced by the entire line.

    Swatch file .swatchrc:

        watchfor /OK LOGIN/
            echo=red
            pipe "echo "0: $0 1:$1 2:$2 3:$3 4:$4 5:$5" >> /Users/bdunbar/dev/ftplog/output.txt"

    Launch with:

        $ swatch -c /Users/bdunbar/.swatchrc --script-dir /Users/bdunbar/dev/ftplog -t /Users/bdunbar/dev/ftplog/vsftpd.log &

    Test:

        echo "Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client "206.209.255.227"" >> vsftpd.log

    Results - it's echoing to the TTY. This is not needed or desired on the server, but it does tell me things are working:

        *** swatch version 3.2.3 (pid:25780) started at Mon Jul 9 15:23:33 CDT 2012
        Mon July 9 03:11:07 2012 [pid 14938] [aetech] OK LOGIN: Client 206.209.255.227

    Results - bad! I appear to not be sending the variables to the text file:

        $ tail -f output.txt
        0: /Users/bdunbar/dev/ftplog/.swatch_script.25780 1: 2: 3: 4: 5:
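
    One thing that stands out: the man-page excerpt above documents the $N substitution for exec, while the config uses pipe with nested double quotes, so the shell (not swatch) ends up expanding $0..$5 - which would explain the output shown. A minimal, untested sketch of an exec-based rule, reusing the paths from the question:

        watchfor /OK LOGIN/
            echo=red
            exec "echo '$0' >> /Users/bdunbar/dev/ftplog/output.txt"

    A cron job can then run the cut/sort/uniq statistics over output.txt instead of doing it inline.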


  • Second monitor stopped being recognized by Windows

    - by Eric J.
    One of the developer PCs running Windows Vista Ultimate had the second monitor stop being recognized in Windows overnight. There were no hardware or driver changes at the time, though I have subsequently updated to the latest NVIDIA drivers (the card is an NVIDIA GeForce 210). The non-recognized monitor IS recognized during the boot sequence; in fact, only the "bad" one shows the POST or the Windows loading screen. At some point during Windows initialization, after the loading screen disappears and before the logon screen appears, the active monitor switches. Any thoughts? UPDATE: When I open the Vista monitor properties window, I see my primary and secondary displays depicted. The primary one is portrayed as the regular blue box, but the secondary one is portrayed greyed-out. I have the option to "Extend desktop to this monitor", the only resolution is 800x600, and all of the advanced monitor properties are greyed out as well. If I opt to extend the desktop, the greyed-out box turns blue; when I then select Apply, the screens flash as usual and I'm given the 15-second countdown to accept the new settings, and when I do, everything returns to the previously broken state - the secondary monitor is portrayed greyed-out again. At no point is the desktop shown on the secondary monitor.


  • Hard drive degradation from large memory usage and paging files?

    - by Stephen R
    I've had a few questions regarding computer degradation going through my head for a while and haven't found many good resources for researching them.

    1) First off, when is the virtual RAM/paging file on a hard drive used by Windows? Is it used when the RAM is full? Or does it use the virtual RAM/paging file as intermediate caching between the RAM and actual hard drive space all the time?

    2) If I were to run many applications on my computer at the same time and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the virtual RAM/paging file than if I were to have fewer programs running? Just to note, the RAM never fills up on my computer, but it is used heavily.

    3) By extension of question 2, if the virtual RAM/paging file is used more heavily, would that result in rapid hard drive degradation?

    I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs, among other programs, which will typically eat up 40% of my memory. Over time my computer slows down, browsers start crashing, programs start seizing up or crashing themselves, and eventually the computer becomes essentially unusable. I have been trying to rack my mind to come up with a solution other than purchasing a new PC only to have it die on me in the next couple of years as well. This is the only thought that has come to mind that might have a simple hardware fix... Windows ReadyBoost... maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.


  • Why is my Mac not displaying anything to my LCD tv using HDMI?

    - by Pure.Krome
    Hi folks, I've got an iMac desktop computer. Love it. I wish to connect it to my LCD TV using HDMI. There is no HDMI output on the iMac, so I had to buy one of these bad boys - so now I can output video (via the Mini DisplayPort) and sound (via USB) through this box to my LCD. It works great... with a single direct cable. I have another 3 or 5 metre cable run inside my wall, so I do not have to have a silly HDMI cable floating in the air between my iMac and my LCD TV. When I use this, there is no picture. To better explain all of this, I made a quick video describing my problem in detail, so you can see exactly what is going on/wrong. I've also tried changing the output format for the TV from 1080i down to 720p and even lower, in case the cable in the wall doesn't allow 1080i. Here's the video with the full explanation: http://www.youtube.com/watch?v=ZkKRKnRIh6Q (NOTE: I incorrectly said in the video that the hidden wall cable is 10 metres long. me == fail. It's 3m or 5m...). Can someone please watch it and suggest some ideas for getting it working?


  • Organizing my music and my iTunes

    - by Cawas
    What can we do to organize our music? I've got over 20k items in my iTunes library, at least 5k with ratings and play counts, and apparently just 12k music files, and I can't understand how this question has not been properly answered yet. Maybe there is no answer. I have too many duplicates, broken links, bad music, corrupted files... Well, a big mess with no tags! Probably there's no single piece of software capable of just organizing everything, though I'd love one. Hopefully some time in the near future we all will be able to just sync the cloud of our automagically selected music to the newly created offline copy. But meanwhile... Please do consider that I've at least given a shot (even while not a full test drive) to every single answer linked here already, plus a few more. I'm fine with using other software (Mac too, please) to organize, but I'd need it to sync (retrieve and put back) at least iTunes ratings, because of the iPhone and smart playlists. Not looking for an iTunes replacement. I'm hoping to hear what you hardcore music organizers out there are using as your own solutions! :) I myself am using way too many tools, getting way too little done, and end up going song by song.


  • Intermittently uncommunicative subnets

    - by mhd
    Last week proved me a veritable Cassandra: I've always said that it's a bad idea to have only one firewall/router, without a backup or failover. And thus our Cisco PIX went haywire, refusing to route properly. And of course, the only one available here on short notice is me, and while I'm quite grounded in Linux, I'm really a developer, not a sysadmin (the fact that this hit me on sysadmin appreciation day is a bit ironic). Anyway, this weekend I tried to hack up a temporary solution: I used an old server with enough NICs (two built-in, four on a card) to serve as a gateway and firewall. Due to some problems with the RAID controller, I got only two router distros running, and between Untangle and eBox I decided on the latter. Now everything is quite okay. I've got all the different subnets we've got here (all with separate switches) talking to each other and even to the internet (Cisco 2800 router, T1 lines). But from time to time (at 20-60 minute intervals), I get a total routing failure. Our main office subnet can't talk to our server subnet and can't connect to the internet. This is not the end of a gradual slowdown; either everything's working perfectly or I get a total lack of communication for about two minutes each time. Now I'm a bit at my wits' end as to what to check. At least with the default eBox setup, nothing in /var/log shows anything weird, and it doesn't exactly have lots of built-in monitoring tools. So I'm hoping someone here could give me some pointers about what to look out for. I did change the ethernet cable from the office switch to the firewall, with no results. I might change switches, although within the switch it seems to work well enough. Edit: I'm not sure whether this is the sole cause of the problem, but after I noticed a few DHCP entries just before the last drop of connectivity, I tried to reproduce that. And alas, whenever I renew a DHCP lease, I can't access other subnets anymore. Running ISC DHCPD 3.0.6.



  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use apache as the backend server and nginx on the frontend. Apache listens to port 8080 and nginx to port 80. What I do is have the root point to the public folder foreach virtualhost: <VirtualHost *:8080> ServerAdmin webmaster@localhost ServerName site.com ServerAlias site.com *.site.com DocumentRoot /var/www/site.com/public <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/site.com/public/> Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> </VirtualHost> And here's the nginx config: server { listen 80; access_log /var/log/nginx.access.log; error_log /var/log/nginx.error.log; root /var/www/site.com/public; index index.php index.html; server_name site.com *.site.com; location / { location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8080; proxy_cache one; proxy_cache_use_stale error timeout invalid_header updating; proxy_cache_key $scheme$host$request_uri; proxy_cache_valid 200 301 302 20m; proxy_cache_valid 404 1m; proxy_cache_valid any 15m; } } location ~ /\.(ht|git) { deny all; } } The problem is Apache resolves the domain just fine (site.com:8080), but nginx shows instead a 502 Bad Gateway (site.com:80). I tried looking at the error_log and access_log but I can't find any hint for why can't nginx work. EDIT: The problem was I wasn't able to include that isolated config for nginx.

