Search Results

Search found 7940 results on 318 pages for 'intel wireless'.

Page 288/318 | < Previous Page | 284 285 286 287 288 289 290 291 292 293 294 295  | Next Page >

  • How can I make XAnalogTV fill my screen?

    - by Breakthrough
    I recently installed xscreensaver, as well as the additional/extra screensavers. Many of the OpenGL ones function correctly, going fullscreen as expected. However, for some reason, the XAnalogTV screensaver leaves two "blank" spots on the edges of my screen. If I manually launch XAnalogTV, it displays a window, which it fills correctly. When I maximize the window, the same effect occurs: the window maximizes, but the two edges of the screen are literally "transparent". This effect also occurs when the screensaver is set to fullscreen. For these reasons, I believe the problem may be related to the aspect ratio of the screen. The edges of the screen are literally "ignored", with nothing being drawn there. Specifically, note the transition between the maximized and full-screen screenshots (with the un-drawn whitespace shrinking as the vertical height has been increased). For reference, I am running Xubuntu 12.04 on a Dell Vostro 1520 (Intel P8600, Nvidia 9300M) with a 1440 x 900 display (16:10). I have also set the GetViewPortIsFullOfLies preference to true. Is there any way to force XAnalogTV to fill my entire screen? Relevant screenshots (windowed, maximized, and full-screen, respectively):
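    For reference, a sketch of where that preference lives and how the hack can be tested by hand (the file excerpt is an assumption about a typical ~/.xscreensaver; the path is the usual Ubuntu packaging location for the hacks):

      # ~/.xscreensaver (excerpt)
      GetViewPortIsFullOfLies: True

      # running the hack by hand against the root window, to test outside xscreensaver
      /usr/lib/xscreensaver/xanalogtv -root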

    Read the article

  • Troubleshooting: Monitor never turns on, system fans running, DVD-ROM does not open.

    - by Wesley
    Hi all, here are my specs beforehand:
      ECS P4VXASD2+ (V5.0) motherboard, FSB 533MHz
      Intel Pentium 4 2.40A GHz Prescott, Socket 478
      2x 256MB PC2100 DDR RAM, 2x 256MB PC133 SDRAM
      CoolMax 350W PSU
      DVD-ROM drive - will edit with brand & model
      128MB ATi Radeon 9800 Pro AGP
      No hard drive
    So, I just put those parts together today and tried to power it up, with the monitor connected to the Radeon 9800 in the AGP slot (the mobo does not have a VGA port). After turning it on, the CPU fan, graphics fan and system fan spin up. However, the monitor remains in standby mode despite being plugged in. Also, the DVD-ROM drive does not open when I push its button, even though I've used that drive before with absolutely no issues. The graphics card was slightly buggy when I put it in another machine, which had been left outside in winter weather for 3 months (still, that computer's integrated graphics worked fine). The CMOS battery was replaced and the jumpers are all set correctly. Now, I'm wondering whether the motherboard, CPU, PSU or GPU is the problem. What can I do to test which part is at fault? Just to clarify, I don't have a hard drive, so I usually boot Ubuntu from the disc drive. Anyways, thanks in advance!

    Read the article

  • need advice on data center move, communication with both facilities during transition

    - by Brian Roden
    We are beginning the process of moving to a new facility. Office and warehouse operations will both be moving, and we must get shipping operations up and running at the new location while continuing to ship from the old one. Our contract with some third-party warehouse tenants requires a two-business-day turnaround (only weekends and holidays excluded), so we can't have major downtime during the move. We would like to keep our 172.16.60/61.xxx internal address space in use throughout the move. Is it possible to keep using this same internal range, and have our existing WatchGuard Firebox 520 and whatever router we get for the other location (preferably the same model) treat both locations as one network, leaving our host IPs the same throughout the move? Renumbering the servers when they move isn't a big deal, but our wireless terminals for order picking in the warehouse have fixed IPs (and a fixed-IP, non-DNS reference to the host they speak with), so reconfiguring them around the server move would be a massive undertaking (each device would have to be reconfigured at least twice -- some when we start using them in the new building while the host is still here, all of them in both locations when the host moves to the new building, and the rest when they finally make the move to the new building). We're trying to avoid that if possible.
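    If it helps to picture what "treat both locations as one network" would mean here, a rough addressing sketch (an illustration only, not a tested design; the link in the middle is a placeholder for whatever bridged or routed connection ends up joining the sites):

      172.16.60.0/23  - a single range covering both 172.16.60.x and 172.16.61.x
      old site (Firebox 520)  <== site-to-site link ==>  new site (second router)
      hosts keep their existing 172.16.60/61.xxx addresses on either side of the link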

    Read the article

  • Loss of network connectivity when playing video on Optoma HD180 projector

    - by Jeff Fohl
    Hi Folks - New to Super User, so I hope this question fits in with the guidelines. I'm having a very strange problem, and I am at a loss as to how to continue troubleshooting it. The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels. This is my setup: I have a Dell H2C 730x running Windows 7 64bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6Mbps. The problem occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector, and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time. I can move the mouse around on the projector without triggering the problem. I tried pinging my router when I was playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or times out completely. So, it clearly is something local to my machine - and not some sort of throttling occurring down the line. I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

    Read the article

  • PC shut downs automatically after 10-20 second. No POST screen, no beeps

    - by emzero
    I have this not-so-old computer that hasn't been used for a year or so. Specs:
      Motherboard: ASUS PN5-E SLI
      CPU: Intel Core2Duo E4300
      RAM: 2x2GB SuperTalent DDR2-800
      VGA: Zogis GeForce 7950GT
      PSU: Vitsuba San-55-S 550w
      HD: No hard drives yet
    When I power on the computer, everything seems to start, but right away the whole system shuts down. I've removed and swapped the RAM sticks, taken out the VGA, everything I could think of. So what could be causing this? The PSU? A dead motherboard? The CPU? Any help to isolate the problem would be useful. Thanks. PS: Please don't close the question; this could be helpful to anybody having a similar problem, even with different hardware.
    UPDATE: I've removed the old thermal paste and applied a brand new coat. I also cleaned out all the dust using a high-pressure gas duster. Checked for bad capacitors; all of them seem OK. Opened the PSU, removed big giant dust balls, and cleaned it with the duster as well. Still the same problem, but now it stays powered on for almost 20 seconds, maybe. But no POST screen, no beeps at all, nothing. So I suspect a motherboard or PSU failure. Unfortunately I don't have a PSU tester to test the power supply... Don't know what else to try. I don't have another socket 775 motherboard to test the CPU, RAM and VGA with.

    Read the article

  • Cannot set video resolution above 640x480 after installing Windows XP SP2

    - by waanders
    I've installed Windows XP SP2 on a computer (it had no service pack at all before). Now the display settings are set back to 640x480 and 4-bit color, and I can't change them; it's the only option in the Settings tab of the Windows Display dialog. The screen looks awful now. How can I solve this problem?
    UPDATE: It seems to be a problem with the video driver (thanks @Karan and @Hennes). I ran Speccy (PC-Wizard freezes the computer) and this is part of the log file:
      Summary
        Operating System: Microsoft Windows XP Professional 32-bit SP3
        CPU: Intel Celeron Willamette 0.18um Technology
        RAM: 512 MB DDR @ 133MHz (2.5-3-3-6)
        Motherboard: COMPAQ 0838h (FC-478)
        Graphics: Standard Monitor (640x480@1Hz)
        Hard Drives: 19.0GB Maxtor 2B020H1 (PATA)
        Optical Drives: No optical disk drives detected
        Audio: No audio card detected
      ...
      Graphics
        Monitor Name: Standard Monitor on
        Current Resolution: 640x480 pixels
        Work Resolution: 640x450 pixels
        State: enabled, primary
        Monitor Width: 640
        Monitor Height: 480
        Monitor BPP: 4 bits per pixel
        Monitor Frequency: 1 Hz
        Device: \\.\DISPLAY1
        OpenGL Version: 1.1.0
        Vendor: Microsoft Corporation
        Renderer: GDI Generic
        GLU Version: 1.2.2.0 Microsoft Corporation
        Values: GL_MAX_LIGHTS 8, GL_MAX_TEXTURE_SIZE 1024, GL_MAX_TEXTURE_STACK_DEPTH 10
        GL Extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture GL_EXT_bgra

    Read the article

  • Windows 7 boot problem on a Lenovo Thinkpad Z61m 9450HAG

    - by Matt Taylor
    I recently did a full upgrade of Windows 7 on my Thinkpad. Everything worked fine up until the second reboot (the first reboot, after some updates installed, worked OK). At the second reboot the system would just black-screen before the Windows logo appears. The disk/wireless/power/battery lights are all lit and the disk light is active (flickering). However, if I remove my battery and boot on mains power alone, it boots fine and quickly, and everything is OK. Any help on why this won't boot with the battery plugged in is greatly appreciated. I need to take this battery out on the road/trains, etc.
    A little more detail on this story. The battery I had inserted when doing the (failed) boot was a long-life battery. I have not tried inserting this battery while Windows is logged in. I have another (normal-life) battery that I have charged up within Windows. It has just got to 100% and I am about to reboot with it in. I am using the Lenovo power manager to diagnose the battery - all seems OK. I will report back shortly as to the outcome.
    OK, so I chose the reboot option from within Windows. The machine seemed to shut down okay, but then stalled. It didn't turn off completely and didn't reboot, but just sat, with the fan humming, somewhere in between! I had to hold the power button in for a few seconds until the fan stopped and then hit the power button again to boot the machine from fresh. One good thing: with this battery (the normal one) it booted into Windows 7 the first time on battery! So, now I have rebooting issues. I have 3 errors in the event log:
      A timeout was reached (30000 milliseconds) while waiting for the lxdxCATSCustConnectService service to connect.
      The lxdxCATSCustConnectService service failed to start due to the following error: The service did not respond to the start or control request in a timely fashion.
      The following boot-start or system-start driver(s) failed to load: cdrom
    Any thoughts?

    Read the article

  • mod_rpaf with apache error_log

    - by Camden S.
    I'm using mod_rpaf with Apache 2.4 and it's working properly (showing the real client IPs) in my Apache access_log... but not in my error_log. My error log just shows the client IP address of the proxy server (my load balancer in this case). Here's an example of what I see in my error_log, where 123.123.123.123 is the IP of my load balancer/proxy:
      ==> /usr/local/apache2/logs/error_log <==
      [Tue Jun 05 20:24:31.027525 2012] [access_compat:error] [pid 9145:tid 140485731845888] [client 123.123.123.123:20396] AH01797: client denied by server configuration: /wwwroot/private/secret.pdf
    The exact same request produces the following in my access_log, where 456.456.456.456 is a real client IP (not the IP of the load balancer):
      456.456.456.456 - - [05/Jun/2012:20:24:31 +0000] "GET /wwwroot/private/secret.pdf HTTP/1.1" 403 228 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:12.0) Gecko/20100101 Firefox/12.0"
    Here's my httpd.conf entry:
      # RPAF
      LoadModule rpaf_module modules/mod_rpaf-2.0.so
      RPAFenable On
      RPAFproxy_ips 127.0.0.1 123.123.123.123
      RPAFsethostname On
      RPAFheader X-Forwarded-For
    What do I need to do to get the real IP addresses showing in my Apache error_log?
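    In case it's a useful comparison (an untested sketch, not a confirmed fix for the mod_rpaf setup above): Apache 2.4 also ships mod_remoteip, which rewrites the client address recorded for the request from X-Forwarded-For, so the [client ...] field in the error log can pick up the real IP as well. A minimal fragment might look like:

      LoadModule remoteip_module modules/mod_remoteip.so
      RemoteIPHeader X-Forwarded-For
      RemoteIPInternalProxy 123.123.123.123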

    Read the article

  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers; however, for several days I have been stumped by one thing. This article: http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533 states: "Keep in mind that when you use expires header the files are cached in the browser until it expires so do not use this on files that changes frequently." Other sites I've read indicate the same. But this doesn't seem to be true. I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from this article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is? I have verified that the image in question has expires headers set on it:
      Request Headers:
        Host: domain.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept: image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://domain.com/index.php
        Cookie: __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none);PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight: activate
        If-Modified-Since: Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control: max-age=0
      Response Headers:
        Date: Mon, 26 Mar 2012 22:06:50 GMT
        Server: Apache/2.2.3 (CentOS)
        Connection: close
        Expires: Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control: max-age=2592000
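    For context, Expires/Cache-Control pairs like the ones above are typically produced by a mod_expires block along these lines (a generic sketch, not necessarily the exact rules on this particular server; 2592000 seconds is roughly one month):

      <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/png "access plus 1 month"
        ExpiresByType image/jpeg "access plus 1 month"
      </IfModule>

    Note also that the captured request already carries If-Modified-Since and Cache-Control: max-age=0, which is what a manual browser refresh typically sends, so a refresh revalidates with the server rather than serving silently from cache.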

    Read the article

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a squid proxy/caching server, and from there into my local wireless router. On the WAN side of the proxy server I have eth0 with address 208.78.∗∗∗.∗∗∗, and on the LAN side I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the squid server itself is also routed through the proxy/cache, and this is on purpose:
      # iptables forwarding
      iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
      iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A POSTROUTING -t nat -j MASQUERADE
      # iptables for squid transparent proxy
      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried doing:
      iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
      iptables -A INPUT -i eth0 -j REJECT
    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
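    One commonly used pattern for exactly this, offered as an untested sketch rather than a verified fix: accept loopback and established/related traffic (so replies to outbound connections still get in), accept the LAN on eth1 (the inside interface), and drop everything else arriving on eth0:

      iptables -A INPUT -i lo -j ACCEPT
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A INPUT -i eth1 -s 192.168.2.0/24 -j ACCEPT
      iptables -A INPUT -i eth0 -j DROP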

    Read the article

  • How do I stop panning on a monitor that supports a specific resolution?

    - by IronicMuffin
    Hi all, I've been battling this for a few days now. Any and all help is appreciated. I have a Planar monitor with a native res of 1280x1024. At one point, I had used PowerStrip to override "something" and set the res to 1600x1200, and it worked great. I then installed new Intel graphics drivers for my 86895g (or w/e model) video card, which screwed up whatever settings I had. If I set it to 1600x1200 this time, it would set the res correctly, but give me a 1280x1024 viewport, and the screen would pan when the mouse got to the edges of the screen. Absolutely not useful. OK, so I was limited to 1280x1024 now. W/e. Now... enter a new video card with two video ports. I have two monitors now and the latest nVidia drivers. I decided to try to get dual 1600x1200 going... and ended up screwing the original monitor up so much that it's now at 1280x1024, with a 1024x768 viewport and panning! Absolutely not usable now. So what I need, and can't seem to find on any forums, is help doing one or more of the following:
      Clearing all monitor/EDID info out of the Windows registry without corrupting the registry.
      Actually, correctly overriding the EDID values and getting my sweet res back.
      Some other way of getting back to at least dual 1280x1024 with NO panning.
    Note: my Device Manager shows 4 monitors for some reason, and my registry shows entries for all sorts of monitors that have been hooked up to the machine over the years. It's making it difficult to debug. Experience with PowerStrip would be helpful. I've been mucking with Phoenix EDID Designer and MonInfo as well, but I'm stumbling around in the dark with these.
      Windows XP SP2
      nVidia GeForce 6200
      nVidia drivers: v258.96
      Monitor: Planar PL 1910M
    Thanks!

    Read the article

  • Private staff network within public network

    - by pianohacker
    I'm the sysadmin at a small public library. Since I got here a few years ago, I've been trying to set up the network in a secure and simple way. Security is a little tricky; the staff and patron networks need to be separated, for security reasons. Even if I further isolated the public wireless, I'd still rather not trust the security of our public computers. However, the two networks also need to communicate; even if I set up enough VMs so they didn't share any servers, they need to use the same two printers at the very least. Currently, I'm solving this with some jerry-rigged commodity equipment. The patron network, linked together by switches, has a Windows server connected to it for DNS and DHCP and a DSL modem for a gateway. Also on the patron network is the WAN side of a Linksys router. This router is the "top" of the staff network, and has the same Windows server connected on a different port, providing DNS and DHCP, and another, faster DSL modem (separate connections are very useful, especially as we heavily depend on some cloud-hosted software). tl;dr: We have a public network, and a NATed staff network within it. My question is; is this really the best way to do this? The right equipment would likely make my job easier, but anything with more than four ports and even rudimentary management quickly becomes a heavy hit on our budget. (My original question was about an ungodly frustrating DHCP routing issue, but I thought I'd ask whether my network was broken rather than asking about the DHCP problem and being told my network was broken.)

    Read the article

  • What can I do to prevent system power downs?

    - by Joe King
    Yesterday I was given my brother's old laptop - Core i7, 2.67GHz, 8GB RAM, 128GB SSD, Win7 64-bit. It's a Sony Vaio Z11, approx 18 months old. When running something computationally intensive, the fan starts up and after about 30 secs it just powers itself down with no warning. I guess it is overheating. There is nothing in the event logs to suggest what is causing it - the only thing I see is "the last system shutdown was unexpected" or something similar. This is a problem for me because I use a lot of number crunching apps, which pretty much makes it useless to me. I would like to know if there is anything I can do, other than the obvious things I've done already - open it up and clean out dust, re-install the OS. According to my brother, this problem started about 6 months ago when it was already outside warranty. If it's just used for simple things - web browsing, word processing etc. - the problem does not occur. Any ideas for what I can do to fix this? Update: I found that the laptop has 2 hardware settings for graphics: Speed and Stamina - the Speed setting seems to use an nVidia GeForce GT 330M, while the Stamina setting uses an Intel chipset. With the setting on Speed, I can hear the fan the whole time, and the system powers down after a short while (5-10 mins) even just doing basic tasks (browsing this site for example), but doesn't shut down if I just leave it switched on. In this mode it also sometimes just freezes the screen and I have to power it off myself. However, on the Stamina setting it only powers down when doing number crunching and never freezes the screen.

    Read the article

  • Setting up a network where packets are traced

    - by Marcus
    My situation is the following: I have an internet connection which is shared between several people, and, more or less obviously, some of them are using it to download illegal stuff. Since I'm the owner of the connection, I want to avoid being sued. I don't want to prevent people from doing the things they want, but I want to be legally safe. I have relatively little competence in network administration, so I was wondering:
      Is it possible to set up a network where the source and destination of the packets are logged? I would use this to prove, in case of a lawsuit, that the traffic was coming from a given machine.
      If the idea is feasible, is there any wireless router on which I can install Linux, where I can install the packet sniffer?
      How much space could the logs take (containing only the timestamp/source/destination), per GB of traffic? A very rough estimate would be very helpful.
      If a machine on my network is sending BitTorrent packets to a certain IP, would this log be able to reflect the time, source IP and destination IP? I assume that the torrent data itself would be encrypted and un-decryptable.
    Am I missing something? Is there a better strategy? Any pointer to documentation would be helpful as well - I would use it as a starting point.
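    As an illustration of the kind of logging being asked about (a generic sketch, not a recommendation on the legal question): on a Linux gateway, netfilter can log the metadata of every new connection that is forwarded through it, e.g.:

      # log timestamp, source and destination of each new forwarded connection to syslog
      iptables -A FORWARD -m state --state NEW -j LOG --log-prefix "FWD-NEW: " --log-level info

    Each logged line is on the order of a few hundred bytes, so the log volume depends on how many connections are opened rather than on how many GB are transferred.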

    Read the article

  • Is UPS worthwhile for home equipment?

    - by Jon Skeet
    Over the years, I've had to throw away quite a few bits of computing equipment (and the like):
      Several ADSL routers with odd symptoms (losing wireless connections, losing wired connections, DHCP failures, DNS symptoms, etc.)
      Two PVRs spontaneously rebooting and corrupting themselves (despite the best efforts of the community to diagnose and help)
      One external hard disk still claiming to function, but corrupting data
      One hard disk as part of a NAS RAID array "going bad" (as far as the NAS was concerned)
    (This is in addition to various laptops and printers dying in ways unrelated to this question.) Obviously it'll be impossible to tell for sure from such a small amount of information, but might these be related to power issues? I don't currently have a UPS for any of this equipment. Everything is on surge-protected gang sockets, but there's nothing to smooth out a power cut. Is a home UPS really viable and useful? I know there are some reasonably cheap UPSes on the market, but I don't know how useful they really are. I'm not interested in keeping my home network actually running during a power cut, but I'd like it to power down a bit more gracefully if the current situation is putting my hardware in jeopardy.

    Read the article

  • Missing MB on a GPT partioned SSD

    - by pisswillis
    I recently installed Arch Linux on an Intel 40GB SSD. I used GPT for partitioning (via GNU parted) and created the following partitions:
      /dev/sda1 : 1 MB, no FS, flag=bios_grub
      /dev/sda2 : 30MB, /boot, ext2, flag=boot
      /dev/sda3 : 20GB, /home, ext4
      /dev/sda4 : ~20GB, /, ext4
    After struggling to install grub2 from the livecd environment (which I finally did via grub-install /dev/sda --root-directory=/mnt/ --no-floppy --force) I got a working system. However, when I was inspecting disk usage with df I noticed that my home partition had around 170MB of used space on it. This surprised me because the only things on /home were one user's .bashrc, .bash_history, and .lesshst. du confirmed that only a few KB of space was being used on /home. Why does df report approximately 170MB being used when du does not? Is this space "gone forever", or can I regain it by repartitioning and/or reinstalling? When I installed grub2 it said something along the lines of "your embed area is too small", and that I could "use BLOCKLISTS, but BLOCKLISTS are UNRELIABLE". In the end the only way I could get a system booting from the SSD was to use blocklists via the grub-install --force flag. Is this related to the mysterious missing 170MB? Thanks
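    One way to see where df's figure comes from (a hypothetical check, on the assumption that the gap is ext4 metadata and journal overhead, which df counts as used space but du never sees because du only sums files):

      df -h /home
      sudo tune2fs -l /dev/sda3 | grep -iE 'block count|reserved|journal'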

    Read the article

  • My client's solution of a Windows SBS 2011 VM on an Ubuntu host and VirtualBox is pinning the host CPU

    - by Scott Stamp
    Here's my situation: I've got a client hosting two servers (one of them a VM), with the host running VMware Zimbra and the guest running Windows Small Business Server 2011. Unfortunately, the person before me configured this setup as follows.
    Host:
      Ubuntu Desktop Edition 10.04 (I know, again, not my choice) running VMware Zimbra
      8GB of RAM
      On-board RAID1 of two 320GB Seagate Barracuda drives for the OS
      Software RAID5 of four 500GB WD Caviar Black drives on MDADM for bulk storage (sorry, I don't know the model #)
      A relatively competent quad-core Intel Core i7 CPU from the Nehalem architecture (not suspicious of this as the bottleneck)
    Guest:
      Windows Small Business Server 2011
      4GB of RAM
      Host-equivalent CPU allocation
      VDI file for OS hosted on the on-board RAID, VDI file for storage hosted on the on-board RAID
    For some reason the VM locks up when sitting nearly idle, and the VirtualBox process reports values of 240%+ in top (how is that even possible?!). Anyone have any ideas or suggestions? I'm totally stumped on this one. Happy to provide whatever logs you'd like to take a look at. Ideally I'd drop VirtualBox and provision this with VMware Workstation, but the client has objected to the (very nominal) costs involved. If hardware needs to be purchased to help, it will be, but we're considering upgrades a last resort at this time. Thanks in advance! *fingers crossed*

    Read the article

  • Process to replace motherboard and keep CPU

    - by jolivier
    My motherboard has been diagnosed with the Sandy Bridge issue (http://vip.asus.com/eservice/changeSandybridge_MB.aspx?slanguage=en-us), so my reseller has asked me to send the motherboard back and receive a new one compatible with the previous one. My problem is that I have a not-cheap Intel CPU currently on it, with its standard heatsink/fan (HSF). I would obviously like to keep it and plug it into the new motherboard, but I am quite worried about the thermal paste. I was planning to:
      Remove the CPU and the HSF together (I think they are stuck to each other).
      Try to separate the CPU and the HSF (I'm not sure how).
      Clean both surfaces.
      When the new motherboard is here, put the CPU back on it.
      Apply new thermal paste to the CPU.
      Attach the HSF again.
    Do you see any problem with this process? Recommendations? Is it possible to keep the CPU and the HSF together for the whole process, or is it impossible to plug the CPU back into the new motherboard in that case? Thanks in advance for your answers. Olivier

    Read the article

  • Ubuntu NBR karmic boot freezes at fsck from util-linux-ng 2.16

    - by Bluebill
    I have a netbook (eMachines e250 - equivalent to an Acer Aspire One) with Ubuntu NBR 9.10 installed on it. Every other cold boot freezes at the following error message:
      fsck from util-linux-ng 2.16
    There is no disk activity, no activity whatsoever. I have left the machine sitting for over an hour and nothing. It takes a couple of hard resets to be able to boot properly. Once it boots, everything works great (wireless, suspend/resume, etc.)! I have spent the last couple of weeks researching the problem and the only thing that seems to work is setting nolapic in the boot string in grub - then it boots every time. Unfortunately, nolapic disables the second core and causes problems with suspend/resume. At first I thought it was an fsck problem with the first partition on the hard disk, as it is a hidden NTFS partition containing the Windows XP recovery information, so in /etc/fstab I set that partition to be ignored by fsck. This didn't seem to do anything. I have these partitions:
      /dev/sda1 - an NTFS recovery partition
      /dev/sda2 - /boot
      /dev/sda3 - swap
      /dev/sda5 - /
      /dev/sda6 - /home
    I am running kernel version 2.6.31-19-generic and have all the patches (as indicated by Update Manager). I also have no splash screen, so I can see the boot progress. I have only been using NBR since January; I have been using Ubuntu on my desktop since last June (2009-06). What logs should I be looking at? Is there a log for failed boots?
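    For reference, "ignored by fsck" in /etc/fstab usually means setting the sixth (pass) field to 0, something like the line below (the mount point and options here are made up for illustration):

      # <file system>  <mount point>   <type>  <options>   <dump>  <pass>
      /dev/sda1        /mnt/recovery   ntfs    ro,noauto   0       0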

    Read the article

  • Centos Server/MySQL server problem

    - by Jake
    Hello all, I currently run a website that gets about 15,000-20,000 hits a day. We run a very active forum hosted with vBulletin software: 4.5 million posts, 80,000 threads, and about 11,000 members, of which just under a third are active all the time. I am running an Intel Xeon quad core (2.13GHz) with 4GB of RAM, CentOS 5.5, and DirectAdmin on the box to manage it. I also run the current stable versions of Apache, MySQL, and PHP. This is the only site hosted on this machine. During random times of day, sometimes when it gets busy, the server load can get up to around 20, but this can also happen when we only have around 200 users active. I don't understand what is causing these problems. Sometimes pages generate in 0.2 seconds; other times it takes 5-8 seconds. I have customized the my.cnf file and that has not helped anything. I didn't know where else to turn, so if anyone has any suggestions please let me know. Thank you in advance.
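    If it helps narrow things down, one typical first step is to enable MySQL's slow query log and see what runs during the load spikes. A hypothetical my.cnf fragment (the option names changed between MySQL 5.0 and 5.1, so check which applies to your build):

      [mysqld]
      # MySQL 5.0 syntax; on 5.1+ use slow_query_log = 1 and slow_query_log_file = ... instead
      log-slow-queries = /var/log/mysql-slow.log
      long_query_time  = 2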

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for wifi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is being detected; however, iwlist scan returns
      wlan0 Failed to read scan data : Network is down
    and the network-manager log says
      <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
      <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k')
      <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1
      <info> (wlan0): now managed
      <info> (wlan0): device state change: 1 -> 2 (reason 2)
      <info> (wlan0): bringing up device.
      <info> (wlan0): preparing device.
      <info> (wlan0): deactivating device (reason: 2).
      supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed
      <info> modem-manager is now available
      <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files
      <info> Trying to start the supplicant...
      <info> (wlan0): supplicant manager state: down -> idle
      <info> (wlan0): device state change: 2 -> 3 (reason 0)
      <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.
    The exact same configuration works with the generic kernel. Is anything except wifi and the crypto APIs needed for wi-fi to work?
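    For comparison, a sketch of the Kconfig options usually involved in this stack on a 2.6.32-era kernel (treat the exact symbol names as assumptions to verify against your own .config rather than a definitive list):

      CONFIG_WIRELESS=y
      CONFIG_CFG80211=y
      CONFIG_CFG80211_WEXT=y        # wpa_supplicant's wext driver talks to this
      CONFIG_MAC80211=y
      CONFIG_ATH5K=m
      CONFIG_RFKILL=y
      # ciphers mac80211 uses for WPA/WPA2 (TKIP/CCMP)
      CONFIG_CRYPTO_ARC4=y
      CONFIG_CRYPTO_ECB=y
      CONFIG_CRYPTO_AES=y
      CONFIG_CRYPTO_MICHAEL_MIC=y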

    Read the article

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with TrueCrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals:
      Maximum, unimpacted performance for hi-def video editing; encryption of the video itself is not required.
      Encrypt the system partition, using TrueCrypt, for web/email privacy, etc. in the event of loss.
    In other words, I want to selectively encrypt the hard drive - i.e. encrypt the system partition but not impact the maximum performance that would otherwise be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set up the video applications to point at that. Assuming they use that partition only for their workspace, and not the encrypted system partition, I should expect no performance hit. Would I be correct? I guess it might depend on the application - whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or has to be installed on the C: system partition. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, CyberLink, Nero, etc. I have an HP laptop with an Intel i7 quad-core (8 threads) at 1.6GHz (up to 2.8GHz turbo boost), 4GB RAM, a 7200rpm SATA drive and nVidia graphics. I've read the excellent posting about the general performance impact of TrueCrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.

    Read the article

  • Windows 7 restarts while being idle

    - by Ondrej Slinták
    My Windows 7 almost always restarts when I leave it idle for ~20-30 minutes. It happened randomly before, but lately, if I leave the computer, I can be sure it's going to restart after those 30 minutes. It never happens when I play games or work, though - just when it's idle. It's a fresh install of Windows 7 64-bit. I also had problems while installing it: it always crashed while finalizing the install and I had to reinstall again. Eventually it installed on the 3rd or 4th try, after I deleted all of my partitions and added them again. I thought it might have been a hardware problem, but temperatures seem to be okay and I have no idea how to track down what might be causing it. Any ideas? I'm running Windows 7 64-bit on:
      Gigabyte EX58-UD4P
      Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
      NVIDIA GeForce GTX 260
      6GB of DDR3 1066MHz RAM
      WDC WD1001FALS-00J7B0 1TB SATA II
    I have a very bad feeling it might be something with the HDD and its compatibility with Windows 7, as I didn't have these problems during the year I ran Vista. Edit: I checked the Event Viewer critical errors from this night. The PC restarted the first time at 11:12pm, then at 3:06am, and since then every ~20 minutes until I came back to it. The error message is:
      The system has rebooted without cleanly shutting down first. This error could be caused if the system stopped responding, crashed, or lost power unexpectedly. Source: Kernel-Power

    Read the article

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.
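    Rough numbers behind that reasoning, for what it's worth (back-of-the-envelope figures, assuming largely sequential reads):

      single 1 Gb/s link:   ~125 MB/s theoretical, more like 100-115 MB/s in practice
      4-drive RAID 10 read: roughly 2-4 x 83.87 MB/s, i.e. about 170-335 MB/s
      2 x 1 Gb/s bonded:    ~250 MB/s aggregate across clients, but any single
                            client stream is still limited to one link (~125 MB/s)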

    Read the article

  • Is it possible to have DisplayLink USB display hotplugging with Xorg 1.13 on kernel 3.4?

    - by lkraav
    keithp seems to be the only one on the interwebs to have written anything about the subject, and he worked with 3.5_rc. I don't want to go above 3.4 at the moment for various stability reasons and am trying to see whether I can get this to work. Xorg 1.13 recognizes the display on connection, the "udl" module is loaded, the xorg-video-modesetting driver also loads, and the display lights up. So everything seems to be good. I emerged xrandr-9999 (not many changes on top of 1.3.5):
      $ xrandr --listproviders
      Providers: number : 2
      Provider 0: id: 69 cap: 0x0 crtcs: 2 outputs: 4 associated providers: 0 name:Intel
      Provider 1: id: 338 cap: 0x0 crtcs: 1 outputs: 1 associated providers: 0 name:modesetting
    But I can't get any further, just like this guy:
      $ xrandr --setprovideroutputsource 338 69
      X Error of failed request:  BadValue (integer parameter out of range for operation)
        Major opcode of failed request:  139 (RANDR)
        Minor opcode of failed request:  35 ()
        Value in failed request:  0x152
        Serial number of failed request:  11
        Current serial number in output stream:  12
      $ xrandr --setprovideroutputsource 1 0
      X Error of failed request:  148
        Major opcode of failed request:  139 (RANDR)
        Minor opcode of failed request:  35 ()
        Serial number of failed request:  11
        Current serial number in output stream:  12
    Any thoughts?

    Read the article
