Search Results

Search found 7992 results on 320 pages for 'monitor tricks'.


  • Looking for an application that scrolls or pans netbook screens running Windows

    - by therobyouknow
    I'm looking for a Windows 7 and XP compatible desktop panning/scrolling tool. This is to solve a problem where some applications (for example MSN) have settings/preferences windows that are not resizeable. I have a netbook with a small maximum screen resolution (1024x600). These fixed, non-resizeable windows are too large for the screen, so I cannot see all of the items on them, particularly the OK button to save settings. What I would like is a desktop scrolling/panning tool where, if I move my mouse pointer to any edge of the display, it pans to show the region of the too-large fixed window that I could not see. I use Samsung N110 and Toshiba NB100 netbooks.

    I'm looking for:
    - A general program that provides desktop panning/scrolling/expanded resolution so that all regions of a fixed, non-resizeable window can be reached
    - Preferably a program that is not tied to specific graphics hardware, though I will accept a solution that works on both of the above machines

    I'm NOT looking for (i.e. unsatisfactory answers to similar questions that I've already searched and found):
    - Advice on what programs to use that DON'T have the problem of fixed windows
    - Alternative operating system solutions
    - Plugging in an external monitor with a larger resolution - I use this option, but I need a solution for when one is not available, e.g. while travelling
    - Advice about not using small-screen netbooks - I enjoy their compact convenience
    - Advice about changing the DPI settings in the Control Panel Display settings
    - Advice about guesswork with the Tab key to move the focus to the off-screen item I cannot see

    Thank you in advance.

    Read the article

  • PC dies when running at 100% CPU

    - by user155631
    I recently wrote some Java code to generate images of the Mandelbrot set (fractal). I made use of the new Fork/Join facility in Java 7 to run separate threads on all four cores (2 real, 2 virtual) simultaneously, using a large number of iterations for greater accuracy. The problem is, the process runs fine for about a minute, and then it's as if someone has pulled the plug and the PC just dies.

    I thought it must be the CPUs overheating, so I ran Real Temp to monitor the temperature. It's an Intel i3 processor. I can see the temperature creeping up to 70 degrees, where it seems to level off and run for about another 30 seconds before the machine dies. According to Real Temp, there's still a gap of 35 degrees between the actual temperature and TJ Max. I also tried disabling "CPU TM function" in the BIOS, but the problem still occurs. A colleague suggested that it might be a power supply problem, so I borrowed a more powerful PSU (I can't remember its wattage, but it's higher than mine, which is 500W). The exact same thing still happens, though. Can anyone suggest what the problem might be, or what I can try next?

    Read the article

  • Bandwidth monitoring with iptables for non-router machine

    - by user1591276
    I came across this tutorial that describes how to monitor bandwidth using iptables. I wanted to adapt it for a non-router machine, so I want to know how much data is going in/coming out and not passing through. Here are the rules I added:

        iptables -N ETH0_IN
        iptables -N ETH0_OUT
        iptables -I INPUT -i eth0 -j ETH0_IN
        iptables -I OUTPUT -o eth0 -j ETH0_OUT

    And here is a sample of the output:

        user@host:/tmp$ sudo iptables -x -vL -n
        Chain INPUT (policy ACCEPT 1549 packets, 225723 bytes)
            pkts   bytes  target    prot opt in    out   source      destination
             199   54168  ETH0_IN   all  --  eth0  *     0.0.0.0/0   0.0.0.0/0

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
            pkts   bytes  target    prot opt in    out   source      destination

        Chain OUTPUT (policy ACCEPT 1417 packets, 178128 bytes)
            pkts   bytes  target    prot opt in    out   source      destination
             201   19597  ETH0_OUT  all  --  *     eth0  0.0.0.0/0   0.0.0.0/0

        Chain ETH0_IN (1 references)
            pkts   bytes  target    prot opt in    out   source      destination

        Chain ETH0_OUT (1 references)
            pkts   bytes  target    prot opt in    out   source      destination

    As seen above, there are no packet and byte values for ETH0_IN and ETH0_OUT, which is not the same result as in the tutorial I referenced. Is there a mistake that I made somewhere? Thanks for your time.
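
    One possible explanation (a guess, not a confirmed diagnosis): iptables -vL prints counters per rule, so an empty user-defined chain shows no pkts/bytes lines at all, while the traffic is in fact counted on the ETH0_IN/ETH0_OUT jump rules in INPUT and OUTPUT (199/54168 and 201/19597 above). The Python sketch below shows one workaround - appending a single RETURN rule to each chain so there is a rule whose counters accumulate, then reading those counters back. It assumes root privileges and a reasonably recent iptables (for the -C check flag); the helper names are made up for illustration.

        #!/usr/bin/env python
        # Sketch only: give each accounting chain one RETURN rule so that
        # "iptables -nvxL <chain>" has a per-rule counter line, then read it back.
        import subprocess

        CHAINS = ["ETH0_IN", "ETH0_OUT"]

        def ensure_counter_rule(chain):
            # -C returns non-zero if the rule is absent; append it in that case.
            if subprocess.call(["iptables", "-C", chain, "-j", "RETURN"]) != 0:
                subprocess.check_call(["iptables", "-A", chain, "-j", "RETURN"])

        def read_counters(chain):
            # -x exact byte counts, -v include counters, -n no DNS lookups
            out = subprocess.check_output(["iptables", "-nvxL", chain]).decode()
            for line in out.splitlines()[2:]:               # skip the two header lines
                fields = line.split()
                if len(fields) >= 2:
                    return int(fields[0]), int(fields[1])   # packets, bytes
            return 0, 0

        if __name__ == "__main__":
            for chain in CHAINS:
                ensure_counter_rule(chain)
                pkts, byts = read_counters(chain)
                print("%s: %d packets, %d bytes" % (chain, pkts, byts))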

    Read the article

  • pdns-recursor allocates resources to non-existing queries

    - by azzid
    I've got a lab server running pdns-recursor. I set it up to experiment with rate limiting, so it has been resolving requests openly from the whole internet for weeks. My idea was that sooner or later it would get abused, giving me a real use case to experiment with. To keep track of the usage I set up Nagios to monitor the number of concurrent queries to the server. Today I got notice from Nagios that my specified limit had been reached. I logged in to start trimming away the malicious queries I was expecting; however, when I started looking at it I couldn't see the expected traffic. What I found is that even though I have over 20 concurrent queries registered by the server, I see no requests in the logs. The following command describes the situation well:

        $ sudo rec_control get concurrent-queries; sudo rec_control top-remotes
        22
        Over last 0 queries:

    How can there be 22 concurrent queries when the server has 0 queries registered?

    EDIT: Figured it out! To get top-remotes working I needed to set:

        #################################
        # remotes-ringbuffer-entries    maximum number of packets to store statistics for
        #
        remotes-ringbuffer-entries=100000

    It defaults to 0, storing no information on which to base top-remotes statistics.
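
    For what it's worth, the "set up Nagios to monitor the number of concurrent queries" part can be done with a small check script around the same rec_control command used above. The sketch below is illustrative only: the thresholds are arbitrary, and the plugin is assumed to run on the recursor host with permission to talk to the control socket.

        #!/usr/bin/env python
        # Hypothetical Nagios-style check for pdns-recursor concurrent queries.
        import subprocess
        import sys

        WARN, CRIT = 10, 20   # example thresholds; adjust to taste

        def main():
            try:
                out = subprocess.check_output(["rec_control", "get", "concurrent-queries"])
            except (OSError, subprocess.CalledProcessError) as exc:
                print("UNKNOWN: rec_control failed: %s" % exc)
                return 3
            value = int(out.decode().strip())
            status, code = "OK", 0
            if value >= CRIT:
                status, code = "CRITICAL", 2
            elif value >= WARN:
                status, code = "WARNING", 1
            # Standard plugin output: STATUS: message | perfdata
            print("%s: %d concurrent queries | concurrent_queries=%d;%d;%d"
                  % (status, value, value, WARN, CRIT))
            return code

        if __name__ == "__main__":
            sys.exit(main())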

    Read the article

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application; however, the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Using the Windows scheduler also succeeds. Basically, the third-party software schedules the .BAT file and shows success within its own user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch. The third-party software shows success because it was able to call the .BAT file; it has no control over the other EXEs called within the script. I'm able to run a simple .BAT file in the third-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE, which launches Excel to create a file at a location. The .BAT file calls something.exe, which then calls Excel.exe:

        C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot

    Do you think it's a permissions issue? I used Process Monitor to check for Access Denied errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Win2008 64-bit.

    Read the article

  • Server 2008 RAID 5 Write Speeds

    - by Solipsism
    I recently configured a RAID 5 volume in Server 2008 with 4 disks. These disks are connected through a SATA expansion card that uses PCIe. This morning I checked and they had finally finished synchronizing, so I tried to do some speed tests. Copying off the disks started pretty much fine - speeds began at 125 MB/s, then trailed down to about 70 MB/s, which I found odd but not worrying. Writing TO the disks, however, is a completely different story. I attempted to copy some of my VM host ISOs onto the disks (roughly 2-4 GB apiece) and this resulted in speeds of approximately 10 MB/s. I tried copying both from a local disk (connected directly to the motherboard) and from another server over the gigabit network, and the results were the same. I checked Performance Monitor while transferring the files and the only thing that stuck out was that memory hard faults caused by explorer.exe shot up to 6,000 per minute (spiking around 200/s). The system is running 2 GB of DDR667 ECC RAM and a quad-core 2.3 GHz Opteron. Is there anything I can do to fix this performance issue (buy more RAM, move the drives to a faster box, etc.), or am I just stuck as long as I stick with Windows?
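
    If it helps to take Explorer out of the picture, a quick way to get a repeatable number is a small sequential-write test like the sketch below. It is only an illustration: the path, block size and file size are arbitrary, and it measures raw sequential writes rather than the ISO-copy workload described above.

        # Minimal sequential-write benchmark sketch; path and sizes are arbitrary.
        import os
        import time

        TARGET = r"E:\bench.tmp"            # hypothetical path on the RAID 5 volume
        BLOCK = 4 * 1024 * 1024             # 4 MiB per write
        TOTAL = 2 * 1024 * 1024 * 1024      # 2 GiB test file

        buf = os.urandom(BLOCK)
        start = time.time()
        with open(TARGET, "wb", 0) as f:    # unbuffered binary writes
            written = 0
            while written < TOTAL:
                f.write(buf)
                written += BLOCK
            os.fsync(f.fileno())            # make sure the data actually hit the disks
        elapsed = time.time() - start
        print("%.1f MB/s" % (TOTAL / elapsed / (1024 * 1024)))
        os.remove(TARGET)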

    Read the article

  • Windows 7 Paging file apparently not being used

    - by Daniel F.
    I'm running Windows 7 Home Premium 32-bit on a motherboard with 24 GB of RAM. Of those 24 GB, 20 GB are assigned as a RAM disk via ASRock XFastRAM, with the drive letter X: assigned to it. On X:\ I'm storing the temporary files folder as well as pagefile.sys. Pagefile.sys is 6 GB in size. X: usually has around 14 GB free, so the temporary files are negligible; it's mostly the browsers storing their caches there.

    Now my issue is that Firefox is crashing a lot on me. No error message pops up, but I know it's because it runs out of memory. I could kind of live with that, but now that I've switched from Eclipse to Android Studio, I know I'm in trouble, because Java isn't capable of allocating, and Android Studio, together with the Java instances it launches, is quite a memory hog. So I tried to figure out what's wrong, and apparently Windows isn't swapping memory out to the paging file. While my applications are crashing (Firefox) or not starting (Java VMs), the paging file constantly sits at only around 15% usage (checked with Performance Monitor). 15% equals roughly 1 GB.

    I know the correct solution would be to switch to 64-bit Windows, but I had to use the 32-bit version because of driver issues I had about two years ago, and I guess I'd have them again if I reformatted and installed the 64-bit version. Also, the machine is running quite stably; the only issue is memory, so I'd like to use it as it is (with the apps installed and configured). Is there a way to make Windows use the paging file more efficiently? None of my processes require more than 1 GB; I'd just like it to swap out some seldom-used stuff, like GoogleCrashHandler.exe, in order to have more physical memory available. Is that possible?

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend, which handle the request. These tornado servers take about 5-10 ms to respond to one request, but that is only the internal compute time of each request. I want to put this under test: I want to measure the response time from the ELB to the upstream and back, and how it varies as the QPS throughput is increased, then plot a graph of time vs. QPS vs. latency and other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the figures out? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
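
    As a rough illustration of the "script from within the server" idea, the sketch below times a POST against an endpoint at a fixed interval and appends latency samples to a log that can later be correlated with QPS, CPU and memory. The URL, payload and interval are placeholders, not taken from the setup above, and where the script runs determines what it measures: run it from outside to include the ELB hop, and from inside the instance to isolate the backend time.

        # Rough latency self-monitor sketch; URL, payload and interval are placeholders.
        import time
        try:
            from urllib.request import urlopen, Request   # Python 3
        except ImportError:
            from urllib2 import urlopen, Request           # Python 2

        URL = "http://my-elb-endpoint.example.com/api"     # hypothetical ELB endpoint
        PAYLOAD = b'{"ping": true}'
        INTERVAL = 5   # seconds between samples

        def sample_once():
            req = Request(URL, data=PAYLOAD,
                          headers={"Content-Type": "application/json"})
            start = time.time()
            resp = urlopen(req, timeout=30)
            resp.read()
            return (time.time() - start) * 1000.0          # milliseconds

        if __name__ == "__main__":
            with open("latency.log", "a") as log:
                while True:
                    try:
                        log.write("%f %.2f\n" % (time.time(), sample_once()))
                    except Exception as exc:               # record failures too
                        log.write("%f ERROR %s\n" % (time.time(), exc))
                    log.flush()
                    time.sleep(INTERVAL)

    For the load-vs-latency curve itself, a dedicated load generator pointed at the ELB is the usual approach; a single-probe script like this only gives the baseline.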

    Read the article

  • Windows disk change monitoring for malware analysis

    - by SuperDuck
    Not sure if this question belongs here, because it has some relation to both Server Fault (system backups) and Stack Overflow (software analysis). I'm looking for a solution to monitor disk changes on a Windows system and selectively revert them:

    - It should be able to handle live files like registry hives, so it may need to be offline backup software.
    - It shouldn't silently pass over files the current admin user doesn't have permissions on (files with no permission entries, or owned by the 'system' user).
    - Registry change tracking would be a bonus, but is not a requirement.

    I use virtual machines for malware analysis; there isn't even a solution for listing file changes in disk snapshot files (delta VMDKs). I currently use Ashampoo for monitoring changes. Though it's the best among similar tools, it's not good software and hasn't really evolved across the many 'platinum' and 'deluxe' versions released in the last 10 years (it even used non-resizable windows until the latest version). The real problem is that it misses some disk/registry changes. Perhaps it only compares modification dates and doesn't catch a change if the dates are preserved. So I think the solution should compare files using hashes, or at least file sizes. There are numerous backup programs out there and I'm sure one of them can handle this, offline or online.
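
    The hash-comparison idea can be prototyped with very little code. The sketch below is offered only as an illustration, not as a replacement for a proper tool: it walks a tree, records size and SHA-256 per file, and diffs two snapshots. It deliberately reports unreadable files instead of skipping them, but it cannot handle locked live files such as registry hives, and all paths and file names are made up.

        # Hash-based change scanner sketch: snapshot before, run the sample,
        # snapshot after, then diff the two snapshot files.
        import hashlib
        import json
        import os
        import sys

        def snapshot(root):
            result = {}
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    try:
                        h = hashlib.sha256()
                        with open(path, "rb") as f:
                            for chunk in iter(lambda: f.read(1 << 20), b""):
                                h.update(chunk)
                        result[path] = [os.path.getsize(path), h.hexdigest()]
                    except (IOError, OSError):
                        result[path] = ["UNREADABLE", None]   # do not skip silently
            return result

        def diff(old, new):
            for path in sorted(set(old) | set(new)):
                if path not in old:
                    print("ADDED    %s" % path)
                elif path not in new:
                    print("REMOVED  %s" % path)
                elif old[path] != new[path]:
                    print("CHANGED  %s" % path)

        if __name__ == "__main__":
            # Usage: scan.py snapshot C:\ before.json
            #        scan.py diff before.json after.json
            if sys.argv[1] == "snapshot":
                with open(sys.argv[3], "w") as out:
                    json.dump(snapshot(sys.argv[2]), out)
            else:
                with open(sys.argv[2]) as a, open(sys.argv[3]) as b:
                    diff(json.load(a), json.load(b))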

    Read the article

  • Replacing the LCD panel in a netbook (Asus Eee PC 1005)

    - by neilfein
    Yesterday I was cleaning up and dropped my Asus Eee 1005PE. The screen is cracked inside (i.e. the cracks are visible only when it's on) and no longer works properly. I booted up with another monitor attached, and the computer itself is fine, but it needs a new screen. Best Buy wants at least $250 to repair it (that includes their $150 fee to breathe in the same room as the unit), and Asus was of no help at all (they're incredibly cagey and won't provide any money figures at all, not even the cost of the part). If replacing the LCD is no more trouble than replacing memory or a hard drive, I can do that. It's within my means to buy the part (18G241010402, a TFT LCD), but I'd like to know more about the procedure involved.

    My question: how does one replace the screen in this unit? Do I simply open the case and swap out the panel, or do I need to disassemble anything else to get to the screen? I don't want to order the part and then end up in a situation like this. Is the case screwed shut, or is it like an iPod where they glue things closed? I know enough about my abilities with a soldering gun not to attempt to solder tiny wires; would any of that be involved?

    Read the article

  • Disk Activity Alert Windows SBS 2003 on Dell PowerEdge 830 with Raid

    - by Ron Whites
    Background: I have a Dell PowerEdge 830 server running Windows SBS 2003. It has 4 GB of RAM and an ATA CERC SATA 6CH controller with three 160 GB drives in a RAID 5 configuration.

    The problem: I am seeing admin "Disk Activity Alert on Server" emails. These often occur when disk backups, defragmentation or other high disk usage is going on. Generally the server isn't overstressed. The Disk Alert emails say, in part:

        The following disk has low idle time, which may cause slow response time when reading or writing files to the disk.
        Disk: 0 C: F: D:
        Review the Disk Transfers/sec and % Idle Time counters for the PhysicalDisk performance object. If the Disk Transfers/sec counter is consistently below 150 while the % Idle Time counter remains very low (close to 0), there may be a problem with the disk driver or hardware.

    The questions I have:
    - With what utility can I review the Disk Transfers/sec and % Idle Time counters? It appears there is no utility for that on the server! (See the sketch below for one way to sample the counters.)
    - I think I may need to download a very large (two-DVD) Dell "OpenManage" utility to be able to monitor the RAID system and see what the problem is - is that true?
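
    The two counters named in the alert can be charted interactively with Performance Monitor (perfmon.msc), and they can also be sampled from the command line with typeperf, which ships with Windows Server 2003. The Python wrapper below is only a convenience sketch: the counter paths are taken from the alert text, while the _Total instance, interval and sample count are arbitrary choices.

        # Sample the counters mentioned in the alert via typeperf.
        import subprocess

        COUNTERS = [
            r"\PhysicalDisk(_Total)\Disk Transfers/sec",
            r"\PhysicalDisk(_Total)\% Idle Time",
        ]

        # -si 5  : sample every 5 seconds
        # -sc 60 : take 60 samples (about 5 minutes), then exit
        cmd = ["typeperf"] + COUNTERS + ["-si", "5", "-sc", "60"]
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
        for line in proc.stdout:
            print(line.rstrip())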

    Read the article

  • Win 8.1 Hyper-v Full Screen and VPN problems

    - by tr0users
    I need to connect to my office using Cisco VPN software (RSA). Once connected, all my internet traffic goes through the employer's VPN, which prevents me from listening to Spotify. As a way around this I created a Windows 2012 VM that I run in Hyper-V on my Windows 8.1 client. First I RDP to the VM, then I connect to the VPN. This forces the RDP session between my host laptop and the VM to close. I then open Hyper-V Manager and double-click my VM to get a connection back (not great, because I lose copy and paste this way).

    Previously when I opened my VM this way I would get full screen; I'm using a 1920x1080 monitor. Today when I re-open the connection to the VM, it is displayed in a window that uses maybe 75% of the screen. I have tried the menu option View\Full Screen Mode, but it only centres the screen and applies black borders around the outside. Could anyone please suggest how I may solve the VPN or full-screen problems? Thanks, Rob.

    Read the article

  • Full-text search locks up database - error 0x8001010e

    - by Stewart May
    Hi, we have a full-text catalog that is populated via a job every 15 minutes like so:

        ALTER FULLTEXT INDEX ON [dbo].[WorkItemLongTexts] START INCREMENTAL POPULATION

    We have encountered a problem where the database containing this catalog locks up. There are a couple of scenarios: we either see the job execute and the process hang with a wait type of UNKNOWN TOKEN, or we see another process hang with a wait type of MSSEARCH. Once this happens the job continues to run, but it informs us that the request to start a full-text index population is ignored because a population is currently active. Looking in the full-text log files, we see the following error each time these problems occur:

        2010-04-21 08:15:00.76 spid21s The full-text catalog health monitor reported a failure for full-text catalog "XXXFullTextCatalog" (5) in database "YYY" (14). Reason code: 0. Error: 0x8001010e (The application called an interface that was marshalled for a different thread.). The system will restart any in-progress population from the previous checkpoint. If this message occurs frequently, consult SQL Server Books Online for troubleshooting assistance. This is an informational message only. No user action is required.

    The only solution is to restart the SQL Server service and then the full-text service. This is now occurring on a daily basis, so any help would be appreciated.

    Read the article

  • Dual DC Time Service

    - by poconnor
    I believe I'm having an issue with my domain controllers and the time service. On my backup DC, I keep seeing a warning stating "The time service has stopped advertising as a time source because the local clock is not synchronized." Does this mean that my backup DC believes it's a time server? My PDC should be the time server, and I have gone through setting up the PDC as the time server. I was not around for the original setup of the time service with the old PDC and backup DC, but I believe the old PDC was the time server, so I set up the new PDC as the new time server when I decommissioned the old PDC. Is it possible that the backup DC was set up as the time server and it still thinks it's supposed to be giving out time to everyone?

    The registry on the PDC has type NTP; the registry on the backup DC has type NT5DS.

    Results of w32tm /monitor:

        Getting AD DC list for default domain...
        Analyzing:
        Warning: Reverse name resolution is best effort. It may not be correct since
        RefID field in time packets differs across NTP implementations and may not be
        using IP addresses.
        DC2.local..com [192.168.1.8:123]:
            ICMP: 1ms delay
            NTP: -0.6349491s offset from DC1.local..com
            RefID: DC1.local..com [192.168.1.9]
            Stratum: 4
        DC1.local..com *** PDC *** [192.168.1.9:123]:
            ICMP: 0ms delay
            NTP: +0.0000000s offset from DC1.local..com
            RefID: wwwco1test12.microsoft.com [65.55.21.20]
            Stratum: 3
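
    For an independent view of the offsets that doesn't depend on w32tm, a small SNTP probe can be pointed at each DC. The sketch below sends a single client-mode NTP packet and prints each server's offset relative to the local clock; the host names are copied from the question as placeholders, and the arithmetic ignores round-trip delay, so treat the result as approximate.

        # Minimal SNTP probe sketch; DC names are placeholders and round-trip
        # delay is ignored, so the reported offsets are only approximate.
        import socket
        import struct
        import time

        NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

        def ntp_offset(server, port=123, timeout=5):
            packet = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client request)
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            try:
                sock.sendto(packet, (server, port))
                data, _addr = sock.recvfrom(512)
            finally:
                sock.close()
            # Transmit timestamp occupies bytes 40-47 (seconds + fraction).
            secs, frac = struct.unpack("!II", data[40:48])
            server_time = secs - NTP_EPOCH_DELTA + frac / 2.0 ** 32
            return server_time - time.time()

        if __name__ == "__main__":
            for dc in ("DC1.local..com", "DC2.local..com"):   # placeholders
                try:
                    print("%-20s offset %+.4f s" % (dc, ntp_offset(dc)))
                except Exception as exc:
                    print("%-20s error: %s" % (dc, exc))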

    Read the article

  • Half of installed RAM is hardware reserved

    - by user968270
    After a rather arduous and convoluted series of problems that left me without a desktop for ~80 days, I've finally got the thing up and running, having replaced the power supply, motherboard, graphics card and CPU. Now, however, I'm experiencing the 'hardware reserved RAM' issue. Perhaps this is the exhaustion talking, but looking at the question that tends to get pointed to when this kind of topic gets locked as a duplicate hasn't helped.

    I have 16 GB of RAM installed in an MSi 970A-G46, which is spec'd for up to 32 GB of RAM. The BIOS recognizes that I have 16 GB installed, and Resource Monitor also shows the whole 16 GB, only it shows 8 GB as hardware reserved. I've seen suggestions that it's an OS issue, but the particular installation of Windows 7 (64-bit) on my boot drive is the same one that could access the full 16 GB on my previous motherboard (MSi 870A-G54). I've updated the BIOS using the MSi Live Update tool and restarted the machine with no effect, and I cannot locate any 'Memory Remapping' option as I've seen mentioned. I've physically swapped the RAM between the slots to no effect. I've also unchecked the Maximum Memory box in the advanced options of msconfig's Boot tab, again to no effect.

    These are my system's basic specifications:
    - OS: Windows 7 Home Premium (64-bit)
    - Motherboard: MSi 970A-G46
    - CPU: AMD FX-8150
    - Graphics card: XFX Radeon HD 6870
    - Boot drive: OCZ Agility 3
    - Storage drive: Samsung Spinpoint F3 ST1000DM005/HD103SJ 1TB
    - PSU: Thermaltake TR-2 TR600 600W ATX12V v2.3

    Read the article

  • Rkhunter reports file properties have changed

    - by CountMurphy
    I am running a fully updated LTS copy of Ubuntu Server. Today I ran rkhunter (as I do from time to time). This is the output I got:

        Warning: The file properties have changed:
        [15:52:25] File: /bin/ps
        [15:52:25] Current hash: f22991ec93ae966c856d367f42fc3d8a484bd827
        [15:52:25] Stored hash : 1892268bf195ac118076b1b0f53e7a637eb6fbb3
        [15:52:25] Current inode: 142902    Stored inode: 130894
        [15:52:25] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:25] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:33] File: /usr/bin/ldd
        [15:52:33] Current hash: f1e2ca5aa3a28994e2cebb64c993a72b7d97b28c
        [15:52:33] Stored hash : 295d9cedb121a5e431a39a6d201ecd7ce5640497
        [15:52:33] Current inode: 2236210    Stored inode: 2234359
        [15:52:33] Current size: 5280    Stored size: 5279
        [15:52:33] Current file modification time: 1331165514 (07-Mar-2012 16:11:54)
        [15:52:33] Stored file modification time : 1295653965 (21-Jan-2011 15:52:45)

        Warning: The file properties have changed:
        [15:52:37] File: /usr/bin/pgrep
        [15:52:37] Current hash: 3eada9a96760f3e2c9111cfe32901d1432813c1d
        [15:52:37] Stored hash : ce265d0db9964b173fe5036f703a9b8d66e55df3
        [15:52:37] Current inode: 2229646    Stored inode: 2224867
        [15:52:37] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:37] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:41] File: /usr/bin/top
        [15:52:41] Current hash: 6be13737d8b0950cea2f1ae3a46d4af713dbe971
        [15:52:41] Stored hash : c7b495ecef3982eeb6f08a511861b1a1ae8775e6
        [15:52:41] Current inode: 2229629    Stored inode: 2224862
        [15:52:41] Current file modification time: 1324307913 (19-Dec-2011 07:18:33)
        [15:52:41] Stored file modification time : 1260992081 (16-Dec-2009 11:34:41)

        Warning: The file properties have changed:
        [15:52:53] File: /usr/sbin/cron
        [15:52:53] Current hash: e783ca973f970aa8a4bf5edc670e690b33914c3d
        [15:52:53] Stored hash : 4718257a8060736b9058aed025c992f02a74a5a7
        [15:52:53] Current inode: 2224719    Stored inode: 2228839
        [15:52:54] Current file modification time: 1330965568 (05-Mar-2012 08:39:28)

    There were also a few others I left out. Has my server been rooted? I am running fail2ban and I do monitor failed SSH logins; nothing has come up. Could someone compare these hashes to their copy of Ubuntu Server (LTS)? Please tell me these are false positives...

    Edit: is there something else like rkhunter that I can run for a second scan?
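
    To compare hashes across machines, it's enough to recompute them on a known-good box running the same Ubuntu release and package versions; the sketch below does that for the files flagged above (the flagged hashes are 40 hex characters, i.e. SHA-1). Keep in mind that a routine security update to packages such as procps or cron also changes these hashes, which is a common benign cause of this warning; on Debian/Ubuntu, debsums can verify installed files against the package checksums, and rkhunter --propupd refreshes rkhunter's baseline once you are satisfied the changes are legitimate.

        # Recompute SHA-1 hashes of the files flagged in the rkhunter output above,
        # so they can be compared with the same files on another machine.
        import hashlib

        FILES = ["/bin/ps", "/usr/bin/ldd", "/usr/bin/pgrep", "/usr/bin/top", "/usr/sbin/cron"]

        for path in FILES:
            h = hashlib.sha1()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
                print("%s  %s" % (h.hexdigest(), path))
            except (IOError, OSError) as exc:
                print("error reading %s: %s" % (path, exc))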

    Read the article

  • Starting old computer - nothing shown on screen at boot

    - by Jonas
    I'm trying to start a roughly 10-year-old PC. Nothing is shown on the screen, and it beeps every time I press a key on the keyboard. I can press Ctrl+Alt+Del to reboot the computer. The monitor is newer and seems to work with other computers. I don't see anything from the POST/BIOS at start or later. I have tried changing to another graphics card, but it didn't change anything. What can I do to solve this problem?

    Update: I have now tried with another computer (the one the other graphics card came from) and I got the same problem - it doesn't show anything on the screen. Both of these computers had a GeForce2 MX 400 graphics card. When I tried with another computer screen it didn't work either; it was showing "signal out of range". So is the GeForce2 MX 400 too old for newer TFT monitors? I tried with a third computer, so I know the monitors are working; both monitors work fine with that computer.

    Read the article

  • Nagios DNX plugins

    - by danneh3826
    I'm toying with the idea of multiple Nagios instances set up to monitor our infrastructure. I've looked at the various methods of distributed Nagios checks, and I think DNX comes out the closest. DNX handles failure of worker nodes, and that's fine, but what happens if the main DNX server fails? Is there a way to replicate the server too? I'm using AWS EC2 primarily, so I can utilise Elastic Load Balancing for the web UI, but I need to be able to handle failure of the AZ where the monitoring server lives and essentially have a second instance pick up the checking load (active/passive or active/active, so long as it doesn't fail completely).

    The other thing I'm trying to solve is an issue with routing. What I'd like is to have multiple nodes report a fault before Nagios confirms it as critical. Not for NRPE checks, as they're pretty self-explanatory, but for things more like check_ping. I often have routing issues out of AWS to certain datacenters, so Nagios can report bad/no ping or timeouts as a critical issue even though the machine in question is working fine. Would it be possible to have a setup where a worker reports a service check as critical, and a second worker node (positioned in another datacenter/AZ) must also report the service as critical before the Nagios central server issues a critical alert? I realise I might be asking a bit much (how far down the line do you go setting up failover systems before it starts to get ridiculous), but surely someone must have thought of this scenario when developing DNX?

    Read the article

  • NGINX AEGIR DRUPAL permissions 403 forbidden

    - by nlam
    I'm new to nginx, installed on Mac OS for use with Aegir and Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here: /var/aegir/hostmaster-6.x-1.7/. The hostmaster settings file is here: /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php. Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other", the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private); I would have to change the permissions there too. Moreover, I would also have to change permissions for every site installed by hostmaster. Anyway, in my nginx.conf I have the following:

        user "myuser" _www;

    The owner and group for settings.php, sites/example.ldev/files and sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem but really confuses me: why does "other" have permission and not the owner or group? I've checked the processes running in Activity Monitor; nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.

    Read the article

  • Get Matrox Millenium video card working in Ubuntu 9.10

    - by wcoenen
    I have installed Ubuntu 9.10 on an old PC and it is mostly working, except for some heavy drawing defects that show up whenever I start dragging a window or scrolling inside a window or menu. It looks like the video driver copies the rectangle being moved to the wrong location. I have taken a look in /var/log/Xorg.0.log and the following lines show the detected video card:

        (--) PCI:*(0:0:8:0) 102b:0519:0000:0000 Matrox Graphics, Inc. MGA 2064W [Millennium] rev 1, Mem@ 0xf9800000/16384, 0xfb000000/8388608, BIOS @0x????????/65536
        (==) Using default built-in configuration (30 lines)
        (==) --- Start of built-in configuration ---
        Section "Device"
            Identifier "Builtin Default mga Device 0"
            Driver "mga"
        EndSection

    How do I fix the drawing defects?

    It turned out that the 24-bit color depth (automatically selected by Ubuntu 9.10) was the problem; apparently the mga driver doesn't handle it well for cards with little memory. I took the following steps to resolve the issue (you can skip the first three steps if you already have a semi-working xorg.conf file):

    1. Reboot Ubuntu in recovery mode to get a root console without X running.
    2. Run Xorg -configure to generate a xorg.conf.new file.
    3. Copy the file to /etc/X11/xorg.conf with cp xorg.conf.new /etc/X11/xorg.conf (assuming it didn't exist yet; that's why I generated it).
    4. Open the new config file with sudo nano /etc/X11/xorg.conf and make sure the Screen section is configured for 16-bit color depth, like this:

        Section "Screen"
            Identifier "Screen0"
            Device     "Card0"
            Monitor    "Monitor0"
            DefaultDepth 16
            SubSection "Display"
                Viewport 0 0
                Depth 16
                Modes "1024x768"
            EndSubSection
        EndSection

    I can't guarantee those were the only important changes I made - I tried a few things in my attempts to create a valid xorg.conf file - but I'm pretty sure that the Screen section was the important part.

    Read the article

  • Extend Linux Desktop to another X Windows Display

    - by unknown (google)
    Hello, I am a long-time Linux user of Xinerama and other technologies for extending a desktop across multiple monitors. However, when I travel with my laptop I miss the multi-monitor support I enjoy at home. Recently I acquired a second laptop for a low price. Both laptops are running Fedora (versions 10 and 11 respectively), and I use GNOME as my primary desktop environment. I know about Synergy; I use it all the time to control the screens of other Windows and Linux systems.

    What I would like to know is: can I sit my primary and secondary laptops together and achieve a Xinerama-like extended desktop environment? Ideally I would start a GNOME session on my primary laptop, then start an X desktop on my secondary laptop and extend my primary laptop's desktop onto it. I would like to be able to move windows from the primary desktop to the secondary laptop's desktop. Would I need to use Synergy together with some other bit of X technology to do this, or is there X technology that will do all of this for me? I am familiar with X's ability to display applications remotely. I am also familiar with NoMachine's NX.

    Read the article

  • Windows Server 2003 IPSec Tunnel Connected, But Not Working (Possibly NAT/RRAS Related)

    - by Kevinoid
    Configuration: I have set up a "raw" IPSec tunnel between a Windows Server 2003 (SBS) machine and a Netgear FVG318 according to the instructions in Microsoft KB816514. The configuration is as follows (using the same conventions as the article):

        NetA        | SBS2003   | FVG318   | NetB
        10.0.0.0/24 | 216.x.x.x | 69.y.y.y | 10.0.254.0/24

    Both the Main Mode and Quick Mode Security Associations are successfully completed and appear in the IP Security Monitor. I am also able to ping the SBS2003 server on its private address from any computer on NetB.

    The problem: Any traffic sent from a computer on NetA to NetB, or from SBS2003 to NetB (excluding ICMP ping responses), is sent out on the public network interface outside the IPSec tunnel (no encryption or header authentication, as if the tunnel were not there). Pings sent from a computer on NetB to a computer on NetA successfully reach computers on NetA, but the responses are silently discarded by SBS2003 (they do not go out in the clear and do not generate any encrypted traffic).

    Possible solutions:

    Incorrect configuration: I could have mistyped something somewhere, or KB816514 could be incorrect in some way. I have tried very hard to eliminate the first option - I have re-created the configuration several times and tried tweaking and adjusting all the settings I could, without success (most changes prevent the SA from being established).

    NAT/RRAS: I have seen multiple posts elsewhere suggesting that this could be due to interaction between NAT and the IPSec filters. Possibly the NetA private addresses get rewritten to 216.x.x.x before being compared with the Quick Mode IPSec filters and don't get tunneled because of the mismatch. In fact, The Cable Guy article from June 2005, "TCP/IP Packet Processing Paths", suggests that this is the case (see steps 2 and 4 of the Transit Traffic path). If so, is there a way to exclude NetA-to-NetB traffic from NAT?

    Any thoughts, ideas, suggestions, and/or comments are appreciated.

    Read the article

  • Second CPU missing of Dual Core

    - by Zardoz
    My Lenovo T61 has a dual-core CPU. I just noticed that under Ubuntu 10.10 only one CPU is recognized. I know that at one point both CPUs worked; I'm not sure since when the second CPU has been missing - maybe since the last kernel update. Currently I am using linux-image-2.6.35-23-generic (for x86_64). What can I do to enable the second CPU again? Here is the output of /proc/cpuinfo:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Core(TM)2 Duo CPU T8100 @ 2.10GHz
        stepping        : 6
        cpu MHz         : 800.000
        cache size      : 3072 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 10
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm ida dts tpr_shadow vnmi flexpriority
        bogomips        : 4189.99
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Any help is welcome. I really need that CPU power for my work here.

    Read the article

  • How can I confirm that the burn process completed successfully?

    - by infant programmer
    Just now I tried to make a duplicate copy of my Windows XP Pro SP3 CD. I set the burn speed to 32X (the maximum allowed is 48X) and checked "Verify the data after burning". I didn't watch how far the burn got, but after 15 minutes (which should be enough time to burn the disc) I heard a "fatal error" sound. When I looked at the monitor, there was a message box showing "Error while burning disc"; I saved the error log, which you can find here: click_me. How do I confirm that the burning process completed? My PC is able to read the CD and the CD is bootable too, but I'm not sure whether the burning process failed or the verify process did. Please let me know if you have come across or are aware of such a situation. As it is an XP installation CD, I don't want to resort to trial and error. Regards,

    Read the article
