Search Results

Search found 19908 results on 797 pages for 'bit ly'.


  • Switching from onboard Intel to Nvidia dedicated GPU

    - by Anarkie
    How can I switch from Intel onboard graphics to the Nvidia dedicated GPU? When I go to the Windows screen resolution dialog I see Intel, and I can't change it. In Device Manager I see both adapters, and the Nvidia one is recognized. There was no option to set one as primary, so I disabled the Intel adapter: black screen! I rebooted and re-enabled Intel. I then right-clicked the desktop, chose "Nvidia Control Panel", and under the 3D options set the game I want to play to high-performance Nvidia, but it didn't switch when I started the game. Then I made the high-performance Nvidia GPU the preferred GPU in the global settings for everything, and it still didn't change.

    I understand there is a switch between the two to save battery and so on, but I never see this switch when it is needed, and I can't switch manually either. Is there a manual Fn switch key? I looked but couldn't find one.

    Why I want to do this: 1) Better game performance. 2) I want to play an old game from 2002 (Diablo 2 LOD); when I start the game there are black bars on the sides, so the picture just becomes smaller, which I dislike. I've heard this centering is an Intel behaviour, but I would rather scale or expand the image to fit the widescreen (fullscreen), which should be possible with Nvidia.

    My notebook specs: Fujitsu Lifebook AH531, Win7 64-bit, i5, Intel HD onboard graphics, Nvidia GT 525. I didn't install the Nvidia card later; it was installed and ready from the moment I first turned the computer on.

    How I determined that the cards weren't switched while playing: I exited the game with the Windows key, looked at the screen resolution menu, and still saw Intel; the game also still had the black bars. I know the Intel GPU should be enough for Diablo 2, but I am interested in this answer for further games; I don't always play Diablo. What if I install an up-to-date game, for example? Then Intel will not be sufficient, so I would like to learn how the switching works.


  • HP Laserjet 1320 drivers for Windows 7

    - by RedGrittyBrick
    Background: I replaced some XP computers with Windows 7 computers. We have an HP Laserjet 1320dn printer attached to the LAN. The XP computers could print to it from Word in duplex mode without any issues.

    Problem: The Windows 7 computers downloaded a full set of drivers, including an "HP Laserjet 1320 PCL5" driver. Using it, however, Word's page footers cause extra pages to be printed with just a letter or so at a time from the footer. Some other applications have similar issues. I also have an ancient Laserjet 1200 attached to the LAN via a Jetdirect box, and that works just fine, so I don't think the problem is with Word (the computers came with Word 2010 Starter).

    What I tried: I wrangled the Control Panel new-printer dialogue into using an "HP Laserjet 2100 PS" driver for the Laserjet 1320dn. Now my Word documents print as they should. However, I no longer have a duplex option in the print dialogue, and I'd really like to be able to use duplex printing.

    Question: Windows used to have a universal PostScript driver that read a PostScript Printer Definition (PPD) file to make the printer's features available (tray choice, duplex, etc.). I can't see any way to do this in Windows 7. Is there a way? Failing that, is there any other way I can get Windows 7 Home Premium 64-bit to print properly to an HP Laserjet 1320dn and have access to all its major features?

    Addendum: My page in Word looks like this:

        +--------------------+
        | aa    bb    cc     |
        |                    |
        | lorem ipsum dolor  |
        |        ...         |
        |                    |
        | pp    qq    rr     |
        +--------------------+

    The headers and footers were inserted using Insert - Header - Blank (3 column). When printed I get:

    1 page with aa, bb, cc, pp in the correct positions (no other text)
    1 blank page
    1 page with qq in the correct position (no other text)
    1 blank page
    1 page with rr in the correct position (no other text)
    1 blank page
    1 page with lorem ipsum dolor in the correct position (no headers or footers)

    If I use the Laserjet 2100 driver I get 1 correct page.


  • Using Different Networks with Different Proxy Servers on Windows 7

    - by John
    Hi, I have a laptop running Windows 7 Professional. There are two wireless networks I connect to every day:

    Home: no proxy server
    Work: proxy server with authentication

    On my iPad and iPhone, I've got two Wi-Fi network profiles (one for home, one for work). The work one has the proxy server settings specified; the home one has no proxy specified. It all works great, and I don't need to change settings whenever I move between home and work.

    On my laptop, however, I can't seem to get this going. I can certainly connect to both networks, but when I'm at work I have to go and change the proxy settings (in Internet Options) to be able to use the network, and when I'm at home I have to turn them off again. It's a small thing, but considering it's something I do every day, it's a bit annoying. Is there any way I can make Windows automatically switch proxy settings on or off based on the network I'm connected to? Thanks, John
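
    One workaround worth sketching (untested, and the SSID name and proxy address below are placeholders): Internet Options stores its proxy in the per-user registry, so a small batch script run by a scheduled task on network connect could flip it based on the currently connected SSID.

        @echo off
        rem Sketch: enable the work proxy only when connected to the "WorkWiFi" SSID (placeholder names).
        netsh wlan show interfaces | findstr /C:" SSID" | findstr /C:"WorkWiFi" >nul
        if %errorlevel%==0 (
            reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
            reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d "proxy.corp.example:8080" /f
        ) else (
            reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 0 /f
        )

    Applications that cache WinINET settings (including open browsers) may need a restart before they notice the change.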


  • How to remove NTFS system files from a previous Vista installation

    - by Boldewyn
    I'm trying to shrink my system partition under Windows Vista. It all goes fine, except that in front of the last 300 MB of the volume sits a single file that cannot be moved by defrag or any other means. It's called C:\$Extend\$UsnJrnl:$J, and my assumption is that it is left over from a previous installation of Vista, from when I re-set up the system.

    Googling for this kind of file brings interesting results, but no solution to my problem:

    Files left on the disk can become ownerless in a new installation of Windows and inaccessible (even to administrators). To regain access, the usual tip is to use takeown to re-assign them to the Admin group (or anyone else). That works like a charm for normal files, but not for the C:\$Extend stuff.

    The C:\$Extend folder is a system folder of the NTFS file system, where the journal is stored (specifically in a file called $UsnJrnl:$Data, whose name is suspiciously close to mine). You can delete the journal with fsutil usn /delete C:; however, this doesn't work from within the booted system (as I found out by trying), and I'm not quite sure of the side effects.

    You can't move NTFS's own files with standard defrag tools, and the same holds for inaccessible files.

    Every bit of knowledge out there targets either inaccessible files or the $Extend NTFS stuff, but no one addresses my problem, which involves both: an inaccessible system file. Question: how can I remove this file, or at least move it on the disk?
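
    For reference, the /delete spelling quoted above doesn't match fsutil's actual verbs; a sketch of the commands, run from the Vista install DVD's "Repair your computer" command prompt so the journal isn't in use (note the drive letter may differ in the offline environment):

        fsutil usn queryjournal C:
        fsutil usn deletejournal /d C:

    Windows typically recreates the journal once the volume is back in normal use, so the shrink would need to happen before booting back into the full system. This is a sketch of the documented commands, not a tested procedure.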


  • Using Samba to share a folder from a Linux guest with a Windows host in VirtualBox

    - by AmV
    I would like to share a folder from a Linux guest with a Windows host (with read and write access if possible) in VirtualBox. I read in these two links: here and here that it's possible to do this using Samba, but I am a little bit lost and need more information on how to proceed. So far, I have managed to set up two network adapters (one NAT and one host-only) and install Samba on the Linux guest, but now I have the following questions:

    1. What do I need to type in smb.conf to share a folder from the Linux guest? (The tutorial provided in one of the links above only explains how to share home directories.) A sketch follows below.
    2. Are there any Samba commands that need to be executed on the guest to enable sharing?
    3. How do I make sure that these folders are only available to the host OS and not on the Internet?
    4. Once the Linux guest is set up, how do I access each of the individual shared folders from the Windows host? I read that I need to map a drive on Windows to do this, but do I use Samba logins or Linux logins? Do I use localhost, or do I need to set up an IP for this? Thanks!
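
    A minimal smb.conf sketch for the guest, assuming the VirtualBox host-only network uses its default 192.168.56.x range (the share name, path, and user are placeholders). Binding Samba to only the loopback and host-only interfaces also answers question 3, since the NAT adapter never sees the service:

        [global]
           security = user
           # answer only on loopback and the host-only adapter, not the NAT one
           interfaces = lo 192.168.56.0/24
           bind interfaces only = yes

        [myshare]
           path = /home/youruser/shared
           valid users = youruser
           read only = no

    For question 2, the Linux account needs a separate Samba password (smbpasswd -a youruser), and Samba must be restarted after editing the conf. For question 4, you would map a drive from Windows to \\<guest host-only IP>\myshare and log in with that Samba username and password; Samba accounts must correspond to existing Linux users, but their passwords are managed separately.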


  • Completed downloads freeze Windows

    - by Ben Hooper
    The Issue

    Shortly after a file download via Google Chrome for Windows completes, the download gets stuck on "0 seconds left" and all other programs completely freeze into Windows' infamous "Not Responding" state, affecting Explorer particularly badly (Google Chrome itself stays up, for some reason, but browsing stops working). Eventually the programs recover on their own, but they recover significantly faster if you cancel the file download, relative to how quickly you react. Performing the exact same operation immediately after cancelling the download usually works without issue. The issue occurs with any file type (.ZIP, .MSI, .MSG, .PNG, .URL, etc.) of any size, from any source (Dropbox, SourceForge, Imgur, even tiny and locally-generated BLObs created by my own Chrome extension), to any location.

    Potential Causes

    As this issue is so inconsistent, I haven't been able to prove whether it is Chrome-specific or caused by my system or my Chrome configuration, but it is happening on both my work and home PCs. I originally suspected that completed downloads were being scanned for threats by security software, but I'm not as confident in that theory anymore, as the issue persisted even after changing my security software from ESET NOD32 and Malwarebytes Anti-Malware Pro to ESET Endpoint and then to Microsoft Security Essentials.

    System Information (of both PCs)

    Windows version: 7 Service Pack 1 64-bit
    Google Chrome version: 30.0.1599.101 (but this has been happening for a long time)


  • Load balancing with puppet

    - by Gonçalo Queirós
    Hi there. I'm trying to set up a load-balancing system. My load balancer (nginx) has a conf file where I should list all the IPs of the upstream servers. I could put the IPs in the conf manually, but that way I would need to change the conf file every time I add or remove an upstream server. For now I have come up with two different ideas, but I don't much like either (see the sketch after this list):

    1 - Have every upstream machine use Exported Resources to create a file with its IP. The load balancer server then has an "include conf_directory/*" and loads all the files created by the upstream servers. Since the load balancer is nginx this can be done, but if I later want to configure something that doesn't support "include" in its conf files, this solution will not work.

    2 - If the config doesn't support an "include" command, then every upstream server again uses Exported Resources to create a file with its IP, and the load balancer executes a command that picks up every file and generates the config.

    Both versions adopt the same technique; the difference is that version 2 is for when the server that needs a generated conf doesn't recognise a command like "include" inside its own conf. Now, my question is: is there any way to do this in a different form? I suspect there is; since Puppet is made to manage multiple servers, it seems a bit strange for there to be no easy way to configure load balancers.
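
    One common pattern that avoids depending on "include" is to combine exported resources with a fragment-assembling module such as puppetlabs-concat, which stitches the collected fragments into a single finished file. A rough sketch (paths, port, and the nginx service resource are assumptions):

        # on every upstream node: export a fragment carrying this node's IP
        @@concat::fragment { "upstream-${::hostname}":
          target  => '/etc/nginx/conf.d/upstreams.conf',
          content => "  server ${::ipaddress}:8080;\n",
          order   => '50',
        }

        # on the load balancer: assemble one file from all exported fragments
        concat { '/etc/nginx/conf.d/upstreams.conf':
          notify => Service['nginx'],   # assumes service { 'nginx': } is managed elsewhere
        }
        concat::fragment { 'upstreams-header':
          target  => '/etc/nginx/conf.d/upstreams.conf',
          content => "upstream backend {\n",
          order   => '01',
        }
        concat::fragment { 'upstreams-footer':
          target  => '/etc/nginx/conf.d/upstreams.conf',
          content => "}\n",
          order   => '99',
        }
        Concat::Fragment <<| target == '/etc/nginx/conf.d/upstreams.conf' |>>

    Because the output is one ordinary file, the same approach works for software with no include mechanism at all, which was the sticking point with idea 1.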


  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, a tunnel for Remote Desktop/VNC, etc.

    Most days it works great, but some days every page I try to view gets stuck, like the tunnel is clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I refresh the page, I reboot the router, I reboot the other computers on my home network and turn off any bandwidth-hogging services on them, and I've turned on QoS on the router to prioritise SSH; I still don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't do anything on the tunnel for a few minutes: it will then be responsive for a bit before clogging again.

    Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (the router's own web config through the tunnel) is slow, laggy, and halting. The realtime bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy periods, and the connection is responsive when it shows low bandwidth. How do I troubleshoot something like this?
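
    Not a fix for the congestion itself, but a cheap first step that rules out half-dead TCP sessions: application-level keep-alives. With OpenSSH the settings would look like the sketch below (the host names are placeholders; Tunnelier exposes equivalent keep-alive options in its own settings):

        # ~/.ssh/config (sketch)
        Host home-router
            HostName myhome.example.net
            ServerAliveInterval 30
            ServerAliveCountMax 4

    If the tunnel still clogs with keep-alives flowing, that points more toward queueing on the DSL uplink (the bandwidth spikes you see) than toward the SSH session dying.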


  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |   Enterprise x64   |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly at random), the parse operation fails with a syntax error. The error message shows the part of the script in error, which appears as some SQL from the original script that has been truncated and had random characters appended to it. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
        AND SC.COLUMN_NAME = 'Identity'
        AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa?

    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script itself is fine. It appears that something is corrupting the SQL being received by the server, which leads me to think the problem lies either at the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from a period where an error occurs, and it shows the SQL is already corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly at random, it is also very hard to produce reproduction steps for a bug report. Any ideas?


  • Ubuntu network card problem.

    - by Steve Greene
    Hello folks, several days ago I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem: for reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, although it all works fine under Windows Vista.

    In the top right of the Ubuntu desktop I click on the network icon, and it does not show a wireless connection at all. At home, where I use a dial-up modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, driver version 7.58.0.0).

    I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch? My processor is a Mobile AMD Sempron 3500+ at 1.80 GHz, with 1.50 GB RAM and a 32-bit operating system.


  • Dell OMCI: Wacky values for Temperature and etc? (Win7x64)

    - by Yablargo
    Hey all. I am running a Dell Precision R5400 workstation with Dell OMCI installed. I am using it to test polling various data over WMI for our monitoring across the enterprise, and I'm getting some weird results; perhaps someone can help point me in the direction of some clarification. Posted below are the results of my DCIM\SYSMAN\DCIM_NumericSensor probe for sensor type 2 (temperature sensor):

        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        -----------------------------------
        DCIM_NumericSensor instance
        -----------------------------------
        Accuracy:
        AccuracyUnits:
        AdditionalAvailability:
        Availability:
        AvailableRequestedStates:
        BaseUnits: 2
        Caption:
        CommunicationStatus:
        CreationClassName: DCIM_NumericSensor
        CurrentReading: -214748365
        CurrentState: Unknown
        Description:
        DetailedStatus:
        DeviceID: Root/MainSystemChassis/TemperatureObj
        ElementName: Temperature Sensor:CPU0
        EnabledDefault: 2
        EnabledState: 2
        EnabledThresholds:
        ErrorCleared:
        ErrorDescription:
        HealthState: 5
        Hysteresis:
        IdentifyingDescriptions:
        InstallDate:
        IsLinear:
        LastErrorCode:
        LocationIndicator:
        LowerThresholdCritical:
        LowerThresholdFatal:
        LowerThresholdNonCritical:
        MaxQuiesceTime:
        MaxReadable:
        MinReadable:
        Name:
        NominalReading:
        NormalMax:
        NormalMin:
        OperatingStatus:
        OperationalStatus: 2
        OtherEnabledState:
        OtherIdentifyingInfo:
        OtherSensorTypeDescription:
        PollingInterval:
        PossibleStates: Unknown,Normal,Fatal,Lower Non-Critical,Upper Non-Critical,Lower Critical,Upper Critical
        PowerManagementCapabilities:
        PowerManagementSupported:
        PowerOnHours:
        PrimaryStatus:
        ProgrammaticAccuracy:
        RateUnits: 0
        RequestedState: 12
        Resolution:
        SensorType: 2
        SettableThresholds:
        Status:
        StatusDescriptions:
        StatusInfo:
        SupportedThresholds:
        SystemCreationClassName: DCIM_ComputerSystem
        SystemName: dt:5Q7BKK1
        TimeOfLastStateChange:
        Tolerance:
        TotalPowerOnHours:
        TransitioningToState: 12
        UnitModifier: 0
        UpperThresholdCritical:
        UpperThresholdFatal:
        UpperThresholdNonCritical:
        ValueFormulation: 2

    I'm not really sure what's going on, but note the CurrentReading: -214748365. I have reinstalled OMCI a few times and installed the OMCI 7x compatibility package, and I consistently get the same result. It almost looks like a 32/64-bit value issue or something. Do I have to convert it to a float? :)
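
    A side observation, offered as a guess rather than a diagnosis: -214748365 is suspiciously close to INT_MIN (-2147483648) divided by ten, which smells like a "reading unavailable" sentinel rather than 32/64-bit truncation, especially since CurrentState is Unknown for the same instance. A quick way to compare readings across all temperature sensors without a script is wmic from an elevated prompt (a sketch; the where-clause mirrors the SensorType filter above):

        wmic /namespace:\\root\dcim\sysman path DCIM_NumericSensor where "SensorType=2" get ElementName,CurrentReading,UnitModifier,CurrentState

    If every sensor returns the same giant negative number, the provider is likely reporting no data at all rather than mangling real values.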


  • Should I embed the sRGB color profile in JPEG files?

    - by basic6
    I have a large (growing) collection of scanned images. They are TIFF files, mostly 48-bit with the Adobe RGB color space, and this color profile is embedded in the files. When such a file is opened in IrfanView (with plugins), it reports (Image - Information) "Adobe RGB 1998". "Normal images", like the JPG files from a digital camera, do not (necessarily) have a color profile embedded in the file.

    I understand that it's necessary to include the Adobe RGB profile in an image file which uses the Adobe RGB space, so the color values can be interpreted correctly. (As a test of this, a test image with a completely different embedded color profile is rendered as if it were sRGB by programs that ignore the included profile, like MSIE8 or Gwenview.)

    I'm planning to convert my TIFF files to JPG, so I'm wondering if there's anything wrong with using IrfanView in a way that would save them as sRGB without embedding the sRGB profile. I've heard that images should always be saved with the color profile included, but since every image seems to be interpreted as sRGB by default (by software without color management), I don't understand why the sRGB profile should be included.


  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running WordPress on a 1 GB CentOS server configured per the internet (online research). No custom queries or anything crazy, but we're closing in on 8K posts.

    At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and spins more or less indefinitely. On the server side, the shell locks completely. We have to do a hard reboot from the control panel and then everything is fine. Watching top, I see memory usage hovering between 35-55% generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge, and error_log tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail.

    I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month it seems like that's overkill, since the site was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
        Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
        Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
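
    The usual rule of thumb before adding RAM: MaxClients multiplied by the average Apache process size must stay below what's free after MySQL takes its share, otherwise a traffic burst swaps the box to death exactly as described. A conservative prefork sketch for a box this size (the numbers are assumptions to tune against the actual per-process RES you see in top):

        <IfModule prefork.c>
            StartServers          4
            MinSpareServers       2
            MaxSpareServers       8
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    At, say, ~30 MB per PHP-laden Apache process, 20 workers top out around 600 MB, leaving headroom for MySQL and the OS cache; requests beyond that queue briefly instead of taking the server down.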


  • Outlook / Gmail 'too many simultaneous connections' error

    - by sam
    I'm just setting up Outlook for Mac, and I'm trying to add a Google Apps for Business email account (Gmail). I've set it up correctly (the same details worked in Mac Mail), but I keep getting one of two errors: either the "too many simultaneous connections" error from the title, or just an error asking for the username and password again. Just to confirm, the username and password are correct, although when I go into the menu command Tools - Account and look in the password field for that account, it's blank. If I just click Cancel on the popup asking for my username and password, it continues to get mail in the background for about 30 seconds before asking for the password again, or showing the connections error, which I can click "yes" to, and again it will get mail, but after 30 seconds it does the same thing.

    I've got two other accounts set up fine: a Horde account (hosted webmail using POP3) and an iCloud .me account running on IMAP. What might be causing this, and how can I remedy it?

    A bit more background: the machine is a MacBook Pro running Mac OS X v10.7 (Lion).

    Update 2013-11-02: I've updated Outlook to SP3, but I still get the same error.


  • Graphics Artifacts/ Texture Flickering

    - by Cerin
    I've been having some problems with artifacts in games: textures sometimes flicker, and artifacts of various shapes and sizes show up, usually after a couple of games of Dota 2. I built my computer almost exactly one month ago and it has been doing this pretty much from the start, except that at first the artifacts just flashed on screen too fast for me to tell what they were, though I still noticed them. In Dota I've seen green triangular artifacts, among other things.

    I've tried running FurMark for a while, but even though it pushes the GPU much harder than Dota 2, there are still no artifacts. The GPU maxes out at about 60C in FurMark and around 40C in every game I've tried; CPU and system temperatures don't usually get higher than 40C either.

    These are my system specs:

    Gigabyte Z68 Intel motherboard
    16 GB G.Skill Ripjaws DDR3 SDRAM
    Sapphire Radeon HD 7770 GHz Edition
    Intel Core i5-2500K (with built-in GPU)
    Corsair 750 W PSU
    Windows 7 64-bit

    I have the latest drivers for everything. What should I do about this? Try to RMA my graphics card? Are there other things that could be causing this?


  • Sudden problems with iptables not running

    - by Fourjays
    I've got a sudden issue with iptables not running on my CentOS 5.8/DirectAdmin Xen VPS. All I have done today is install PHP APC and run an update (although I admittedly didn't pay much attention today; I usually do). iptables had been running smoothly since I installed it over six months ago. Basically, when I try to run iptables -L it tells me:

        iptables v1.3.5: can't initialize iptables table `filter': iptables who? (do you need to insmod?)
        Perhaps iptables or your kernel needs to be upgraded.

    I've looked around and tried a few things, and it appears that my kernel may not have the modules loaded. I've been reading this and tried the two commands it suggests, to no avail, except that there does appear to be a mismatch in one bit of output:

        -bash-3.2# cd /lib/modules
        -bash-3.2# ls
        2.6.18-194.32.1.el5xen   2.6.18-238.5.1.el5xen  2.6.18-274.7.1.el5xen  2.6.39.1-cs-domU
        2.6.18-238.12.1.el5xen   2.6.18-238.9.1.el5xen  2.6.37.2-cs-domU       3.0.1-cs-domU
        -bash-3.2# depmod -a
        WARNING: Couldn't open directory /lib/modules/2.6.18-274.18.1.el5xen: No such file or directory
        FATAL: Could not open /lib/modules/2.6.18-274.18.1.el5xen/modules.dep.temp for writing: No such file or directory

    Does this mean the versions are out of sync? If so, what are my next steps to getting this fixed? As you can probably tell, I am still learning how to manage my server, so please be very clear in all advice. Many thanks :)
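
    The depmod warning is the likely clue: depmod is looking for 2.6.18-274.18.1.el5xen, presumably pulled in by today's update, but no matching directory exists under /lib/modules, so netfilter modules cannot load for whatever kernel is actually running. A quick check (sketch):

        # does the running kernel have a matching module tree on disk?
        uname -r
        ls /lib/modules
        # if they match, try loading the filter table module by hand
        modprobe ip_tables

    On a Xen VPS the kernel often comes from the host (the *-cs-domU entries look provider-supplied), so the fix is usually either installing the kernel package that matches uname -r or asking the provider which kernel the guest boots and whether it ships netfilter modules at all.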


  • Sending mail results in "Sender address rejected: Domain not found"

    - by user1281413
    The setup: a WHM/cPanel CentOS 5 server running Exim and Courier for mail services, and BIND for domain name services. I recently moved servers. The old server was running a highly similar configuration, and all accounts were ported via WHM. However, the new server is unable to send, and sometimes receive, email. The errors I am seeing (when I do get an error mail back) state:

        450 4.1.8 : Sender address rejected: Domain not found

    Edit for clarity: this is the error response from remote mail servers, and numerous independent mail servers come back with the same error. (The email address is merely one valid example.)

    My first instinct of course was to check the domain records. However, k-t.org appears to have a valid record (including an MX record), even after running it through domain checks on a completely different server elsewhere and online. Note that the issue appears to happen with all the domains hosted on the server, not just k-t.org. I have also ensured that a PTR record was created.

    My Googling has only led me to people who had fairly basic DNS mistakes, but either I'm blind/dumb (possible; DNS is not my strong suit) or it's something a bit more obscure. I've run out of ideas, and I can't seem to find anything that could explain why remote servers are unable to resolve the domains. There doesn't seem to be anything missing or incorrect.
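
    Since the rejection comes from remote servers, the test that matters is whether outside resolvers can see the sending domain, not whether it resolves locally. A sketch of checks to run from a host outside your own network (dig ships in the bind-utils package; the IP is a placeholder from the documentation range):

        dig +short MX k-t.org          # does the outside world see an MX record?
        dig +short A k-t.org           # and an A record for the bare domain?
        dig +short -x 203.0.113.10     # PTR for the server's sending IP

    If these resolve on the server but not externally, the usual suspects after a move are registrar glue records still pointing at the old server's nameservers, or zones that were ported in WHM but never published by the BIND instance the world actually queries.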


  • How to make an x.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so the test uploads from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown certificate), but my Java program throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target").

    So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route: I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.)

    I found that Java has a program called keytool that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "x.509" file. A posting from the FZ side said it was "PEM encoded". I have openssl here, which looks like it's perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong, because I'm not seeing anything obvious. My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
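
    A note on terminology that may unblock this: a PEM file normally already contains an X.509 certificate, just base64-encoded between BEGIN/END CERTIFICATE markers; DER is the binary form of the same thing. A sketch of the conversion and import, assuming the FZ-generated file is certificate.crt (if FZ wrote the private key into the same file, first copy just the CERTIFICATE block into its own file; the file names and alias are placeholders):

        # optional: convert PEM -> DER (binary X.509), in case keytool rejects the PEM
        openssl x509 -in certificate.crt -outform DER -out certificate.der

        # import into a throwaway trust store and point the JVM at it
        keytool -import -alias fz-test -file certificate.crt -keystore test-truststore.jks
        java -Djavax.net.ssl.trustStore=test-truststore.jks YourUploader

    This keeps the production code path intact: the program still verifies certificates normally, it just trusts this one self-signed cert during local testing.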


  • Overscan in Catalyst Control Center

    - by alex
    I've connected my PC to my TV via HDMI (I've also tried using DVI). Whatever the resolution (anything from 800 x 600 to 1080p), the image displayed on the TV does not fill the entire screen surface; there's a black border on the edge of the image. This happens in both Vista and Windows 7. After a bit of searching I came to the conclusion it's got something to do with overscan done by the graphics card (ATI 3200HD). It's definitely not the TV, because on my old PC there were no such problems.

    I've searched for the option in Catalyst Control Center 9.9, but it's not available for the TV; if I connect a normal LCD display, the option is there, but only for the LCD screen. If I choose Configure for my TV, where the overscan option should be, it takes me to the Welcome tab in CCC. How can I make the image fill the entire screen? How can I enable/disable overscan for my TV?


  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet, etc., so I know the network itself is working.)

    I have a normal home network with a Windows XP machine on it and the new Windows 7 (64-bit) machine. So I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere?

    So far I have:

    put both computers into the same workgroup
    put the Windows 7 machine into "work network" mode so it can see the XP machine in the workgroup
    shared the XP disk as read only

    But when I try to access a lot of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine; the XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on, so it seems that even if I create an admin account (with password) and connect with that account, it still comes in as "guest" on the XP machine.

    Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but it's a pain, as I need to share each user's folder under a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.


  • Unable to mount root fs over NFS [on hold]

    - by johnmadrak
    I am attempting to set up a Raspberry Pi running Pidora to boot from an NFS share. My configuration in cmdline.txt is:

        dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs nfsroot=<serverip>:/fake/path,nfsvers=3,rw,nolock nfsrootdebug ip=dhcp elevator=deadline rootwait

    On the Pi, the output I see is:

        IP-Config: Got DHCP answer from <router>, my address is <clientip>
        IP-Config: Complete:
             device=eth0, hwaddr=<macaddress>, ipaddr=<clientip>, mask=255.255.255.0, gw=<routerip>
             host=<clientip>, domain=, nis-domain=(none)
             bootserver=<routerip>, rootserver=<serverip>, rootpath=
             nameserver0=<routerip>
        (it pauses for a bit here)
        VFS: Unable to mount root fs via NFS, trying floppy
        VFS: Cannot open root device "nfs" or unknown-block(2,0); error -6
        Please append a correct "root=" boot option; here are the available partitions:
        .....

    On the NFS server (an OpenVZ container), the output I see in /var/log/messages is:

        Aug 22 23:24:01 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:783 for /fake/path (/fake/path)
        Aug 22 23:24:38 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:741 for /fake/path (/fake/path)
        Aug 22 23:25:25 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:752 for /fake/path (/fake/path)
        Aug 22 23:26:12 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:876 for /fake/path (/fake/path)

    To test, I've made sure I can mount the share (non-root) from both the Pi and another machine, and that worked. Does anyone have an idea of what could be wrong, or how to narrow it down? Thank you in advance for your help.
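
    One thing worth double-checking is the export itself: an NFS root filesystem is mounted by the kernel as root, so the export generally needs no_root_squash, and the in-kernel NFS-root client is pickier than a normal userspace mount. A sketch of the relevant /etc/exports line on the server (the subnet is a placeholder):

        /fake/path  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

    It is also worth confirming the Pi's kernel was built with CONFIG_ROOT_NFS: the "unknown-block(2,0)" with error -6 is what appears when the NFS root mount fails outright and the kernel falls back to a nonexistent floppy device, and the repeated mountd lines suggest the mount handshake starts but something after it fails each time.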


  • Why does the screen resolution of 1440x900 suddenly disappears from Intel GMA Control Panel?

    - by GeneQ
    I'm using a Vostro 1200 laptop with the Mobile Intel(R) 965 Express Chipset powering its graphics, running Vista 32-bit SP2. I've been using the Vostro with a Dell SE198WFP LCD monitor as the external display since day one, for about two years, without any problems. Recently, I plugged the Vostro into a couple of other monitors. The problem is that the native resolution of my main monitor (the SE198WFP), 1440x900 @ 60 Hz, is no longer available.

    I've tried everything from uninstalling and reinstalling the Intel drivers to reinstalling the monitor drivers, to no avail. I've Googled the problem, and it appears it has happened to other people, but all the answers involve people giving up in frustration or reinstalling; both terrible outcomes. Has anybody ever figured out why this happens, and found a good solution? Thanks.

    UPDATE: This dude has a complicated solution, which I haven't tried yet. His explanation for the problem was:

        After an exausting search for an answer to the matter of why my brand new 19″ widescreen monitor's native resolution (1440x900) was unavailible (sic) in the display properties, I finally stumbled upon an article a person posted on Intel's forums that basically explained what shannanigans Intel had been up to with their GMA 950 line of onboard graphic solutions.

    Not very comforting.


  • How to make gpg2 flush the stream?

    - by Vi
    I want some slowly flowing data to be saved in encrypted form on a device which can be turned off abruptly. But gpg2 seems not to flush its output frequently, and I get broken files when I try to read such truncated files.

        vi@vi-notebook:~$ cat
        asdkfgmafl
        asdkfgmafl
        ggggg
        ggggg
        2342
        2342

    cat behaves normally: I see the output right after the input.

        vi@vi-notebook:~$ gpg2 -er _Vi --batch
        ?pE??x...(more binary data here)....???-??....
        asdfsadf
        22223
        sdfsdfasf

    Still no data... still no output...

        ^C
        gpg: signal Interrupt caught ... exiting

        vi@vi-notebook:~$ gpg2 -er _Vi --batch > /tmp/qqq
        skdmfasldf
        gkvmdfwwerwer
        zfzdfdsfl
        ^\
        gpg: signal Quit caught ... exiting
        Quit
        vi@vi-notebook:~$ gpg2 < /tmp/qqq
        gpg: encrypted with 2048-bit ELG key, ID 78F446CA, created 2008-01-06
              (main key ID 1735A052)
        gpg: [don't know]: 1st length byte missing
        vi@vi-notebook:~$ # Where is my "skdmfasldf"?

    How do I make gpg2 handle such a case? I want it to emit enough output to reconstruct each incoming chunk of input. (Fsyncing after each output would also be beneficial as an additional option.) Should I use another tool? (I need public-key encryption.)
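
    Part of the buffering comes from gpg compressing before encrypting, so one mitigation sketch: disable compression and encrypt each chunk as its own self-contained OpenPGP message, syncing after every write; gpg will decrypt a file of concatenated messages in sequence. The log path is a placeholder:

        # sketch: one self-contained encrypted message per input line
        while IFS= read -r line; do
            printf '%s\n' "$line" | gpg2 -er _Vi --batch -z 0 >> /var/log/slow-data.gpg
            sync
        done

    Each message carries its own session-key packet, so this trades considerable space and CPU for durability: an abrupt power-off loses at most the last chunk instead of corrupting the whole stream.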


  • Methods to transfer files from Windows server to linux server

    - by Raze2dust
    Hi, I need to transfer webserver-log-like files periodically from Windows production servers in the US to Linux servers here in India. The files are ~4 MB in size each, and I get about one file per minute. I can accept about five minutes of lag between a file being written on Windows and it being available on the Linux machines. I am a bit confused between the various options here, as I am quite inexperienced in this kind of design (see the sketch after this list):

    1. Write a service in C#.NET which will periodically archive, compress and send the files over to the Linux machines. These files are pretty compressible: WinRAR can convert 32 MB of them into a 1.2 MB archive, so that should solve the network transfer speed issue. But then how exactly do I transfer files to Linux? I could mount a Linux drive on the Windows server using Samba, or I could create an FTP server, or send each file serialized as a POST request. Which would be good? I also have to minimise the load on the Windows server.

    2. Mount the Windows drive on Linux instead. I could use the mount command or I could use Samba here (what are the pros and cons of these two?), and then write the compressing and copying part in Linux itself.

    I don't trust the internet connection to be very stable, so there should be a good retry mechanism and failure protection too. What are the potential gotchas in these situations, and what other points should I be worried about? Thanks, Hari
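
    For the Windows-side transfer in option 1, one low-maintenance sketch is robocopy against the Samba-mounted share: it has built-in retry, which covers the flaky-connection worry. The paths and share names below are placeholders, and it does not compress on the wire, so the WinRAR/archive step would still sit in front of it:

        rem re-scan and copy new archives every 5 minutes; retry 5 times, 30 s apart
        robocopy C:\weblogs\outbox \\linuxbox\logdrop *.rar /MOT:5 /R:5 /W:30 /NP /LOG+:C:\weblogs\transfer.log

    If on-the-wire compression matters more than Windows-native tooling, the usual alternative is rsync with -z pulled from the Linux side via an rsync port on Windows (cwRsync and DeltaCopy are the commonly named ones), which also resumes interrupted transfers instead of restarting them.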


  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server's share points. However, a client will randomly drop a couple of the share points, although it can still see the server. For example, if I have the following share points on the Mac server:

        Share A
        Share B
        Share C
        Share D
        Share E

    the Windows client can see these shares most of the time and can access them most of the time, but at random a couple of the shares just get dropped from the client's view, so I end up with something like:

        Share B
        Share D
        Share E

    All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB:

        SMB sharing enabled
        Standalone server
        Workgroup of CORPORATE
        Allow guest access = YES
        Client connections limit = 100
        Authentication: NTLMv2 & Kerberos, and NTLM
        Code page is Latin US (437)
        This is a workgroup master browser
        WINS registration is set to "Enable WINS server" (also tried with this setting off)
        Enable virtual share points for homes: YES

    I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client:

        /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer

    I am a bit stumped as to which direction to turn to try to get this resolved. Repeated attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.

