Search Results

Search found 21717 results on 869 pages for 'setup versions'.

  • Unable to connect to sites using IIS7 Manager

    - by Phil.Wheeler
    I'm a developer who has been assigned the task of managing and configuring a new IIS7 instance on a remote server. My domain account has been added to the local Administrators group on the box, but IIS7 has been configured to accept connections only from accounts with Windows credentials. I've added my domain account to the IIS Manager Permissions for one of my sites, but I'm still unable to connect to that site, the IIS instance, or the server in general from my local machine. There's obviously a missing element in the configuration of this setup, but I don't know where to start looking. The event logs on the IIS box show audit failures for my account when trying to connect remotely via the IIS7 Manager tool on my local machine. Suggestions gratefully received.

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar: it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking", a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • Export SharePoint Wiki to PDF from the Command Line

    - by Wyatt Barnett
    We use a SharePoint wiki* at the office to serve as a knowledgebase for our IT operations. Recently we went through a disaster recovery exercise where we realized we had a key hole in our plans: how do you restore the services if your instruction manual is down because some services are offline? Anyhow, we did realize that the wiki angle was definitely something we wanted to keep, but also that we should explore a way to create offline backups of the wiki which could be easily read using common software, something we could set up without needing anything from the wiki itself. So, does anyone know of a good utility that can take a SharePoint wiki and dump it to PDF/Word/RTF/[INSERT HUMAN FRIENDLY FORMAT] easily from the command line? *Yes, there are better solutions out there. But this was easy, used existing infrastructure, and generally does what we need it to do.

    Read the article

  • Why should I use Zend_Application?

    - by Billy ONeal
    I've been working on a Zend Framework application which currently does a bunch of things through Zend_Application and a few resource plugins written for it. However, looking at this codebase now, it seems to me that using Zend_Application just makes things more complicated, and a plain, more "traditional" bootstrap file would do a better job of being transparent. This is even more the case because the individual components of Zend, such as Zend_Controller and Zend_Navigation, don't reference Zend_Application at all. Therefore they say things like "just call setRoute and be on your way," and the user is left scratching their head as to how to implement that in terms of the application.ini configuration file. This is not to say that one can't figure out what's going on by spelunking through the ZF source code. My problem with that approach is that it's too easy to depend on something that's an implementation detail rather than a contract, and all Zend_Application seems to do is add an extra layer of indirection that one must wade through to understand an application. I look at pre-ZF-1.8 example code, before Zend_Application existed, and everywhere I see plain bootstrap files that set up the MVC framework and get on their way. The code is clear and easy to understand, even if it is a bit repetitive. I like the DRY concept that Zend_Application gets you, but particularly because I'm assuming the first people looking at the app's code aren't really familiar with Zend at all, I'm considering blowing away any dependence I have on Zend_Application and returning to a traditional bootstrap file. Now, my concern here is that I don't have much experience doing this, and I don't want to get rid of Zend_Application if it does something particularly important of which I am unaware, or something of that nature. Is there a really good reason I should keep it around?
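
    For reference, a minimal sketch of the kind of plain, pre-Zend_Application bootstrap being described. The paths, the config section name, and the file layout here are illustrative assumptions, not taken from the poster's project:

        <?php
        // public/index.php -- a plain bootstrap in the pre-Zend_Application style.
        // Sketch only: paths and the 'production' section name are assumptions.
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(dirname(__FILE__) . '/../library'),
            get_include_path(),
        )));

        // Register the ZF1 autoloader so Zend_* classes resolve on demand.
        require_once 'Zend/Loader/Autoloader.php';
        Zend_Loader_Autoloader::getInstance();

        // Read application.ini directly instead of through resource plugins.
        $config = new Zend_Config_Ini(
            dirname(__FILE__) . '/../application/configs/application.ini',
            'production'
        );
        Zend_Registry::set('config', $config);

        // Wire up the MVC front controller by hand and dispatch.
        $front = Zend_Controller_Front::getInstance();
        $front->setControllerDirectory(dirname(__FILE__) . '/../application/controllers');
        $front->dispatch();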

    Read the article

  • Windows 7 CRC Error When Installing Fallout 3 [closed]

    - by c00lryguy
    Earlier today, I installed Fallout 3 on Windows XP perfectly fine. Then about 2 hours ago I installed Windows 7, and now I would like to install Fallout 3 there. But when I try to install Fallout 3 on Win 7, I get an error in the middle of the install:

        CRC Error: The file C:\Program Files\Bethesda Softworks\Fallout 3\Data\Video\B03.bik doesn't match the file in the setup's .cab file

    I forget the exact filename, but it is the same each time I install. The disc literally went from the DVD-ROM to the case after the first install and straight from the case back to the DVD-ROM; it's in perfect condition. My DVD-ROM drive is only about 2 months old and I've never had any problems with it. I don't understand what's going on. The user that I'm installing the game with is set as Administrator, as well.

    Read the article

  • haproxy not passing X_FORWARD_FOR on HTTP POST

    - by Mark L
    Hello, I've set up HAProxy with the 'option forwardfor' directive so it will pass the user's IP on to PHP via $_SERVER["HTTP_X_FORWARDED_FOR"]. If the request isn't a POST, the header is populated fine, but if it is a POST then it isn't populated at all. Any ideas where I've gone wrong? Thanks everyone! My whole HAProxy conf file, for reference:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            #debug
            #quiet

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 4096
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen webfarm :80
            mode http
            balance roundrobin
            option forwardfor
            server webA 192.168.240.4 weight 1 maxconn 2048 check
            server webB 192.168.240.3 weight 1 maxconn 2048 check

        listen smtp :25
            mode tcp
            option tcplog
            balance roundrobin
            server smtp 192.168.240.4:25 check
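
    As a quick way to see what the backend actually receives, a small PHP probe (a sketch, not taken from the poster's application; the filename and log target are assumptions) can log the header for both GET and POST requests:

        <?php
        // debug_xff.php -- drop on webA/webB behind HAProxy to inspect requests.
        // Illustrative sketch only.
        $xff = isset($_SERVER['HTTP_X_FORWARDED_FOR'])
            ? $_SERVER['HTTP_X_FORWARDED_FOR']  // present when HAProxy adds the header
            : '(absent)';

        // Compare the request method, the forwarded header, and the direct peer.
        error_log(sprintf(
            '%s request: X-Forwarded-For=%s REMOTE_ADDR=%s',
            $_SERVER['REQUEST_METHOD'],
            $xff,
            $_SERVER['REMOTE_ADDR']  // this will be the HAProxy box's own IP
        ));
        echo $xff;

    Hitting the probe with both a GET and a POST and comparing the log lines would show whether the header is missing at the backend or being lost somewhere in the application.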

    Read the article

  • The problems with Avoiding Smurf Naming classes with namespaces

    - by Daniel Koverman
    I pulled the term smurf naming from here (number 21). To save anyone not familiar the trouble: smurf naming is the act of prefixing a bunch of related classes, variables, etc. with a common prefix, so you end up with "a SmurfAccountView passes a SmurfAccountDTO to the SmurfAccountController", etc. The solution I've generally heard for this is to make a Smurf namespace and drop the Smurf prefixes. This has generally served me well, but I'm running into two problems.

    I'm working with a library with a Configuration class. It could have been called WartmongerConfiguration, but it's in the Wartmonger namespace, so it's just called Configuration. I likewise have a Configuration class which could be called SmurfConfiguration, but it is in the Smurf namespace, so that would be redundant. There are places in my code where Smurf.Configuration appears alongside Wartmonger.Configuration, and typing out fully qualified names is clunky and makes the code less readable. It would be nicer to deal with a SmurfConfiguration and (if it were my code and not a library) a WartmongerConfiguration.

    I have a class called Service in my Smurf namespace which could have been called SmurfService. Service is a facade on top of a complex Smurf library which runs Smurf jobs. SmurfService seems like a better name, because Service without the Smurf prefix is so incredibly generic. I can accept that SmurfService was already a generic, useless name and that taking away Smurf merely made this more apparent. But it could have been named Runner, Launcher, etc. and it would still "feel better" to me as SmurfLauncher, because I don't know what a Launcher does, but I know what a SmurfLauncher does. You could argue that what a Smurf.Launcher does should be just as apparent as a Smurf.SmurfLauncher, but I could see Smurf.Launcher being some kind of class related to setup rather than a class that launches smurfs.

    If there is an open-and-shut way to deal with either of these, that would be great. If not, what are some common practices to mitigate their annoyance?
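
    For the first problem, an import-time alias avoids both the fully qualified names and the Smurf prefix inside the class itself. The poster's language isn't stated, so this sketch uses PHP namespace aliases as a stand-in for whatever aliasing mechanism the real language offers; all names are illustrative:

        <?php
        // Illustrative sketch only: the poster's language isn't stated, and the
        // Smurf/Wartmonger Configuration classes are assumed to exist in their
        // own namespaces elsewhere.
        namespace App;

        use Smurf\Configuration as SmurfConfiguration;
        use Wartmonger\Configuration as WartmongerConfiguration;

        class JobRunner
        {
            // Both Configuration classes can now sit side by side in signatures
            // and bodies without fully qualified names cluttering the code.
            public function configure(
                SmurfConfiguration $smurfConfig,
                WartmongerConfiguration $wartmongerConfig
            ) {
                // ... wire the two configurations together ...
            }
        }

    The alias is local to the importing file, so each consumer can choose whichever short names read best in its own context.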

    Read the article

  • Windows 7 - cancel mirror synchronisation

    - by Chris W
    I've got basic-disk, OS-managed disk mirroring set up in Windows 7 for a couple of volumes. After a power failure the mirrors are currently resynching. These are only small volumes of data, but the sync has not completed after more than 24 hours. Is there any way to stop this, as it's driving me nuts? I need to get the machine back to a usable state to get some work done, but it's a bit of a dog whilst this synch is going on. I've tried removing the mirrors, but it won't let me do that whilst the re-sync is in progress.

    Read the article

  • SQL SERVER – Fix: Error: 8117: Operand data type bit is invalid for sum operator

    - by pinaldave
    Here is a very interesting error I received from a reader, who had an interesting question: he attempted to use a BIT field in the SUM aggregate function and got the following error. He tried various other datatypes (i.e. INT, TINYINT, etc.) and was able to do the SUM, but with BIT he faced this problem.

    Error received:

        Msg 8117, Level 16, State 1, Line 1
        Operand data type bit is invalid for sum operator.

    Reproduction of the error. Set up the environment:

        USE tempdb
        GO
        -- Preparing Sample Data
        CREATE TABLE TestTable (ID INT, Flag BIT)
        GO
        INSERT INTO TestTable (ID, Flag)
        SELECT 1, 0
        UNION ALL
        SELECT 2, 1
        UNION ALL
        SELECT 3, 0
        UNION ALL
        SELECT 4, 1
        GO
        SELECT * FROM TestTable
        GO

    The following script will work fine:

        -- This will work fine
        SELECT SUM(ID) FROM TestTable
        GO

    However, the following generates the error:

        -- This will generate error
        SELECT SUM(Flag) FROM TestTable
        GO

    The workaround is to convert or cast the BIT to INT:

        -- Workaround of error
        SELECT SUM(CONVERT(INT, Flag)) FROM TestTable
        GO

    Clean up the setup:

        -- Clean up
        DROP TABLE TestTable
        GO

    Workaround: as mentioned in the script above, the workaround is to convert the BIT datatype to another, friendlier datatype like INT, TINYINT, etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Cannot boot from Windows 7 DVD

    - by webnoob
    Hi All, I have just purchased Windows 7 64-bit. I put the disc in the drive and it told me I couldn't upgrade as I am using XP, so I have tried to boot from the disc instead, but it doesn't work. It seems to look at the disc for a few seconds and then ends up at a screen saying Windows failed to start; then I hit Enter and it loads Windows XP again. Does anyone know what could cause this? Here is my system info:

        Time of this report: 4/15/2010, 18:11:39
        Machine name: MYCOMP
        Operating System: Windows XP Professional (5.1, Build 2600) Service Pack 3 (2600.xpsp_sp3_gdr.100216-1514)
        Language: English (Regional Setting: English)
        System Manufacturer: Dell Inc.
        System Model: OptiPlex 755
        BIOS: Phoenix ROM BIOS PLUS Version 1.10 A09
        Processor: Intel(R) Core(TM)2 Duo CPU E6550 @ 2.33GHz (2 CPUs)
        Memory: 3316MB RAM
        Page File: 568MB used, 4631MB available
        Windows Dir: C:\WINDOWS
        DirectX Version: DirectX 9.0c (4.09.0000.0904)
        DX Setup Parameters: Not found
        DxDiag Version: 5.03.2600.5512 32bit Unicode

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample ASP.NET MVC 3 application to try deploying to your hosting provider first; to test it, first put it on a web server where ASP.NET MVC 3 is installed. In this posting I will tell you which files you need to upload to get the application running on a server where ASP.NET MVC 3 is not installed, and where you can find them. One approach is to change the reference to System.Web.Helpers.dll to be a local one so that it is copied to the bin folder of your application. The first file in the list is my web application dll, and you don't need it to get ASP.NET MVC 3 running. All the other files are located in the following folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ If more files are needed in some other scenario, please leave me a comment here. And... don't forget to convert the folder in IIS to an application.

    While developing an application locally, this isn't a problem. But when you are ready to deploy your application to a hosting provider, it might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you'll only need to include the MVC DLL. If your hosting provider is still on ASP.NET 3.5, then you'll need to deploy all three. It turns out that it's really easy to do so. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers' Medium Trust policies (it's always possible that a hosting provider customizes their Medium Trust policy to be draconian).

    Deployment is easy when you know what to copy into the archive for publishing your web site on ASP.NET MVC 3 or later versions. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site. The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree, select the aforementioned three assemblies, and in the Properties dialog set Copy Local to True. Now just right-click on your application and select Publish, which brings up the Publish wizard. Notice that in this example I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC.

    Other ASP.NET MVC 3 articles:
      - New Features in ASP.NET MVC 3
      - ASP.NET MVC 3 First Look

    Read the article

  • Are there any home/soho NAS devices that will backup/sync to the cloud?

    - by 3rdparty
    Looking for a home-office (SOHO) priced network hard drive (NAS) that will sync some or all of its content to a cloud-based backup service. The only option I've been able to find so far is NetGear's ReadyNAS Vault; however, from what I've read it's not as secure as it could be, and the service is quite expensive ($200/yr for 50GB of cloud storage). It's 'powered' by ElephantDrive. Ideally I would love to see something like Wuala integrated into a LaCie network HDD; conveniently, I suspect this is in the works, as LaCie recently acquired Wuala, but nothing has come of it yet. I know there are options to use rsync with a customizable NAS (such as the very versatile and hackable D-Link DNS-323), but the easier this is to set up and maintain, the better. Thanks! ps. I had many links posted within this question, but was limited to posting with only one due to anti-spam restrictions; gotta get my 'reputation' higher!

    Read the article

  • Security question pertaining to web application deployment

    - by orokusaki
    I am about to deploy a web application (in a couple of months) with the following set-up (perhaps, anyway): Ubuntu Lucid Lynx with:

      - IP Tables firewall (white-list style with only 3 ports open)
      - Custom SSH port (like 31847 or something)
      - No "root" SSH access
      - Long, random username (not just "admin" or something) with a long password (65 chars)
      - PostgreSQL which only listens to localhost
      - 256-bit SSL cert
      - Reverse proxy from NGINX to my application server (uWSGI)
      - Assume that my colo is secure (physical access isn't my concern for the time being)
      - Application-level security (SQL injection, XSS, directory traversal, CSRF, etc.)
      - Perhaps IP masquerading (but I don't really understand this yet)

    Does this sound like a secure setup? I hear about people's web apps getting hacked all the time, and part of me thinks, "maybe they're just neglecting something", but the other part of me thinks, "maybe there's nothing you can do to protect your server, and those things are just measures to make it a little harder for script kiddies to get in". If I told you all of this, gave you my IP address, and told you what ports were available, would it be possible for you to get in (assuming you have a penetration testing tool), or is this really protected well?

    Read the article

  • Dual boot preinstalled Windows 8 laptop with Windows 7

    - by sarathi
    I have a hard disk with a GPT partition table, and Windows 8 came pre-installed in UEFI mode. While trying to dual boot using a USB bootable disk (created with Rufus, GPT partition scheme, FAT32), it displays "Windows is loading files", then hangs while displaying "Starting Windows". So I tried to install from within Windows using setup.exe. Everything was running fine, but when the machine restarted itself it got stuck at "Starting Windows" again. When I restarted into Windows 8, it showed an .htm file stating: "Windows cannot be installed on a computer using battery power. If the battery runs out of power during the installation, you might lose data. To continue the installation, plug in the computer's power adapter." I am sure that the power adapter is connected. I have Googled a lot on this, but I haven't found a solution.

    Read the article

  • Ruby, Rails & MySQL parity between Mac Client (10.6) & XServe (10.5)

    - by Meltemi
    We're setting up a RoR environment with development on Mac OS X Client (10.6.3) and then a Mac OS X Server (10.5.8) for testing and eventually deployment. I'd like to get as many systems in sync on these machines as possible, and I'm wondering if there are any pitfalls. I seem to understand what's necessary under Client, but Server has some hardwired stuff that I want to make sure doesn't break... or is updated correctly. Currently installed on both machines we have:

      OS X Client (10.6.3):
      - Ruby 1.8.7
      - Rails 2.3.5
      - MySQL (not installed yet)

      OS X Server (10.5.8):
      - Ruby 1.8.6
      - Rails 2.3.5
      - MySQL Ver 14.12 Distrib 5.0.82

    Any suggestions... ideally from someone who's done this on Leopard Server as well, but I'll listen to general tips & procedures.

    Read the article

  • Is anyone else using OpenBSD as a router in the enterprise? What hardware are you running it on?

    - by Kamil Kisiel
    We have an OpenBSD router at each of our locations, currently running on generic "homebrew" PC hardware in a 4U server case. Due to reliability concerns and space considerations we're looking at upgrading them to some proper server-grade hardware with support etc. These boxes serve as the routers, gateways, and firewalls at each site. At this point we're quite familiar with OpenBSD and Pf, so we're hesitant to move away from that system to something else such as dedicated Cisco hardware. I'm currently thinking of moving the systems to some HP DL-series 1U machines (model yet to be determined). I'm curious to hear whether other people use a setup like this in their business, or have migrated to or away from one.

    Read the article

  • Git Project Dependencies on GitHub

    - by VirtuosiMedia
    I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git; I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also be greatly appreciated. If you need more info on my setup, just ask in the comments.

    Read the article

  • DDWRT or similar as repeater in a network.

    - by Quantumplation
    I have a friend with severe connection issues due to her wireless router being on the bottom floor of her house and the computer being a story or two away. I have several old Linksys routers lying about, one of which is currently running DD-WRT for my network. Would it be a good idea (i.e., effective) to configure one of these routers as a wireless bridge of some kind on an intermediate floor to improve her connection? Is there any specific configuration beyond the standard DD-WRT setup that I would need to do? Thanks for your help. =)

    Read the article

  • Boot disc isn't loading on MY system

    - by acidzombie24
    I am trying to update the firmware on my hard disk. I grabbed Seagate's Windows setup tool, which didn't boot into the firmware-update app, so I burned their ISO image instead. Their ISO also doesn't boot, and I vaguely remember something about Windows not recognizing my disc because of an EFI thing, though that probably has nothing to do with it. Anyway, how do I boot into the disc? I tried going into the advanced options to boot directly from the disc, and I get a blank screen. I can use Ctrl+Alt+Del, which reboots the system, but other than that the screen is blank and nothing on the disc seems to load. The disc was a 7MB ISO burnt using Windows 7's built-in ISO burner (Seagate's site suggests using it). I have no idea what to do. Do any of you guys know what my problem may be? The media is DVD-R.

    Read the article

  • Puppet nodes can't find master, EC2 public versus internal IP addresses and hosts files

    - by Blankman
    If I set up my hosts files such that they reference all other EC2 nodes using the internal IP addresses, will this work, or do I have to use the external IP addresses? Do I need to specify anything in my security group to get internal IP addresses to work? e.g. in /etc/hosts:

        ip-10-11-12-13.internal some_node_name

    If I do this, can I reference some_node_name anywhere in my scripts where I would previously have used the IP address? On my puppet agent servers, I have a reference to my puppet master like:

        public-ip-here puppet

    When I reboot my puppet agents, syslog shows they couldn't find the master, with the message:

        getaddrinfo: name or service not known

    I did get it to work by updating /etc/default/puppet and adding --server=public-ip-here to the options. From what I read, puppet will by default try using 'puppet', and I set this in my hosts file, so why wouldn't it be picking this up?

    Read the article

  • Problems installing Lync on non-domain controller

    - by Trikks
    I have two servers in this setup, AD and EX; the domain is called mydomain.net.

      - AD is a Windows 2008 Server (32-bit) with Active Directory installed
      - AD only has its own IP in its DNS server list
      - ad.mydomain.net does resolve correctly in the DNS
      - EX is a Windows 2008 R2 server that is joined to the mydomain.net domain
      - EX's only DNS server is the IP of ad.mydomain.net
      - There are no firewalls running between the two servers

    When trying to install Lync 2010 on the EX server I get the following error: "Not available: Failure occurred attempting to check the schema state. Please ensure Active Directory is reachable." I can reach the AD from EX, log in to it, and run successful checks like:

        netdom query /domain:mydomain.net fsmo

    ...which resolves correctly. I suspect there is something fundamentally wrong with my setup; maybe Lync needs a 2008 R2 AD?

    Read the article

  • IE9 Loses Some CSS After Particular Form Submit [migrated]

    - by Asherion
    The site I am editing has a search form. For the record, there are several other forms on the site, contact and the like; this is the only one with an issue. Upon submission of the form, SOME of the styling is lost in IE9 (possibly other versions of IE, I haven't tested that yet). Primarily, the margins and colors set in html and body appear to have been lost. Menus, banner, text, etc. all appear to retain their styles. All styles are on one sheet. Any helpful advice? Here are the contents of the search page, the PHP used to check for the form, and the CSS that I think is lost, if that helps.

    THE HTML:

        <div id="search">
        <br />
        <div style="float:right;font-size:.8em;">
        <form name="form_sidesearch" action="search.html" method="post">
        <input type="hidden" name="action" value="search" />
        <input type="text" name="search_value" value="<?php echo $systems_primary->search_value ?>" />
        <input type="submit" name="submit_search" value="Search Website" />
        </form>
        <br />
        </div>
        </div>
        <?php echo stripslashes($search_results);

    THE PHP:

        <?php
        // -- Begin Search --------------------------------------------------------
        if($_REQUEST["action"] === "search") {
            if(strlen($_REQUEST["pg"]) <= 0) {
                $_REQUEST["pg"] = 1;
            }
            $search_results = $systems_primary->search_website("index", urldecode($_REQUEST["search_value"]), "<div class=\"listing ui-corner-all\"><a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" class=\"listing_title\">{ENTRY_TITLE}</a>{ENTRY_CONTENT} <a href=\"{ENTRY_URL}\" title=\"{ENTRY_TITLE}\" style=\"font-size:.8em;\">...read more</a></div><br /><br />", 345, "all", 10, $_REQUEST["pg"]);
        }
        // -- End Search ------------------------------------------------------------
        ?>

    THE LOST CSS (could be more):

        html {
            background-color:#F6E6C8;
            font-size:16px;
            font:Helvetica;
        }
        body {
            width:1027px;
            margin:0 auto;
            background-color:#ffffff;
            font: arial, times new roman, sans-serif;
        }

    Read the article

  • Dock with dual external DVI monitors with Intel + Nvidia Optimus?

    - by Ryan
    I have a Dell Latitude E6420 laptop plugged into a docking station, and the dock has 2 monitors (connected with DVI). Also note that I've installed Ubuntu alongside (dual-boot) Windows 7. I can't get the dual monitors to work in both Ubuntu (either 11.10 or 12.04) and Windows 7. When I run lspci | grep VGA, I get:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: nVidia Corporation GF108 [Quadro NVS 4200M] (rev a1)

    If I then reboot and uncheck the Optimus setting in the BIOS during reboot, I'm able to get the dual monitors to work in Ubuntu 12.04 (but I need to configure them in NVIDIA Settings on every boot). When I run lspci | grep VGA, I get:

        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [Quadro NVS 4200M] (rev a1)

    But then if I reboot into Windows (leaving Optimus unchecked), Windows can't detect the external monitors, and the resolution is unacceptably low. I've seen in many forum posts that this particular graphics setup causes lots of headaches, and I haven't been able to resolve my problem yet:

      - How can I use my external display on my laptop with intel and nvidia video cards?
      - How to use external displays with Intel driver on a NVidia/Intel hybrid system
      - nVidia Optimus, Unity 3D and Dual Monitors

    "Just use VGA instead of DVI" isn't an option because my dock has only 1 VGA port (and 2 DVI). Switching the BIOS setting on every reboot and then reconfiguring the display settings every time is tedious, time-consuming, and impractical. Do you know how to make this work smoothly? Thanks for your help! P.S. see also: http://superuser.com/questions/434358/dell-latitude-e6420-dual-boot-ubuntu-windows-7-optimus-graphics-problems

    Read the article
