Search Results

Search found 27238 results on 1090 pages for 'local variable'.

  • How can I remove duplicate icons for "launched" java programs in the launcher?

    - by Tim
    When launching Java programs (like IntelliJ IDEA and CrashPlan) in Natty's Unity launcher, duplicate icons are shown (see image). For IntelliJ I created the .desktop file; for CrashPlan the .desktop file is supplied with the application. Is there something that can be changed in the .desktop files (or somewhere else) to prevent this from occurring? I couldn't find a bug report for Unity itself, but programs like GNOME Do/Docky have bug reports and had to make internal changes to prevent this. In the image, the first icon is the one created from the .desktop file and the second appears after launching the application; the second icon disappears when the application is closed.
    Custom IntelliJ .desktop file:
        #!/usr/bin/env xdg-open
        [Desktop Entry]
        Version=1.0
        Type=Application
        Terminal=false
        Icon[en_US]=/opt/idea/bin/idea128.png
        Name[en_US]=IntelliJ IDEA
        Exec=/opt/idea/bin/idea.sh
        Name=IntelliJ IDEA
        Icon=/opt/idea/bin/idea128.png
        StartupNotify=true
    CrashPlan-provided .desktop file:
        [Desktop Entry]
        Version=1.0
        Encoding=UTF-8
        Name=CrashPlan
        Categories=;
        Comment=CrashPlan Desktop UI
        Comment[en_CA]=CrashPlan Desktop UI
        Exec=/usr/local/crashplan/bin/CrashPlanDesktop
        Icon=/usr/local/crashplan/skin/icon_app_64x64.png
        Hidden=false
        Terminal=false
        Type=Application
        GenericName[en_CA]=
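    A fix that is commonly suggested for this (not part of the original post) is to add a StartupWMClass entry to each .desktop file so Unity can match the running window to the launcher entry; the class value must be read from the actual window, so the one shown below is only a placeholder:
        # Run this, then click the IDEA window to print its WM_CLASS
        xprop WM_CLASS | awk -F '"' '{print $4}'
        # Add the reported value to the .desktop file, e.g.:
        #   StartupWMClass=jetbrains-idea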

    Read the article

  • Make Exchange 2007 use the correct SSL certificate

    - by Neil
    I have an SBS 2008 server, contososerver.contosodomain.local, which is externally accessible as remote.contoso.com, with an SSL certificate for the external name installed using the SBS 2008 wizard. This works great for OWA because IIS serves the remote.contoso.com certificate. I also want to turn on external POP3/IMAP4/SMTP; however, when I try, I am served the internal certificate that SBS generated automatically (using its internal CA), which has the alternate names remote.contoso.com, contososerver.contosodomain.local and contososerver. I tried removing this certificate from Exchange, but it won't let me because it needs it for its internal receive connector. So how do I tell Exchange 2007 to use the real certificate for external POP3/IMAP4/SMTP?
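    The usual way to handle this (a sketch, not from the question) is to leave the self-signed certificate in place for the internal connector and bind the third-party certificate to the client protocols from the Exchange Management Shell; the thumbprint below is a placeholder:
        # List certificates so you can copy the thumbprint of the remote.contoso.com certificate
        Get-ExchangeCertificate | Format-List Thumbprint,Subject,CertificateDomains,Services
        # Bind that certificate to POP3/IMAP4/SMTP (add IIS too if desired)
        Enable-ExchangeCertificate -Thumbprint "0123456789ABCDEF..." -Services "POP,IMAP,SMTP"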

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. The project is mainly about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world.
    Facts: there will be 2-3 developers; at least one developer uses Windows, the rest use Linux; there is one remote Linux-based machine, which should handle the test and production instances; after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on); the client is internal, with frequent business logic changes, Scrum, and daily deployments.
    What I want to achieve is a good workflow at as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far:
    Developers code locally; there is a Vagrant instance on every development machine, managed by Puppet, containing the same Linux, Jenkins and Tomcat versions as the production machine. While coding, the developer deploys to the Vagrant machine. After a local merge to the test branch, Jenkins on Vagrant runs the tests. When everything is fine, the developer pushes commits and merges. Jenkins on the remote machine pulls the commit from the test branch, runs tests and so on; if everything looks green, Jenkins deploys to the test Tomcat instance. Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client.
    Now, the real question: does the above make any sense? Things that I'm not sure about: Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat? Using Vagrant to develop in a PHP environment makes sense, but isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will behave the same on every machine? Does having a local Jenkins on Vagrant make sense?
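    A tiny sketch of the step each Jenkins instance (local Vagrant or remote) would run per push - these are assumptions, not from the post: the project builds with Maven and deploys by copying the WAR into a Tomcat webapps directory:
        mvn clean verify                                # compile and run the test suite
        cp target/myapp.war /var/lib/tomcat7/webapps/   # hypothetical paths; Tomcat hot-deploys the WAR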

    Read the article

  • Set up Windows SBS DNS server and VPN clients from a branch office

    - by mn
    I have some clients in a branch office that connect via VPN to the main office. The branch-office router assigns addresses via DHCP from 192.168.1.0/255.255.255.0, and the remote gateway assigns VPN IP addresses from 10.10.20.0/255.255.255.0. There is a DNS server (Active Directory, Windows SBS 2000), and the VPN clients are registered with their VPN addresses (10.10.20.0/255.255.255.0, domain company.com.pl). I would also like to register the primary branch subnet 192.168.1.0/255.255.255.0 under a domain such as company.vpn.local, so that I can reach VPN hosts, for example dev3.company.pkb.local and dev3.company.com, through my Windows SBS 2000 DNS server.
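    A sketch of how the extra zone could be created on the SBS DNS server (dnscmd should be available via the Windows 2000 Support Tools; the zone name mirrors the question but the record values are placeholders to verify):
        rem create a forward lookup zone for the branch-office names
        dnscmd /zoneadd company.vpn.local /primary /file company.vpn.local.dns
        rem add a host record for a branch machine (address is a placeholder)
        dnscmd /recordadd company.vpn.local dev3 A 192.168.1.53
        rem optional reverse zone for the branch subnet so PTR lookups resolve
        dnscmd /zoneadd 1.168.192.in-addr.arpa /primary /file 1.168.192.in-addr.arpa.dns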

    Read the article

  • Using gitlab behind Apache proxy all urls are wrong

    - by Hippyjim
    I've set up GitLab on Ubuntu 12.04 using the default package from https://about.gitlab.com/downloads/. As I had Apache installed already, I have to run nginx on localhost:8888. The problem is that all images (such as avatars) are now served from that URL, and all the checkout URLs GitLab gives also use it, instead of my domain name. If I change /etc/gitlab/gitlab.rb to use that URL, then GitLab stops working and gives a 503. Any ideas how I can tell GitLab what URL to present to the world, even though it's really running on localhost?
    /etc/gitlab/gitlab.rb looks like:
        # Change the external_url to the address your users will type in their browser
        external_url 'http://my.local.domain'
        redis['port'] = 6379
        postgresql['port'] = 2345
        unicorn['port'] = 3456
    and /opt/gitlab/embedded/conf/nginx.conf looks like:
        server {
          listen localhost:8888;
          server_name my.local.domain;
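    A sketch of the usual reverse-proxy answer (assumptions: mod_proxy and mod_proxy_http are enabled, GitLab's bundled nginx stays on localhost:8888, and git.example.com stands in for the real public name). The public vhost proxies to the local port, and external_url carries the public name so GitLab generates links with it:
        <VirtualHost *:80>
            ServerName git.example.com
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:8888/
            ProxyPassReverse / http://127.0.0.1:8888/
        </VirtualHost>
        # /etc/gitlab/gitlab.rb:
        #   external_url 'http://git.example.com'
        #   nginx['listen_port'] = 8888   # depending on the omnibus version, this keeps the bundled
        #                                 # nginx off port 80, which is likely what caused the 503
        # then: sudo gitlab-ctl reconfigure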

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200 GB) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90 kbyte/s upload). I know almost all files are identical, and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already-transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the file as a whole again instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):
        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --ignore-times --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/ | \
            c:\cygwin\bin\tee.exe --append rsync.log
    I experimented with the following parameters in varying combinations, but that didn't help either: --checksum --partial --partial-dir=/tmp/.rsync-partial --compress
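    Worth noting (not from the original post): the delta algorithm can only save bandwidth when a file at the same relative path already exists on the receiving side to act as the basis, and --ignore-times additionally forces rsync to re-examine every file. A run like the sketch below - assuming the destination module really does contain the identical files at the same paths - should skip matching files based on checksums instead of re-uploading them:
        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --checksum --partial --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/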

    Read the article

  • How to detect the type of a device connected to my router?

    - by molly
    I have an AT&T router, and there is an unknown device connected to my network. I can't seem to kick it off because of how AT&T's router settings are designed, which is kind of dumb. I am able to see its local IP and MAC address. I am on a Mac with Snow Leopard. How can I get more information on the device with the information that I have? I want to see what kind of device it is. I have checked all the devices that are connected to the router and none seem to match the local IP that is connected. I have WPA encryption set up with a strong password.
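    A sketch of what can be done from the Mac with just an IP and MAC (nmap is an extra install and not part of the original question; the address below is a placeholder):
        arp -a                        # list IP-to-MAC mappings the Mac has seen on the LAN
        # Look up the first three octets of the MAC (the OUI) in the IEEE OUI registry to get
        # the hardware vendor, which often narrows down the device type.
        # With nmap installed, a scan can guess the OS and open services:
        # sudo nmap -O 192.168.1.23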

    Read the article

  • SonicWall HA "gotchas"?

    - by Mark Henderson
    We're looking to move away from pfSense and CARP to a pair of SonicWall NSA 2400s [1] configured in Active/Passive for High Availability. I've never dealt with SonicWall before, so is there anything I should know that their sales guy won't tell me? I'm aware that they had an issue with a lot of their devices shutting down connectivity because of a licensing fault, and they have an overly complex management GUI (on the older devices at least), but are there any other big "gotchas" that I need to be aware of before committing a not insubstantial amount of money towards these devices?
    [1] If you're outside the US, the SonicWall global sites suck balls. Use the US site for all your product research, and then use your local site when you're after local information.

    Read the article

  • SSH connection falling down

    - by kappa
    I've set up a connection with autossh that creates some tunnels at system startup, but when I try to connect, after a successful login (with an RSA key) the connection drops. Here is a trace:
        debug1: Authentication succeeded (publickey).
        debug1: Remote connections from LOCALHOST:5006 forwarded to local address localhost:22
        debug1: Remote connections from LOCALHOST:6006 forwarded to local address localhost:80
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: remote forward success for: listen 5006, connect localhost:22
        debug1: remote forward success for: listen 6006, connect localhost:80
        debug1: All remote forwarding requests processed
        debug1: Sending environment.
        debug1: Sending env LANG = it_IT.UTF-8
        debug1: Sending env LC_CTYPE = en_US.UTF-8
        debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
        debug1: client_input_channel_req: channel 0 rtype [email protected] reply 0
        debug1: channel 0: free: client-session, nchannels 1
        Transferred: sent 2400, received 2312 bytes, in 1.3 seconds
        Bytes per second: sent 1904.2, received 1834.4
        debug1: Exit status 1
    What can be the problem? All this stuff is managed by a script already running on another machine (creating reverse tunnels on the same machine but with different ports).
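    One reading of that trace (an assumption, not a confirmed diagnosis): the session sets up its forwards and then ends with "Exit status 1" because a remote command/shell was requested, ran, and exited. For tunnel-only connections the client is normally started with -N so that nothing has to keep running on the far side; a sketch with placeholders for user and host:
        autossh -M 0 -f -N \
            -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
            -R 5006:localhost:22 \
            -R 6006:localhost:80 \
            user@remotehost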

    Read the article

  • How to connect with MySQL server if it won't connect via the socket?

    - by cwd
    I have an account on a shared server. I have jailshell access and also phpMyAdmin. I want to run mysql commands via SSH, but I'm getting an error:
        $ mysql -u mySqlUser -p mySqlPw
        Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'
    I can connect with PHP and phpMyAdmin, so would it be possible to call mysql from the shell and have it connect via an IP and port instead of the socket? The file /var/lib/mysql/mysql.sock does not exist - maybe that is intentional - and the only thing in /etc/my.cnf is
        [mysqld]
        skip-innodb
    More info: I don't have access to change system settings. I did a search in /var for mysql.sock but found nothing. However, phpMyAdmin might be connecting via a socket somehow. Really, it would just be great if I could connect via IP. I also tried these two syntaxes:
        $ mysql -u mySqlUser -p mySqlPw -h localhost
        $ mysql -u mySqlUser -p mySqlPw -h localhost -P 3306
    Both with the same result:
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
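    A sketch of forcing a TCP connection (with the MySQL client, a host of "localhost" still means the Unix socket, so 127.0.0.1 or --protocol is needed; also note that -p takes the password with no space after it, otherwise the next word is read as the database name):
        mysql -u mySqlUser -p -h 127.0.0.1 -P 3306 --protocol=TCP
        # If the server only listens on the socket (e.g. skip-networking is set), this will still
        # fail; the socket path phpMyAdmin uses can then be passed explicitly (placeholder path):
        # mysql -u mySqlUser -p --socket=/path/to/mysql.sock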

    Read the article

  • Mysql Servers for Attendance System

    - by foo
    I'm building an attendance system. There are about 20 locations where people will check in and check out using a Mifare 1K card. It will use MySQL as the database. The system will display something like "#ID IN: 8:00AM" the first time the user checks in and "#ID OUT: 4:00PM" when the user checks out. For this to work, all the databases need to be synchronized with each other at all times. For example, if user A goes to location #1 to check in, but by the time he wants to return home the server at location #1 is down, he needs to go to location #2 or the nearest server to check out. The server at location #2 should display "#ID OUT: 4:00PM" and not "#ID IN: 4:00PM", since he has already checked in. So, what should I use to make this idea work? My main concern is the network (another department manages it), which is very unpredictable. It just loves to go down whenever it wants.
    Update: I didn't realize my question wasn't clear until you pointed it out, sorry about that. My real question is: how can I configure MySQL so that all 20 servers stay synchronized with each other? MySQL Cluster? (I tried reading about it, but I'm not sure it's the right thing to do.)
    My current setup (first phase): a local database on each server (OS: Slackware); a main server that keeps track of which staff member is at which server; a web-based front end for users to see their history (which connects to the server based on their records).
    Main pros: no worries about network problems, since each database is local.
    Main cons: a user can only check in and out at the same server; the databases/servers are not connected to each other; I have to add the user to each server if they want to check in at different locations. That means if a user wants to go to location B, he must check out at location A first and then check in at location B - the server at location B doesn't know the user already checked in at A.
    By the way, I've already pointed NTP at a local server. About the network: let's just say I don't have the authority to make changes to improve it. Network problems don't affect all 20 servers at once - usually just a few of them, several times a week. If there is anything else you'd like me to answer, please just ask.
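    A minimal sketch of the piece most answers to this start from - MySQL replication - with illustrative values that are not from the question (server-id must differ on every box; whether you run one writable master with read replicas, circular/master-master replication, or MySQL Cluster is the real design decision given the unreliable links):
        [mysqld]
        server-id      = 1            # unique per server
        log_bin        = mysql-bin    # record changes so other servers can replay them
        binlog_do_db   = attendance   # hypothetical database name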

    Read the article

  • cannot add a user to sysadmin role in SQL Server

    - by George2
    Hello everyone, I am using SQL Server 2008 Management Studio. The current logon account belongs to the machine's local Administrators group, and I am using Windows integrated security mode in SQL Server 2008. My issue is: after logging into SQL Server Management Studio, I select my login name under Security/Logins, then select the Server Roles tab, then check the last item -- sysadmin -- to make myself a member of this role, but it says I do not have enough permission. Any ideas what is wrong? I think a local administrator should be able to do anything. :-) Thanks in advance, George
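    A likely explanation plus a sketch (assumptions flagged inline): unlike SQL Server 2005, SQL Server 2008 does not make BUILTIN\Administrators a sysadmin by default, so being a local administrator grants nothing inside the instance. The role has to be granted by an existing sysadmin - or, failing that, by starting the instance in single-user mode (-m), which lets a local administrator connect with sysadmin rights - for example:
        rem run as an account that already has sysadmin on the instance; server and login names are placeholders
        sqlcmd -S . -E -Q "EXEC sp_addsrvrolemember 'CONTOSO\George', 'sysadmin';"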

    Read the article

  • Changing Windows 'hosts' file in guest OS under Parallels Desktop 6

    - by Jan
    Hi all, I am running Windows 7 in Parallels Desktop 6 on a Mac. I would like to modify my Windows hosts file. When I try to do this through Notepad, it says "You don't have permission to save in this location..." I am logged on as a regular Windows user, not as a local admin. How can I edit the file? How can I grant my regular user local admin rights? How can I change the Windows user to an admin? That option seems to be missing in my Windows install... Does anybody recognize the issue? Thank you! J.
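    A sketch of the two usual steps on the Windows side (the account name "jan" is a placeholder; both commands assume you can open one elevated prompt, e.g. by right-clicking Command Prompt, choosing "Run as administrator" and supplying an administrator's credentials at the UAC prompt):
        rem edit the hosts file from the elevated prompt
        notepad C:\Windows\System32\drivers\etc\hosts
        rem optionally make the regular account a local administrator so future edits only need a UAC confirmation
        net localgroup Administrators jan /add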

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. a piece of code (a block or a function) that:
        1. can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, and be defined in one part of a program and invoked in a totally different part of the same program;
        2. can capture variables from the context in which it is defined, and access them when it is later invoked (possibly in a totally different context).
    Here is an example of a closure written in Scala:
        def filterList(xs: List[Int], lowerBound: Int): List[Int] = xs.filter(x => x >= lowerBound)
    The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function.
    I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads:
        In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state.
    and further on:
        [...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming.
    On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side-effects).
    Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic?
    IMPORTANT NOTE: I am not questioning the usefulness of side-effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages/disadvantages of closures with or without side effects. I am only interested to know whether using such closures is still considered functional style by proponents of functional programming or if, on the contrary, their use is discouraged when using a functional style.
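    For concreteness, a small REPL-style Scala sketch (mine, not from the question) of the second kind of closure - one that captures and mutates a variable from its defining scope:
        var count = 0                           // non-local, mutable state
        val countingFilter = (x: Int) => {      // the closure captures `count`
          count += 1                            // side effect on the enclosing environment
          x >= 10
        }
        List(5, 12, 42).filter(countingFilter)  // returns List(12, 42)
        // count is now 3: the closure is still a first-class value, but it is no longer
        // referentially transparent, which is exactly the tension the question asks about.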

    Read the article

  • Where does the temporary Flash file get stored when I am viewing it in Firefox?

    - by Nishant
    I am watching a lecture and it seems to be Adobe Flash... I want to save the video that I am viewing. The website I am checking is http://cs75.tv/2009/fall/ and I am using Firefox. Don't know if this info helps, but my about:cache result is this:
        Memory cache device
            Number of entries:    212
            Maximum storage size: 13312 KiB
            Storage in use:       8087 KiB
            Inactive storage:     6819 KiB
            List Cache Entries
        Disk cache device
            Number of entries:    3224
            Maximum storage size: 500000 KiB
            Storage in use:       26066 KiB
            Cache Directory:      C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\Cache
            List Cache Entries
        Offline cache device
            Number of entries:    0
            Maximum storage size: 512000 KiB
            Storage in use:       0 KiB
            Cache Directory:      C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\OfflineCache
            List Cache Entries

    Read the article

  • Explaining Git to someone new to revision control

    - by MaxMackie
    I've recently decided to jump into the world of revision control to work on some open source projects I have. I looked around (Subversion, Mercurial, Git, etc.) and found that Git seemed to make the most sense to me conceptually. I've set everything up on my computer (openSUSE) and made an account on Gitorious (let me know if there is a simpler/better hosting provider). I understand Git from a conceptual point of view (work locally, commit to a local repo, others can now check out from you, right?). But where does Gitorious come into play? Do I commit to them as well as committing locally? Beyond the concepts, I don't quite understand HOW it works when it comes to making a local repository - running git init inside a folder, and that HEAD file. Keep in mind I have never used any form of revision control before, so even the most basic concepts are foreign to me. As I post this, I'm also reading up and trying to figure it out myself.
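    A minimal sketch of the local-plus-hosted flow the question is describing (the project name and Gitorious URL are placeholders):
        git init                                    # turn the current folder into a repository; this creates .git/, including the HEAD file
        git add .
        git commit -m "first commit"                # history so far lives only in the local repository
        git remote add origin git@gitorious.org:myproject/myproject.git
        git push origin master                      # publish the local commits to the hosted copy
        # collaborators then start from: git clone git@gitorious.org:myproject/myproject.git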

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?
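    What is actually happening (background, not from the post): raw.github.com serves files as Content-Type: text/plain and tells browsers not to sniff the type, so Chrome refuses to execute them regardless of the .js extension or the script tag's type attribute - that is the "strict MIME type checking" in the error. The usual fix is to serve the file from your own site or a CDN; a sketch:
        # download once and serve it yourself (target path is a placeholder)
        curl -o js/less-1.3.3.js https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js
        # then reference the local copy:
        #   <script src="/js/less-1.3.3.js"></script>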

    Read the article

  • How should I troubleshoot a problematic wireless connection on Linux?

    - by Gearoid Murphy
    I recently purchased a Netgear 150 USB wireless dongle for use with my Xubuntu 11.10 amd64 system. Using the network-manager interface, I can see local wireless networks and enter the authentication details for my local wireless LAN. Unfortunately, the connection does not seem to work; I keep getting notifications that my wireless has disconnected (but none indicating that I've connected). When I examine syslog, it seems to indicate that I've successfully associated with the wireless switch and that DHCP has successfully acquired an IP address, but the log shows the DHCP process keeps sending requests, eventually dropping the connection. 'ifconfig wlan0' never shows the DHCP address logged in syslog. I suspect the problem lies with the USB dongle, my configuration or the wireless switch, but I am not certain how to isolate it. Can anyone provide some insight on how I should go about homing in on the cause of this problem, or verifying the functionality of the individual components? Thanks.
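    A sketch of commands that usually separate the layers (driver, association, DHCP); none of these are from the original post, and wlan0 is assumed to be the interface name:
        lsusb                                                # is the dongle detected, and which chipset does it report?
        dmesg | grep -iE 'wlan|firmware'                     # driver or firmware errors around association time
        iwconfig wlan0                                       # current association, signal level, bit rate
        sudo iwlist wlan0 scan | grep -iE 'essid|quality'    # can the access point be seen at a usable signal level?
        tail -f /var/log/syslog | grep -iE 'dhcp|wpa'        # watch the WPA/DHCP exchange while reconnecting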

    Read the article

  • Files not accessible

    - by gokul
    My system is running on a PC whose C: drive is out of space, so I tried to delete some files to free up room. I found that %TEMP% (C:\Users\Username\AppData\Local\Temp) takes a lot of space and tried to delete the files in it. But when I open the folder, it alerts me with the message:
        C:\Users\Username\AppData\Local\Temp is not accessible
        The file or directory is corrupted and unreadable.
    What should I do? Is deleting files from Temp harmful to the computer?
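    Two points that usually apply here (general background, not from the post): deleting files under %TEMP% is safe - programs recreate what they need - but "corrupted and unreadable" points at file-system damage rather than a permissions problem, so the standard first step is a disk check. A sketch, run from an elevated Command Prompt (it will offer to schedule the check at the next reboot because C: is in use):
        chkdsk C: /f
        rem afterwards, Disk Cleanup can clear temporary files without touching them by hand:
        cleanmgr /d C: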

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations with site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices and only keeping the domain controller(s) at the primary site? (Assuming upgrades are coming regardless.) Under what circumstances would you choose one setup over the other?

    Read the article

  • Using Dynamic LINQ to get a filter for my Web API

    - by Espo
    We are considering using the Dynamic.cs LINQ sample included in the "Samples" directory of Visual Studio 2008 for our Web API project, to allow clients to query our data. The interface would be something like this (in addition to the normal GET methods):
        public HttpResponseMessage List(string filter = null);
    The plan is to use the dynamic library to parse the "filter" variable and then execute the query against the DB. Any thoughts on whether this is a good idea? Is it a security problem?
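    A self-contained sketch of what the Dynamic LINQ sample does with such a filter string (this assumes the VS2008 Dynamic.cs file is compiled into the project, which provides the System.Linq.Dynamic namespace and its string-based Where overload; the Person type and the filter value are illustrative, not the poster's API):
        using System;
        using System.Linq;
        using System.Linq.Dynamic;   // from the VS2008 "Dynamic.cs" sample

        class Person { public string Name { get; set; } public int Age { get; set; } }

        class FilterDemo
        {
            static void Main()
            {
                var people = new[] {
                    new Person { Name = "Ann", Age = 34 },
                    new Person { Name = "Bob", Age = 19 }
                }.AsQueryable();

                // The client-supplied string is parsed by Dynamic LINQ's own expression parser,
                // not compiled as C#, so it cannot call arbitrary code; the practical risks are
                // expensive, unbounded queries and filtering on fields you did not mean to expose.
                foreach (var p in people.Where("Age >= 21"))
                    Console.WriteLine(p.Name);   // prints "Ann"
            }
        }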

    Read the article

  • 40k event log errors an hour: "Unknown user name or bad password"

    - by ErocM
    I am getting about 200k of these an hour:
        An account failed to log on.
        Subject:
            Security ID:     SYSTEM
            Account Name:    TGSERVER$
            Account Domain:  WORKGROUP
            Logon ID:        0x3e7
            Logon Type:      4
        Account For Which Logon Failed:
            Security ID:     NULL SID
            Account Name:    administrator
            Account Domain:  TGSERVER
        Failure Information:
            Failure Reason:  Unknown user name or bad password.
            Status:          0xc000006d
            Sub Status:      0xc0000064
        Process Information:
            Caller Process ID:   0x334
            Caller Process Name: C:\Windows\System32\svchost.exe
        Network Information:
            Workstation Name:       TGSERVER
            Source Network Address: -
            Source Port:            -
        Detailed Authentication Information:
            Logon Process:            Advapi
            Authentication Package:   Negotiate
            Transited Services:       -
            Package Name (NTLM only): -
            Key Length:               0
    The event's standard explanation text reads: "This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols. Key length indicates the length of the generated session key. This will be 0 if no session key was requested."
    On my server I changed my administrative username to something else, and since then I've been inundated with these messages. I found on http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that logon type 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention", which really doesn't shed any light on it for me. I checked the services and they are all logging on as Local System or Network Service - nothing as administrator. Does anyone have any idea how I can tell where these are coming from? I would assume this is a program that is crapping out... Thanks in advance!
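    A sketch for narrowing it down (not from the post): the event names svchost.exe as the caller with Caller Process ID 0x334, which is 820 in decimal, so the first step is to see which services are hosted in that particular svchost instance (the PID changes on every boot, so take it from a fresh event):
        tasklist /svc /FI "PID eq 820"
        rem also worth checking for anything still configured with the old "administrator" name:
        rem   scheduled tasks:   schtasks /query /v | findstr /i administrator
        rem   saved credentials: control keymgr.dll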

    Read the article

  • For an Intel CPU, will an Intel chipset and motherboard give the best performance?

    - by metal gear solid
    I'm going to purchase an Intel® Core™2 Duo Processor E7500 (3M Cache, 2.93 GHz, 1066 MHz FSB), and for the motherboard my local vendor suggests the Intel DG41RQ. He also says that if I'm buying an Intel CPU, then buying Intel's own motherboard with an Intel chipset will give the best performance. Is that true? To get good onboard graphics I'm thinking of buying an NVIDIA-chipset-based motherboard from another company such as Asus, Gigabyte or MSI - is that OK? I never play games on my PC, but I'm thinking onboard NVIDIA graphics will be better than Intel's integrated graphics for running Photoshop and watching movies. Or is it fine to buy the Intel DG41RQ motherboard as the local vendor suggests - would Intel's integrated graphics be enough for Photoshop and watching movies? If you know any other good motherboard for the Intel® Core™2 Duo Processor E7500, please let me know.

    Read the article
