Search Results

Search found 22301 results on 893 pages for 'software sources'.


  • During Vista Repair - No operating system is listed.

    - by Jack Marchetti
    After a Windows update, my brother's Gateway computer loads to "Step 3 of 3: 0%" and reboots. Safe Mode does not work. I placed a Vista DVD in the drive and rebooted. (Note: this is my Vista DVD, not the Recovery/System disc that would come with the computer. Gateway does not give you CDs anymore; I believe they store recovery on a partition, but that partition has been wiped out.) I chose "Repair Your Computer" and got a dialog box, but no operating system is listed. I'm then prompted to "Load Drivers". What drivers am I supposed to be loading here, and from where? I placed a CD in the drive to load drivers, but I don't see my DVD drive listed. All I saw was X:\Sources along with several removable media slots that were empty. On another screen I tried Startup Repair, which didn't do anything. I attempted to use System Restore, but it doesn't detect the hard drive. I'm guessing that I'm missing some sort of SATA driver and that is why the hard disk is not being found. Any ideas on this?
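
    One quick check, assuming the recovery environment on a retail Vista DVD behaves like a standard one: open a Command Prompt from the System Recovery Options menu and see whether the disk is visible at all. If diskpart lists no disks, that supports the missing-SATA-driver theory; if the disk shows up, the problem is elsewhere.

        X:\Sources> diskpart
        DISKPART> rem does the controller expose any disks at all?
        DISKPART> list disk
        DISKPART> rem is the Windows partition visible?
        DISKPART> list volume
        DISKPART> exit

    The driver the repair dialog is asking for would be the storage controller (SATA/AHCI) driver for the Gateway's chipset, loaded from a USB stick or CD.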

    Read the article

  • When is a 'core' library a bad idea?

    - by Alex Angas
    When developing software, I often have a centralised 'core' library containing handy code that can be shared and referenced by different projects. Examples: a set of functions to manipulate strings, commonly used regular expressions, common deployment code. However some of my colleagues seem to be turning away from this approach. They have concerns such as the maintenance overhead of retesting code used by many projects once a bug is fixed. Now I'm reconsidering when I should be doing this. What are the issues that make using a 'core' library a bad idea?

    Read the article

  • How to Play Classic Arcade Games On Your PC

    - by Jason Fitzpatrick
    New games with their fancy textures, 3D modeling, and immersive environments have their charm, sure, but what if you crave some old-school arcade gaming? Read on to see how you can turn your computer into a virtual arcade cabinet. Vintage games ran on hardware significantly less powerful than that found in modern desktop computers. With the right software, a joystick or two (if you want to make the experience feel more authentic), and a little digging online to find your favorite games, it’s easy to play the arcade hits of your childhood.

    Read the article

  • How to Forward Ports to a Virtual Machine and Use It as a Server

    - by Chris Hoffman
    VirtualBox and VMware both create virtual machines with the NAT network type by default. If you want to run server software inside a virtual machine, you’ll need to change its network type or forward ports through the virtual NAT. Virtual machines don’t normally need to be reachable from outside the virtual machine, so the default is fine for most people. It actually provides some security, as it isolates the virtual machine from incoming connections.
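
    With VirtualBox, for example, you can add NAT port-forwarding rules from the command line with VBoxManage. A sketch, assuming a VM named "MyServer" running a web server on guest port 80 and SSH on port 22 (the VM name and port numbers are just examples):

        # host port 8080 -> guest port 80; host port 2222 -> guest port 22
        # rule format: name,protocol,[host ip],host port,[guest ip],guest port
        VBoxManage modifyvm "MyServer" --natpf1 "web,tcp,,8080,,80"
        VBoxManage modifyvm "MyServer" --natpf1 "ssh,tcp,,2222,,22"

    After that, clients connect to the host's IP on port 8080 (or 2222) and the virtual NAT relays the traffic into the guest.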

    Read the article

  • Compelling Reasons For Migrating to Oracle Database 11g

    - by Margaret Hamburger
    IDC's white paper Maximizing Your Investment in Oracle Application Software: The Case for Migrating to Oracle Database 11g describes compelling business, operational, and technical reasons for Oracle application customers to upgrade to Oracle Database 11g. In researching this paper, IDC found that the upgrade process is smooth, the latest version offers key benefits over older versions, and the new features and procedures are easy to learn and well worth implementing. The paper highlights users of Oracle applications, such as Oracle's PeopleSoft Enterprise applications, Siebel Customer Relationship Management (CRM) applications, and Oracle E-Business Suite. It includes a review of Oracle Database 11g improvements and new features along with best practices from Oracle users who upgraded to Oracle Database 11g.

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software.  But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio.  “So, how does that work?” you may wonder.  Well, let me share some of the details of how we do it where I work… The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated.  My preference is to write all my code by hand, which forces you to become better at your SQL syntax.  But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system.  You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer and choosing Tasks, Generate Scripts (see figure 1 to the left).  You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index.  In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change.  I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn’t somehow lose a portion of the change. As you can imagine, all this generating of scripts can get tedious, and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program. So, now that you have all of these script files, what do you do with them?  Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system.  ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution.  Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database. Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control.  Of course, this is where the lack of source control integration within SSMS becomes an irritation, because it means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.
And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, make the change, save it, and then go back to source control to check in.  Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide.  It is worth the effort, and this is how I have been doing development for the last several years. Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing.  And now I have shown you how you can do it all without spending any extra money.  You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn’t it?), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development. * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that followed by a section that assigns permissions.  This allows me to run the same script regardless of whether the procedure previously existed in the database.  If the script were only an ALTER PROCEDURE, it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it in if it did not exist.  There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD.  If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you have effectively released untested code into PROD.
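
    For illustration, a minimal sketch of such a script file in the drop-and-recreate layout described above, using made-up object and role names (dbo.GetCustomer, AppRole):

        -- 1) Drop the procedure if it already exists, so the script is rerunnable
        IF EXISTS (SELECT 1 FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'dbo.GetCustomer') AND type = N'P')
            DROP PROCEDURE dbo.GetCustomer;
        GO

        -- 2) Recreate it with the current definition
        CREATE PROCEDURE dbo.GetCustomer
            @CustomerID int
        AS
        BEGIN
            SELECT CustomerID, Name
            FROM dbo.Customer
            WHERE CustomerID = @CustomerID;
        END
        GO

        -- 3) Reassign permissions, since the DROP removed any existing grants
        GRANT EXECUTE ON dbo.GetCustomer TO AppRole;
        GO

    The same file runs cleanly whether or not the procedure already exists, which is exactly what lets one script travel unchanged from DEV through TEST to PROD.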

    Read the article

  • Broadcom STA driver doesn't work well with BCM4313

    - by Oli
    Following on from my other question about our new Samsung Q330, I've noticed that the wireless is incredibly flaky. It can connect, but after a little use, especially if it does a lot of downloading at once (read: install something from the Software Centre), the connection stops working. Network Manager still sees the connection; there's just no network throughput. I've run simple tests like pinging other local network hosts and they just fail. The Samsung Q330 has a Broadcom BCM4313 wifi card (its proper ID: 14e4:4727) and it's running on the Broadcom STA driver that Jockey suggests (it didn't work at all without this). I did try installing b43-fwcutter but this just didn't do anything. I was expecting a configuration screen to come up (to select a firmware) but it never did. This page suggests the newer brcm80211 driver might be able to help, but I don't know how to install that. If you think this is the right route, please let me know how one goes about installing it.
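
    A hedged sketch of one way to try brcm80211, assuming a 10.10-era install where the driver shipped in the wireless backports package (the exact package name is an assumption; adjust it to whatever your release actually provides):

        # install the backported wireless drivers for the running release
        sudo apt-get install linux-backports-modules-wireless-$(lsb_release -sc)-generic
        # swap drivers for this session: unload the STA driver, load brcm80211
        sudo modprobe -r wl
        sudo modprobe brcm80211

    If that behaves better, blacklist wl (e.g. a "blacklist wl" line in /etc/modprobe.d/blacklist.conf) so the STA driver doesn't grab the card again on the next boot.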

    Read the article

  • Where to find clients who are willing to pay top dollar for highly reliable code?

    - by Robin Green
    I'm looking to find clients who are willing to pay a premium above usual contractor rates, for software that is developed with advanced tools and techniques to eliminate certain classes of bugs. However, I have little experience of contracting, and relatively few contacts. It's important to state that the kind of tools and techniques I'm thinking of (e.g. formal verification) are used commercially extremely rarely, as far as I'm aware. There is kind of a continuum of approaches to higher reliability, with basic testing and basic static typing at one end and full-blown formal verification at the other, but the methods I'm thinking of are towards the latter end of the spectrum.

    Read the article

  • Getting a Conexant CX23885 TV Capture Card working

    - by Benny
    I'm new to Linux, and am trying to get my capture card working on 11.04. The only command I know to run to find out any information is lspci, which tells me that I have: 02:00.0 Multimedia video controller: Conexant Systems, Inc. CX23885 PCI Video and Audio Decoder (rev 04). I've looked at using Me TV, but haven't worked out how to configure it for my card, or what I need to do to get it running. I'm not fussed about which software I use to run the capture card, but I currently have only Me TV installed. Edit: When I run tvtime, I get the following errors:
        videoinput: Cannot open capture device /dev/video0: No such file or directory
        mixer: find error: Success
        mixer: Can't open mixer default, mixer volume and mute unavailable.
        mixer: Can't open device default/Line, mixer volume and mute unavailable.
        Segmentation fault
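
    The tvtime error says /dev/video0 does not exist, so the first step is to check whether the cx23885 driver loaded and created a device node at all. A few generic diagnostics (that v4l-utils is the package providing v4l2-ctl is an assumption about your release's packaging):

        ls -l /dev/video*               # does any capture device node exist?
        dmesg | grep -i cx2388          # did the driver load, and did it complain about firmware?
        sudo apt-get install v4l-utils  # provides the v4l2-ctl tool
        v4l2-ctl --list-devices         # list the V4L2 devices the kernel knows about

    If dmesg shows a missing firmware file, installing the linux-firmware package (or fetching the specific file the message names) is the usual fix.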

    Read the article

  • Ubuntu questions - important

    - by asdasd
    They had installed some modified Edubuntus at school... so I have some questions about setting some things up:
    1. How can we play HD videos? They are made for Windows machines and are in .wmv format, but we need to play them in our multimedia class and don't know how - which player, which codecs, etc.
    2. How do I properly edit the /etc/apt/sources.list file? Anything we try to install via apt-get just fails with an "E: ... is not available" error. Please tell me which repositories to put in there so we can install some tools (see the sketch below).
    3. Where are viruses/trojans usually put in Ubuntu? I mean, in which directories? Our computers are behaving really slowly and we need to check for malware manually - we are not even allowed to install any kind of AV software. So tell me the usual directories and places for hiding such files, how they are hidden, how to recognize them, etc.
    4. Any other nice tricks/tips that we need to know?
    Thank you very much in advance.
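
    For question 2, a minimal /etc/apt/sources.list sketch. RELEASE is a placeholder for your actual release codename (whatever lsb_release -sc prints), since the school image's version isn't stated:

        # main Ubuntu archive, plus updates and security fixes
        deb http://archive.ubuntu.com/ubuntu/ RELEASE main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ RELEASE-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu/ RELEASE-security main restricted universe multiverse

    After saving the file, run sudo apt-get update so apt re-reads the package lists.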

    Read the article

  • Ubuntu 12.10 64bit fresh install, wireless issue!

    - by Dave
    I just installed a fresh Ubuntu 12.10 64-bit on my laptop, ran the update manager, restarted, and suddenly I can't use my wifi anymore. Ubuntu Software Center automatically installed the additional wifi driver, as you can see in my screenshot. If I mark the option "Do not use the device" and apply changes, after restarting Ubuntu my wifi is back and I can use it. This is what my terminal shows if I run iwconfig. Now, if I use Ubuntu for more than 20 minutes surfing the web, my wifi stays connected but I don't receive any signal from it. Any page I try to open simply doesn't open (just the waiting icon). If I disconnect my wifi and connect it again, same issue: it doesn't work. The only way to make it work again is to restart Ubuntu. And the same story happens again after approx. 20-30 minutes. Wifi device details: 03:00.0 Network controller [0280]: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller [14e4:4727] (rev 01) Thanks, Dave
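
    A common approach for the BCM4313 [14e4:4727] is to pick one driver stack and blacklist the other, so the proprietary wl module and the in-kernel brcmsmac module don't fight over the card. A sketch of the wl route (the reverse swap is equally worth trying if wl is the one misbehaving):

        sudo apt-get install bcmwl-kernel-source    # Broadcom's proprietary wl driver
        # keep the open driver from loading alongside it
        echo "blacklist brcmsmac" | sudo tee -a /etc/modprobe.d/blacklist.conf
        sudo modprobe -r brcmsmac bcma              # unload the open driver for this session
        sudo modprobe wl

    If the drops continue, disabling the card's power management (e.g. sudo iwconfig wlan0 power off) is another frequently suggested thing to test.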

    Read the article

  • Artificial Intelligence implemented in x86 Assembly? [closed]

    - by Bigyellow Bastion
    Okay, so I decided that for my upcoming operating system, I'll do basically everything in x86 Assembly, using only 16-bit mode. I will need to write the software to host on it once I have something up and going, and I'll definitely post the source and VM-executable file. But for now I'm stuck on the idea of implementing the AI code for some of the games I'm making to host on it. AI in Assembly is tedious, and sometimes seems almost impossible, especially complex AI (I'm talking SNES Super Mario World 2: Yoshi's Island AI here, by the way, not Pong AI). I was thinking it would be such a hassle that I'd have to bring in a higher-level language to work some of this out, like maybe C++ or C#, but then I'd have to go through more work linking it into a binary that my OS can host, and that adds unnecessary work I wanted to avoid (I don't want a complex system, I want everything as bare-bones as possible, avoiding libraries, APIs, and linkable formats for now, to make everything more directly accessible to the kernel's API).

    Read the article

  • How do you organize information?

    - by zvrba
    I have a relatively large collection of useful books (paper and electronic), [academic] papers (mostly electronic), and web bookmarks. However, I don't have an overview of the material. Currently I keep most of the electronic (PDF/DjVu) material in a single hierarchical folder and use filename search. Two questions. Do you have a similar problem, and how do you deal with it? Can you recommend some software to help with organizing bibliographic information, including web links? Easy editing of hierarchy and tags is a must. I would also like to be able to write my own comments for each entry. (My current scheme, just using the filesystem, does not provide such metadata.) A plugin for emacs would be perfect, but it's not a must (org-mode MIGHT be adequate).

    Read the article

  • Programming language features that help to catch bugs early

    - by Christian Neumanns
    Do you know any programming language features that help to detect bugs early in the software development process - ideally at compile-time or else as early as possible at run-time? Examples of well-known and effective bug-reducing features are:
    - Static typing and generic types: type incompatibility errors are detected by the compiler
    - Design by Contract (TM), also called Contract Programming: invalid values are quickly detected at runtime (through preconditions, postconditions and class invariants)
    - Unit testing
    I ask this question in the context of improving an object-oriented programming language (called Obix) which has been designed from the ground up to 'make it easy to quickly write reliable code'. Besides the features mentioned above, this language also incorporates other Fail-fast features such as:
    - Objects are immutable by default
    - Void (null) values are not allowed by default
    The aim is to add more Fail-fast concepts to the language. If you know other features which help to write less error-prone code then please let us know. Thank you.

    Read the article

  • Oracle Keeps Growing Partner Certifications with Addition of McAfee

    - by Ted Davis
    Viruses stink. Whether it’s the common cold virus, the Goatpox virus (yes, it exists), or a computer virus - you name it, viruses stink. When it comes to our computer server infrastructure, we all want to make sure our servers are secure from any malware out there. Additionally, installation of anti-virus software is a requirement by many governments and for many enterprises, both large and small. Because of the growth of Oracle Linux in their customer base, McAfee recently certified "McAfee VirusScan Enterprise for Linux" on Oracle Linux. It delivers always-on, real-time anti-virus protection for Linux environments. Its unique, Linux-based on-access scanner constantly monitors the system for potential attacks. While few viruses have been found on Linux, you can now feel secure running Oracle Linux in your infrastructure with McAfee on top. We are happy to welcome McAfee into the Oracle Linux family of certified applications.

    Read the article

  • Oracle Database 12c is here!

    - by Maria Colgan
    Oracle Database 12c was officially released today and is now available for download. Along with the software release comes a whole new set of collateral that explains in detail all of the new features and functionality you will find in this release. The Optimizer page on Oracle.com has all the juicy details about what you can expect from the Optimizer in Oracle Database 12c. There you will find the following three new white papers:
    - What to Expect from the Oracle Optimizer in Oracle Database 12c
    - SQL Plan Management with Oracle Database 12c
    - Understanding Optimizer Statistics with Oracle Database 12c
    Over the coming months we will also present an in-depth series of blog posts on all of the cool new Optimizer features in 12c, so stay tuned for that and happy reading! +Maria Colgan

    Read the article

  • High level project workflow

    - by user775060
    We are a small software company trying our hand at our second game. Since our first game's process was a living nightmare (we used a web-development workflow), I have decided to educate myself on how to manage a game project at a high level. How does your process work, from idea to launch? Preferably in situations where you have a team that needs to cooperate. I've seen these two links, which are useful in a way, but was wondering if there are better/more comprehensive ways to do this: http://www.goodcontroller.com/blog/?p=136 http://gogogic.wordpress.com/2009/02/09/symbol6-how-we-created-an-iphone-game/ All input would be infinitely appreciated.

    Read the article

  • How To Specify Bitrate, Codec and Demultiplexing for VLC Video Capture or Recording

    - by Subhash
    I capture video from an old TV tuner card - a Pinnacle PCTV - using VLC. The video is from the Composite input and the audio is from, I guess, the mixer or Line in. The command I use is: vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 :input-slave="alsa://hw:0,0" In VLC, I have enabled the Advanced Controls toolbar, which allows me to record videos when I want to. However, these videos are uncompressed - very big, and they play only with VLC. Totem throws a "Could not demultiplex stream" error. I need to convert them using WinFF to reduce their size and make them playable with Totem and other software. My question is whether I can configure the recording settings - the codecs and the bitrate - and also get the stream demultiplexed properly. If I pass any -sout parameter with the command, I get a "Segmentation fault". I use 64-bit Ubuntu 10.10.
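
    Yes - the --sout chain is where you pick the codec, bitrate and muxer at capture time. A sketch (the codec and bitrate values are only examples); note the whole chain is single-quoted, since unquoted braces being eaten by the shell is a common reason such command lines blow up:

        vlc v4l2:///dev/video0:normal=pal:width=720:height=576:input=1 \
            :input-slave="alsa://hw:0,0" \
            --sout '#transcode{vcodec=mp4v,vb=1024,acodec=mp2,ab=192}:std{access=file,mux=ts,dst=capture.ts}'

    Here transcode compresses the video to MPEG-4 at 1024 kb/s and the audio to MP2 at 192 kb/s, and std writes a properly multiplexed MPEG-TS file that players other than VLC should handle.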

    Read the article

  • Debian - starting UFW (Uncomplicated Firewall) before network interfaces are operational

    - by Tomasz Zielinski
    I want to install UFW on Debian Lenny. Everything looks straightforward except that I don't know where to plug in the UFW startup script so that it configures iptables before hax0rs can break in. I've reviewed the runlevel directories, and in /etc/rc0.d, /etc/rc6.d and /etc/rcS.d there are items like these:
        S35networking -> ../init.d/networking
        S36ifupdown -> ../init.d/ifupdown
    Runlevels 0 and 6 are for shutdown and reboot, so I guess nothing should be changed there, but runlevel S advertises itself (in its README) as something for me: "The scripts in this directory whose names begin with an 'S' are executed once when booting the system, even when booting directly into single user mode. The following sequence points are defined at this time: * After the S40 scripts have executed, all local file systems are mounted and networking is available. All device drivers have been initialized." (What bothers me is that both rc0/6.d and rcS.d point to the same networking and ifupdown scripts, but after looking at the sources I believe those scripts are smart enough to figure out where to start and where to stop networking.) Now, I think that I should plug my /lib/ufw/ufw-init into /etc/rcS.d, with a priority higher than that of ifupdown and networking, i.e. <= 38 in my /etc/rcS.d. Am I right in this "analysis"?
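
    That reasoning looks sound. A sketch of how to register it, assuming you first wrap /lib/ufw/ufw-init in an /etc/init.d/ufw script (update-rc.d manages symlinks to /etc/init.d, not arbitrary paths); the priority 37 just follows the <= 38 logic above:

        # create the rcS.d symlink at priority 37, before the S40 sequence point
        sudo update-rc.d ufw start 37 S .
        # verify where it landed
        ls -l /etc/rcS.d | grep ufw

    The exact priority is a judgment call; the key point is landing before the S40 scripts quoted in the README.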

    Read the article

  • Storing changes to multiple databases in a single centralized database

    - by B4x
    The setup: multiple MySQL databases at different locations with the same schema. The databases are in production. The motivation: we want to present information from these databases in a web interface, clearly showing which database each row originated from. We want to be able to get this data from one single source (for different reasons, one of them being pagination, which gets tricky if you use multiple sources). The problem: how do we collect data from multiple databases, storing it at a central location and clearly marking the origin of each row? We have discussed using a centralized DB that tracks changes to the production DBs, with the same schema and one additional column for origin. If possible, we would like to avoid having to make changes in the production environment. Since we can't use MySQL's replication (multiple masters to a single slave isn't allowed), what are our other options? Are there any existing solutions for something like this, or do we have to code something ourselves? Is the best solution to change the database schemas in production and add a column for origin? The idea of a centralized database isn't set in stone. If there is a solution to this that solves our other problems without a centralized DB, we can be flexible. Any help is much appreciated.

    Read the article

  • Communicate NodeJS and PHP

    - by Zenth
    I need ideas to solve this: I have an entire website in PHP (5.2) on a shared PHP server; I can only use Apache+PHP, CGI and NodeJS - no memcached, redis or other software. And I need the PHP and NodeJS scripts to communicate. My first approach is a socket connection: create a socket listener in NodeJS, connect to it with PHP, send commands, wait for the response, and close the connection (ending the PHP script). For the other direction, can I call the PHP script via an HTTP request, or using sockets again? The problem with using sockets from Node to PHP is that I CANT leave a PHP script running with set_time_limit(0) because of the fuc... server, so I need to "call" PHP some other way. NodeJS and Apache+PHP are on the same machine, and I need the fastest response time possible (sockets better than web calls). Better ideas or other solutions? Thanks!

    Read the article

  • Can AfferoGPLv3 code be used in GPLv3 code?

    - by Karel Bílek
    Can software with an AGPLv3 license be used in a GPLv3 project? Can the resulting project be GPLv3, or must it carry the special requirements of AGPLv3? I can't quite make sense of clause 13 of GPLv3, which mentions AGPLv3: "Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such." Must the resulting, combined work be AGPLv3 or not?

    Read the article

  • How Estimates Became Quotes

    - by Lee Brandt
    It’s our fault. Well, not completely, but we haven’t helped the situation any. All of what follows comes from my own experiences which, from talking to lots of other developers about it, seem to be pretty much par for the course.
    Where We Started
    When we first started estimating, we estimated pretty clearly. We would try to imagine something we’d done that was similar to the project being estimated, toss it about in our heads a bit to see how much bigger or smaller we thought this new thing was, and add or subtract accordingly. We wouldn’t spend too much time on it, because we wanted to get to writing the software. Eventually, we’d come across some huge problem that there was no way we could’ve known about ahead of time. Either we didn’t see this thing, or we didn’t realize that this particular version of a problem would be so… problematic. We usually call this “not knowing what we don’t know”. It’s unavoidable. We just can’t know. Until we wade in and start putting some code together, there are just some things we won’t know… and some things we don’t even know that we don’t know. Y’know? So what happens? We go over budget. Project managers scream and dance the dance of the stressed-out project manager, and there is nothing we can do (or could’ve done) about it. We didn’t know. We thought about it for a bit and we didn’t see this herculean task sitting in the middle of our nice quiet project, and it has bitten us in the rear end. We now know how to handle this in the future, though. We will take some more time to pick around the requirements and discover all those things we don’t know. We’ll do some prototyping, we’ll read some blogs about similar projects, we’ll really grill the customer with questions during the requirements gathering phase. We’ll keep asking “what else?” until they shove us down the stairs. We’ll take our time and uncover it all.
    We Learned, But Good
    The next time comes, and you know what happens? We do it. We grill the customer for weeks and prototype and read and research and we estimate everything down to the last button on the last form. Know what that gets us? It gets us three months of wasted time, and our estimate will still be off. Possibly off by a factor of four. WTF, mate? No way we could be surprised by something! We uncovered every particle. We turned every stone. How is it we still came across unknowns? Because we STILL didn’t know what we didn’t know. How could we? We didn’t know to ask. The worst part is, we’ve now convinced the product owner that this is NOT an estimate. It is a solid number based on massive research and an endless number of questions that they answered. There is absolutely no way you don’t know everything there is to know about this project now. No way there is anything you haven’t uncovered. And their faith in that “Esti-Quote” goes through the roof. When the project goes over this time, they might even begin to question whether or not you know what you’re doing. Who could blame them? You drilled them for weeks about every little thing, and when they complained about all the questions, you told them you wanted to uncover everything so there would be no surprises. So we set them up to fail.
    Guess, Then Plan
    We had a chance. At the beginning we could have just said, “That’s just a gut-feeling estimate, based on my past experience with similar projects.
There could still be surprises.” If we spend SOME time doing SOME discovery and then bounce that against our own past experiences, we can come up with a fairly healthy estimate. We can then help the product owner understand that an estimate is a guess. Sure, it’s an educated guess, but it is still a guess. If we get it right, it will be almost completely luck. Then we help them plan the development by taking that guess (yes, they still need the guess for planning purposes) and start measuring early and often to see if we still think we are right. We should adjust the estimate and alert the product owner as soon as we see problems (bad news does not age well), and we should be able to see any problems immediately if we are constantly measuring our pace. In lean software, we start with that guess and begin measuring cycle times immediately. Then we can make projections based on those cycle times and compare them to our estimate. This constant feedback is the best way to ensure that there are no surprises at the END of the project. There will still be surprises, but we’ll see them sooner and have a better understanding of how they will affect our overall timeline. What do you think?

    Read the article

  • Installing gdm on headless Server Edition

    - by Andrew Koester
    I have Ubuntu Server running on a headless box which, right now, is doing almost nothing but software RAID and feels a little underused. I'd like to get into using Ubuntu as a desktop a little more. What do I need to do (install, etc.) to get GNOME while keeping the box itself headless? I'm not sure which packages to install or which steps to take. I figure I'll just use X over the network (Xming or the like), but something like NX might also work.
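
    One low-footprint sketch: install just the session bits (no display manager is needed if the box stays headless) and run GNOME over SSH X forwarding. The package names are era-dependent assumptions:

        # on the server: the GNOME session without pulling in a full desktop's recommends
        sudo apt-get install --no-install-recommends gnome-session xauth
        # from a client running an X server (e.g. Xming on Windows):
        ssh -X user@server gnome-session

    NX (or a free implementation of it) is the heavier-weight alternative, and usually feels snappier over slow links because it compresses the X protocol.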

    Read the article

  • Unable to drag brush tool in Photoshop CS5 in 12.04

    - by Rodrane
    I was able to use Photoshop CS5 perfectly on Ubuntu 11.10. After upgrading to 12.04, I've noticed that when I try to use the brush tool by dragging with the left mouse button, I cannot get a continuous stroke on photos. It only affects the area I click with the mouse. In the screenshot I was holding the left mouse button the whole time, but it only affected the image once, creating dots instead of lines. At first I thought it was something to do with Wine. I reinstalled it, then deleted Photoshop's folder in my Application Data folder (Windows keeps user software options in this folder). Neither of these worked. I tried on my laptop and this strange problem exists there too.

    Read the article
