Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 476 of 1257

  • Is it bad to have a very full hard drive on a high traffic database server?

    - by MikeN
    Running an Ubuntu server with MySQL as a high-traffic production database server. Nothing else runs on the machine except the MySQL instance. We store daily database backups on the DB server. Is there any performance hit, or any other reason, why we should keep the hard disk relatively empty? If the disk is filled to 86%+ with the database and all of the backups, does that hurt performance at all? In other words, would the DB server running at 86-90%+ of capacity perform any worse than the same server running with only a 10% full disk? The total disk size on the server is over 1 TB, so even 10% of the disk should be enough for basic OS swapping and such.
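
    For reference, this is roughly how I'd check where the space is going and relieve the pressure if it turns out to matter (an untested sketch; the backup path, file pattern and remote host are placeholders, and /var/lib/mysql is just the default datadir):

      df -h /var/lib/mysql                      # how full is the data volume really?
      du -sh /var/lib/mysql /path/to/backups    # how much of that is backups vs. live data?
      # ship backups older than a day to another box instead of keeping them on the DB server
      find /path/to/backups -name '*.sql.gz' -mtime +1 \
           -exec rsync -a {} backuphost:/srv/mysql-backups/ \; -delete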

    Read the article

  • Unreasonably slow stunnel

    - by Kit Sunde
    I set up stunnel on OS X to tunnel traffic to my Django dev server, because Facebook needs HTTPS these days, but I noticed it's absurdly slow. It seems like it can only handle a single connection at a time, and even that connection is slow when I'm connecting to localhost. I've tried some performance tips found online, so my config is set up as:

      pid=
      # foreground=yes
      cert=./cacert.pem
      key=./privkey.pem
      libwrap=no
      debug=0
      socket = l:TCP_NODELAY=1
      socket = r:TCP_NODELAY=1

      [https]
      accept=8443
      connect=8000

    Is there a way to get more performance out of this, or a more suitable way of setting up HTTPS for my dev server?

    Read the article

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables. 50 of these are small two-field 'id-data' lookup tables. Several of these DBs are hosted on a shared server. I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to having too many tables. My question is: could/should the 50 two-field lookup tables be combined into a single three-field table with 'id-field_name-data' fields? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the DBs to a dedicated server at a much higher hosting cost. I don't believe my 200-table DBs are actually causing any performance issues on this shared hosting server, at least not from the user application standpoint. There are never more than 10 of these tables joined in any single query, although I have seen some very slow queries generated by phpMyAdmin on these DBs.
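
    If it helps, this is the kind of consolidation I have in mind (a sketch only; the database, table and column names are made up):

      mysql -u root -p mydb -e "
        CREATE TABLE lookup (
          lookup_type VARCHAR(32)  NOT NULL,   -- which old two-field table the row came from
          id          INT          NOT NULL,
          data        VARCHAR(255) NOT NULL,
          PRIMARY KEY (lookup_type, id)
        );"
      # old:  SELECT data FROM colors WHERE id = 3
      # new:  SELECT data FROM lookup WHERE lookup_type = 'colors' AND id = 3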

    Read the article

  • Nginx vs Apache as reverse proxy, which one to choose

    - by mhd
    Hi, this kind of question may have been asked here before, but I couldn't find one that really matches mine. I've heard that nginx performance is quite impressive, but Apache has more docs and a bigger community (read: experts) to get help from. What I want to know is how the two web servers compare in terms of performance, ease of configuration, level of customization, etc., specifically as a reverse proxy in a VPS environment. I'm still weighing the two for a Ruby web app (not Ruby on Rails) served with the Thin server. A specific answer will be much appreciated; a general answer that doesn't touch the Ruby part is okay too. I'm still a noob at web server administration.

    Read the article

  • Data recovery: nearly 1 TB of movies on a WD 3.5 TB personal cloud drive disappears with scant traces

    - by Effector Dhanushanth
    I have a great collection of movies that I had stored in a logical mesh of folders on my 3.5 TB WD personal cloud drive. I woke up one morning and found that everything on the drive was fine except for my movie collection. There were two main folders, one called "2sort" and the other "segregated". Of all the "segregated" sub-folders, only the letters C, D and 2 or 3 others remain, and the "2sort" folder, which had umpteen sub-folders amounting to more than 0.5 TB, is just gone. This is a big loss. It's a personal cloud drive with no USB port, so unfortunately I can't hardwire it to recover the files directly. I'm sure there is software out there that can help me recover my beloved movies from such an interestingly hard-to-reach device, so what software would that be? These movies were all hand-picked over the course of ten years, and I never catalogued the collection, so even just getting the list of what I lost would be enough; recovering them would be a bonus. But wouldn't they be damaged if I somehow recovered them? Still, I'm fairly certain they're all intact and the file index just got corrupted. There surely is some veil that needs to be pushed aside to reveal my movies again. What software can do that? Thanks immensely!

    Read the article

  • LDAP groups not applying to filesystem permissions

    - by BeepDog
    The system is Arch Linux, and I'm using nss-pam-ldapd (0.8.13-4) to connect it to LDAP. I've got my users and some groups in LDAP:

      [root@kain tmp]# getent group
      <localgroups snipped>
      dkowis:*:10000:
      mp3s:*:15000:rkowis,dkowis
      music:*:15002:rkowis,dkowis
      video:*:15003:transmission,rkowis,dkowis,sickbeard
      software:*:15004:rkowis,dkowis
      pictures:*:15005:rkowis,dkowis
      budget:*:15006:rkowis,dkowis
      rkowis:*:10001:

    And I have some directories that are setgid video so that the video group sticks, and they're configured g=rwx so that members of the video group can write to them:

      [root@kain video]# ls -ld /srv/video
      drwxrwxr-x 8 root video 208 Oct 19 20:49 /srv/video

    However, members of that group, say dkowis, cannot write into that directory:

      [root@kain video]# groups dkowis
      mp3s music video software pictures dkowis

    (The total number of groups dkowis is in is about 7; I redacted a few here.)

      [dkowis@kain wat]$ cd /srv/video
      [dkowis@kain video]$ touch something
      touch: cannot touch 'something': Permission denied
      [dkowis@kain video]$ groups
      dkowis mp3s music video software pictures

    I'm at a loss as to why my groups show up in getent group but my filesystem permissions are not being respected. I've tried making a new directory in /tmp and setting its group permissions to rwx, then trying to write a file in there; it doesn't work. The only time it does work is if I open it wide up, allowing o=rwx. That's obviously not what I want, and I'm not able to figure out what my missing piece is. Thanks in advance.
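
    For reference, these are the checks I'm planning to run next (a rough list; nslcd is assumed to be the caching daemon here, and the service command assumes systemd):

      id dkowis                    # the groups NSS/LDAP resolves for the account right now
      su - dkowis -c 'id'          # the groups a completely fresh login session actually receives
      getent group video           # confirm the LDAP group still lists the user
      systemctl restart nslcd      # flush nslcd's cache in case it is serving stale results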

    Read the article

  • Why does HDTune report better performing drives 2 months after installing them?

    - by Rolnik
    OK, so this is really weird. I ran HDTune on a newly set-up home-built computer in mid-November and got the following readings from my drives (MB/s):

      SSD:                        154
      RAID1:                      87
      RAID0 (software installs):  198
      RAID0 (swap drive):         98

    Today, in January, I ran HDTune (same version) and got these results, in MB/s:

      SSD:                        186
      RAID1:                      98
      RAID0 (software installs):  241
      RAID0 (swap drive):         98

    Here are more details that HDTune reports on the SSD drive:

      HD Tune: OCZ-VERTEX Benchmark
      Transfer Rate Minimum : 135.4 MB/sec
      Transfer Rate Maximum : 219.4 MB/sec
      Transfer Rate Average : 185.7 MB/sec
      Access Time           : 0.1 ms
      Burst Rate            : 187.3 MB/sec
      CPU Usage             : -1.0%

    To get to my question: why are my hard drives improving in performance? Most of my logical drives are in some form of RAID, except for the SSD. Will this performance ever deteriorate? Note that none of my drives is a hybrid drive that uses some form of SSD to enhance reads/writes on actual platters.

    Read the article

  • Problem with ubuntu 10.10 running from USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10 and created a USB drive with it, and I'm now running Ubuntu from that USB drive. But I am facing a lot of problems, and I keep wondering why it isn't as easy as Windows to do all my work in Ubuntu; I always get some error message or need to install something. This time I'm getting the following errors. I'm trying to download and install aircrack-ng, so I used the command sudo apt-get install aircrack-ng, but the installation stops with this error:

      update-initramfs: deferring update (trigger activated)
      cp: cannot stat `/vmlinuz': No such file or directory
      dpkg: error processing bcmwl-kernel-source (--configure):
       subprocess installed post-installation script returned error exit status 1
      Errors were encountered while processing:
       initramfs-tools
       bcmwl-kernel-source
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I don't even have the aptitude command installed yet. Are all these errors happening because I'm running Ubuntu from a USB drive? Is there a simple way to go to the Ubuntu Software Center and download all the required essentials in one shot, and then aircrack-ng? I could not find aircrack-ng in the Ubuntu Software Center. Can anybody give me detailed steps to solve all of the problems above? I'm frustrated with searching for updates and installations, and with some things working while others don't. Can anybody suggest how I should proceed after installing Ubuntu to run from a USB drive, so that I can use the OS like Windows: software downloads, wireless drivers, sound, video, documents, C:, D:, everything should just be there. Please, somebody help.
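
    For reference, these are the first checks I plan to try (standard Ubuntu commands, though I'm not sure how well they apply to a live-USB session):

      ls -l /vmlinuz /boot/              # the failing bcmwl post-install script expects a kernel image here
      sudo dpkg --configure -a           # retry the half-configured packages
      sudo apt-get -f install            # let apt clean up whatever is left
      sudo apt-get install aircrack-ng   # then retry; I believe the package lives in the universe repository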

    Read the article

  • How to diagnose issue between mobo, RAID, and SSD cache drive? [migrated]

    - by goober
    Background: This issue is happening on my custom-built desktop. Relevant specs:

      Motherboard: ASUS P8Z68-V PRO, using Intel RST (an application that uses an unused SSD as a cache)
      Processor:   Intel Core i7-2600K (not overclocked)
      HDDs:        RAID1 of 2x Seagate Barracuda 1TB (ST31000524AS), RAID performed via the Z68 chipset

    The machine has run fine for about a year with no issues and has been well maintained (dust, etc.).

    What happened: I started getting random, intermittent freezes. I looked at the RST application screen and saw the acceleration cache listed as "unavailable", with a recommendation to power down and reconnect the drive. Now the freeze happens whenever something particularly data-driven is loading (a video, a game, etc.).

    Steps attempted:

    - Reconnected the drive, to no avail.
    - Updated the Intel RST software to v. 11.6.0.1030 to see if that made a difference.
    - Attempted to move the drive to another SATA port; the acceleration option disappeared from the RST software.
    - Connected the drive as its own volume, formatted it, and ran disk checks -- all seems fine.
    - Reconnected the drive and selected it again as the cache drive.

    Now, what happens when there is a freeze:

    - The machine freezes and I am unable to perform any command.
    - The screen then goes black.
    - I hit the reset button. During boot, all drives show as "Disabled" and I am told no volume can be found.
    - I hit the reset button (or power off/on) again. Either the next time, or sometimes after repeating this once more, the metadata cache is reconstructed and the system boots fine, showing the SSD as a cache.

    Question: I believe this is an issue with the SSD itself, but how can I be sure, since connecting it separately appeared to show no problems? I want to make sure it's not an issue with the motherboard, SATA ports, etc.

    Read the article

  • Recommendations for good Unix MTA / groupware solutions? [closed]

    - by Jez
    Possible Duplicate: Exchange server replacement that runs on Linux

    I'm setting up a Debian server, and one of the things I need on it is an MTA. I don't want to use something like Exim or Postfix, because I want something that ties SMTP, POP3, and IMAP together in one package (a la Microsoft Exchange). Most MTAs also seem to be hellishly difficult to configure. Try reading the Exim documentation; you could do a university degree on it (I'm not kidding). When you can get an HTTP server like Cherokee, which is easy to configure and has a nice web interface, do MTAs or groupware solutions need to be that hard? I'm aware that some people think "the Unix way" is to have lots of different interacting pieces of software (like maybe an SMTP MTA, POP3 service, webmail service, and an overarching manager to tie them all together), but I think this is a situation where that just makes things a lot harder to deal with, and one large software suite fits in much more nicely. So, I'm looking for good open source software suites that will run on Debian and that:

    - Combine (at least) SMTP, POP3, and IMAP
    - Are easy(ish) to configure
    - Have a nice configuration web interface or GUI
    - Are not defunct projects

    I don't mind if it's groupware and offers calendaring too, but I would only be using the e-mail functionality for now. Another nice-to-have would be built-in webmail (if we're combining a bunch of functionality, why not?). Note, however, that I do NOT need Outlook support; I am not really looking for a drop-in Exchange replacement. The suites I've found so far that seem to match the above criteria (and have appropriate licenses) are Citadel, Kolab, and Zimbra. I'd appreciate anyone who has experience with any of these giving me their pros and cons, such as how easy they are to configure and what their performance is like. I'd also appreciate any other suggestions for solutions that fulfil my criteria that I may have missed.

    Read the article

  • Application that will identify percentage of your system disk bandwidth used on a user-application by user-application basis?

    - by Warren P
    I always (subjectively) feel my computer is far too slow (however fast it is), so I'm always looking for ways to measure and understand what my computer is actually doing that makes it seem "slow" to me. It has been my observation that my software-developer workload is most often disk-bound (I am waiting for disk I/O) rather than CPU-bound. What has made it worse is that I am using a corporate PC that has in-memory, active-scanning anti-virus software that I do not have control over, as well as some IT-department-mandated services that seem to suck up a lot of the available hard-disk bandwidth. The best tool I have seen (in Windows 7) is the Resource Monitor, which I usually access from the button in Task Manager. The disk I/O page, however, seems to label disk activity at a very low level (for example, showing Volume Shadow Storage, which is flushing information obviously written by something ELSE other than VSS itself, and writes to pagefile.sys, which are obviously due to virtual-memory faults in some application). What I would like to know is whether a utility exists that can add up all direct disk input and output by user-level process, or find the process or service that caused the VM or VSS activity. In that way, I hope, you could establish a real idea of how much of your computer's precious disk-subsystem bandwidth is attributable to a particular application. Here's a scenario: MyApp.exe writes 100k/s and reads 100k/s directly; VSS ends up writing another 100k/s; page faults caused inside MyApp.exe cause another 100k/s of writes. So the total "cost" of MyApp.exe running during a period of time (let's say 1 second) is 400k/s, whereas you can only directly observe half of that in Resource Monitor. Is there a smarter disk-I/O-watching piece of software I can use?

    Read the article

  • Recommend a company to take care of webserver and WordPress management?

    - by javipas
    I'm interested in setting up a professional WordPress site, but I'd like to explore the possibility of leaving the management of the webserver, and even of WordPress itself, to a company that guarantees great availability, site performance (load times, security) and even SEO. My site is currently running on another platform, but I plan to migrate within the next 4 weeks. I've usually done this myself, but I'd like to focus on the content, so I don't have to mess with webserver/MySQL/PHP configs in order to get good performance. Is there some (maybe hosting) company that is dedicated to this? Would it be better to hire a sysadmin with experience in those matters?

    Read the article

  • Network Monitoring Tool Recommendation

    - by user42801
    Hello, My company is looking for a monitoring app/tool that would allow us to capture and graph statistics on network performance. As a starting point, we would like to ping remote host(s) and gateway(s) from several of our servers, grab an average of the ping times from each of our servers to the remote host(s), and then graph it (preferably in a central location). Also, we would like to be able to graph the results for time frames as short as a week to as long as 6 months. It is reasonable to expect that we would ask more of the selected monitoring app/tool as we come up with other key network performance indicators in the future. So an app with great flexibility and features would be ideal. Upon first glance, Cacti looks like it might be a fit. Any other recommendations? Thanks in advance for any input.
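
    Roughly the kind of collection script I have in mind on each server, if that helps frame the question (an untested sketch; the host names and log path are placeholders):

      #!/bin/bash
      # log a timestamp, this server, the target host and the average RTT of 5 pings, once per cron run
      hosts="gateway.example.net remote1.example.net"
      for h in $hosts; do
          avg=$(ping -c 5 -q "$h" | awk -F'/' '/^rtt|^round-trip/ {print $5}')
          echo "$(date +%s) $(hostname -s) $h ${avg:-timeout}"
      done >> /var/log/ping-times.log
      # crontab entry:  * * * * * /usr/local/bin/ping-sample.sh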

    Read the article

  • Local user vs. domain user? What is the right way here?

    - by ebeeb
    I'm a software developer in a company with 6 employees. Everyone has a machine to themselves, so none of the machines are shared. I'm currently setting up my machine with Windows 8 and was experimenting a bit with domain and local user accounts. Correct me if I'm wrong, but I think the idea is that domain users generally should not be able to modify the configuration of a machine (like installing software), since they are able to log in on every single machine in the domain, while the local user (usually just one local administrator per machine) is the one who takes care of the machine's configuration. In my case, though, logging in to the domain is just for being able to access directories/servers in the domain (I don't really know the details; all I know is that logging in to the domain user account is necessary). So overall I've got a local admin account and a domain account in use on my machine. While working I'm logged in to my domain user account, but it annoys me that I always need to enter the credentials of my local user account when I'm about to update or install something, which happens quite often as a software developer. I fixed this by adding the domain account to the user accounts in Control Panel and putting it in the Administrators group. The only thing I want to know: is there something REALLY bad about doing this? Or is there maybe a more common way to be able to act like a local admin while logged in as a domain user? PS: I'm sorry about the tags, but I don't know the proper ones. I'd be glad if some of the superuser experts could fix this :-)

    Read the article

  • NFS on top of GFS2 - does it work?

    - by Matthew
    We're currently using a NoSQL derivative called Splunk to receive our data. The software supports something called "search head pooling" in which the job-dispatching engine is housed on several servers which share a common storage point. Originally our intention was to use a clustered filesystem like GFS2 because of low latency, stability, and ease of setup. We set up GFS2, and it's working with no issues. However when trying to run the software, it's trying to create lock files, and a bunch of other things that their support team can't quite explain. Ultimate feedback from them was that they only support NFS. Our network administration team heavily frowns on NFS (lack of stability, file lock issues, etc). So, I was thinking about the possibility of setting up NFS on each server in the cluster to act as a wedge layer between the GFS2 filesystem and the software. Basically configure each server to export the GFS2 filesystem's mountpoint via NFS, and then tell each server to connect to that NFS share. That way we aren't introducing any single-points-of-failure should a dedicated NFS server go down, but the vendor gets their "required" NFS share. I'm just brainstorming ways around, so please tear this apart :)
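
    Concretely, the wedge layer I'm picturing looks something like this on each node (a sketch only; the mountpoint, subnet and target directory are placeholders, not our real paths):

      # export the shared GFS2 mountpoint from every node
      echo '/mnt/gfs2_shared  10.0.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
      exportfs -ra
      # then have each node consume its own export over NFS where the application expects it
      mount -t nfs localhost:/mnt/gfs2_shared /srv/splunk-pool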

    Read the article

  • Files copying between servers by creation time

    - by driftux
    My bash scripting knowledge is very weak, which is why I'm asking for help here. What is the most efficient bash script, performance-wise, to find and copy files from one Linux server to another given the specifications described below? I need a bash script that finds only new files, created on server A within the last 0 to 10 minutes, in directories named "Z", and then transfers them to server B. I think it can be done by building and executing a command like "scp /X/Y.../Z/file root@hostname:/X/Y.../Z/" for each new file found. If the script finds that the corresponding remote path does not exist on server B, it should skip that file and continue with the next file whose directory does exist. Files should be copied with their permissions, group, owner and creation time preserved. X/Y... stands for various directory paths. I want to set up a cron job to execute this script every 10 minutes, so performance is very important in this case. Thank you.
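
    Roughly what I have in mind, to make the question concrete (an untested sketch; the directory root and host come from the example above, and -mmin is a stand-in for "creation time", which most Linux filesystems don't actually record):

      #!/bin/bash
      # find every directory named Z under the root, then copy files modified in the last 10 minutes
      find /X/Y -type d -name 'Z' | while read -r zdir; do
          find "$zdir" -maxdepth 1 -type f -mmin -10 | while read -r f; do
              # -p preserves mode and timestamps; owner/group only survive a root-to-root copy,
              # so rsync -a may be the stronger choice there
              scp -p "$f" "root@hostname:$f" || echo "skipped $f (remote dir missing?)" >&2
          done
      done
      # crontab entry:  */10 * * * * /usr/local/bin/copy-new-z-files.sh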

    Read the article

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or preferably facts, on whether it's OK to install a server's OS on the RAID array or not. My guess is that installing it on separate drives is best, but I'm interested in the performance implications. The server in question will have 8 cores (2.4 GHz each), 24 GB RAM, and ~16 TB of usable space of server-class drives in RAID10. There is also a subsystem of roughly equivalent size for backup. I will be running CPU/memory-intensive applications on this server in addition to it being file storage for my work (a research lab). If I install the OS (I haven't decided which one; probably Ubuntu, Fedora, or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID10? If it is better to have the OS on separate drives, should I go for 150 GB VelociRaptors in RAID1 or smallish SSD drives in RAID1? Money is unfortunately a factor, as I think I'm close to maxing out my budget as it is. Thanks!

    Read the article

  • InstallShield or Windows installer corrupted

    - by Bobby S
    Just recently I've become unable to install any software on my Windows 7 machine. Anything that uses InstallShield or the Windows Installer will just hang or give a weird error. I've noticed there will be many duplicate isbew64.exe processes (like 25) that launch and then just sit there, or else a lot of msiexec.exe *32 processes, depending on what I'm trying to install. One piece of software specifically is the Logitech Harmony software. It gives me an *is_string_not_defined* error, saying c:\program files (x86)\:\ the filename, directory name, or volume label syntax is incorrect. The other thing I was trying to install was Battlefield: Bad Company 2, and that just hangs as well, and then leaves all the Windows Installer processes running in the background after I quit the install process. Very odd. I've checked thoroughly and googled these issues, and it doesn't appear to be any sort of malware issue. I feel like it's related to some kind of corrupted installer application. I've rebooted and deleted the InstallShield folder in Program Files\Common Files as some places online suggested, but to no avail. I have no idea what to do; any ideas?

    Read the article

  • Which SSL certificate to buy [closed]

    - by Sparsh Gupta
    I am reading several notes on SSL certificates and comparisons. What matters to me the most is speed. I can read that the encryption is the same with all the different certificates available, but I was wondering if there is any difference in the performance of the website with different certificates involved. I am of course interested in end-to-end response times, and I wonder if the type of encryption or the number of certificates required as chain certificates makes a difference in speed. I don't really care about cost; I'm looking for a good SSL certificate that ideally gives me absolutely no pain and the best performance. Recommendations?
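
    For what it's worth, this is how I plan to compare candidates once installed (the host is a placeholder): curl's timing variables separate the TLS handshake from the rest of the request, which should expose any cost from longer chains:

      for i in 1 2 3 4 5; do
        curl -so /dev/null https://example.com \
             -w 'tcp_connect=%{time_connect}s  tls_done=%{time_appconnect}s  total=%{time_total}s\n'
      done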

    Read the article

  • Tips for maximizing Nginx requests/sec?

    - by linkedlinked
    I'm building an analytics package, and the project requirements state that I need to support 1 billion hits per day. Yep, "billion". In other words, no less than 12,000 hits per second sustained, and preferably some room to burst. I know I'll need multiple servers for this, but I'm trying to get maximum performance out of each node before "throwing more hardware at it". Right now, I have the hit-tracking portion completed and well optimized. I pretty much just save the requests straight into Redis (for later processing with Hadoop). The application is Python/Django with Gunicorn as the gateway. My 2 GB Ubuntu 10.04 Rackspace server (not a production machine) can serve about 1,200 static files per second (benchmarked using Apache ab against a single static asset). For comparison, if I swap the static-file link for my tracking link, I still get about 600 requests per second -- I think this means my tracker is well optimized, because it's only a factor of 2 slower than serving static assets. However, when I benchmark with millions of hits, I notice a few things:

    - No disk usage -- this is expected, because I've turned off all Nginx logs, and my custom code doesn't do anything but save the request details into Redis.
    - Non-constant memory usage -- presumably due to Redis' memory management, my memory usage will gradually climb up and then drop back down, but it has never once been my bottleneck.
    - System load hovers around 2-4; the system is still responsive during even my heaviest benchmarks, and I can still manually view http://mysite.com/tracking/pixel with little visible delay while my (other) server performs 600 requests per second.
    - If I run a short test, say 50,000 hits (takes about 2m), I get a steady, reliable 600 requests per second. If I run a longer test (tried up to 3.5m so far), my r/s degrades to about 250.

    My questions:

    a. Does it look like I'm maxing out this server yet? Is 1,200/s static-file nginx performance comparable to what others have experienced?
    b. Are there common nginx tunings for such high-volume applications? I have worker threads set to 64, and gunicorn worker threads set to 8, but tweaking these values doesn't seem to help or harm me much.
    c. Are there any Linux-level settings that could be limiting my incoming connections?
    d. What could cause my performance to degrade to 250 r/s on long-running tests? Again, memory is not maxing out during these tests, and HDD use is nil.

    Thanks in advance, all :)
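
    For question (c), these are the limits I plan to inspect next (a checklist sketch; these are places to look, not recommended values):

      ulimit -n                              # per-process file-descriptor cap for the nginx/gunicorn workers
      sysctl net.core.somaxconn              # listen() backlog ceiling
      sysctl net.ipv4.tcp_max_syn_backlog    # half-open connection queue depth
      sysctl net.ipv4.ip_local_port_range    # ephemeral ports available for connections to the upstream
      ss -s                                  # check whether sockets pile up in TIME-WAIT during long runs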

    Read the article

  • Subversion COPY/MOVE - File not found: transaction 'XXX-XX'

    - by theplatz
    I'm attempting to create a branch in one of my Subversion repositories and keep running into an error. No matter what I do, I keep getting the following:

      File not found: transaction '3062-2e6', path '/Software/XXXXXX/branches/testbranch'

    I've noticed that the first part of the '3062-2e6' in the above message is the last successfully committed revision in the repository. My Apache logs don't give much more information:

      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Could not MOVE/COPY /svn/p070361/!svn/bc/3049/Software/SXXXXXX/trunk. [404, #0]
      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] Unable to make a filesystem copy. [404, #160013]
      [Wed Nov 24 14:10:38 2010] [error] [client x.x.x.x] File not found: transaction '3059-2e2', path '/Software/XXXXXX/branches/testbranch' [404, #160013]

    This is all happening on a server with an nginx frontend that proxies to Apache for the Subversion bits. Other repositories are able to branch fine, and I was able to create the branch using file:/// from the command line on the server where this is occurring. The permissions on this repository match every other repository, and disk space is not an issue.
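
    For reference, the comparison that seems to isolate the layer at fault (the hostname and the on-disk repository path below are placeholders; the URL path comes from the log above):

      # through the nginx -> Apache (mod_dav_svn) stack -- this is what fails
      svn copy http://server/svn/p070361/Software/XXXXXX/trunk \
               http://server/svn/p070361/Software/XXXXXX/branches/testbranch \
               -m "create testbranch"

      # directly against the repository on the server itself -- this works
      svn copy file:///var/svn/p070361/Software/XXXXXX/trunk \
               file:///var/svn/p070361/Software/XXXXXX/branches/testbranch \
               -m "create testbranch"

    Since the file:/// copy works, one thing I plan to check is whether the Destination header that COPY/MOVE requests carry survives the nginx-to-Apache hop, but I haven't confirmed anything there yet.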

    Read the article

  • Is there an Installer Analyser tool that can list what Registry Keys will be created?

    - by EvoGamer
    I can think of 3 ways to achieve my goal:

    1. Create a clean VPC, install a given piece of software, and compare the before and after states.
    2. Somehow reverse-engineer the installer.
    3. Somehow redirect the output of the installer in question so that all registry calls and copy/move file commands are recorded, but not executed.

    The first option can be done manually, or potentially automated, but I feel it's rather OTT for my needs. The second could cause all sorts of licensing issues, not to mention it may not always return a correct result; also, without delving into hex editing, I can't think of a way that it would be possible to do manually (some installers -- e.g. anti-virus software -- may react unfavourably to automated attempts to investigate the installer). The third option shows the most promise, although if the first could be stripped down into a lightweight throwaway environment, it would work pretty much the same way. However, I'm not sure how to do it. So my question is: what tools are available (if any), and/or how could I find out this information manually? I'm not looking to reverse-engineer anything (if I can help it); I just want to know exactly what changes are being made to my PC by a given piece of software.

    Read the article

  • Internet compression proxy for low speed broadband?

    - by user23150
    I live in a rural location, using high-latency wireless off a local ISP's tower. My speed tests vary day to day, but I can get around 1 Mb up/down. The problem is, I work with large files, uploading and downloading (HD videos, development software, etc.), and it can be painful to wait sometimes. Plus, I do some side contract game development, and it can be very difficult to playtest with other developers (a 200 ms ping is a good day for me). Now, obviously it's not going to be easy to solve the latency problem without different wireless hardware. But speed-wise, I am wondering if I can use some kind of compression technology on a proxy. For instance, my work computer has full access to a 26 Mb down, 10 Mb up connection that is totally unused at night and on weekends. If I could run some kind of compression technology on our server and use it as a proxy to route to my home computer, I could stand to gain some major speed. I realize that by bogging down a system with compression, I could potentially lose whatever speed gain I had, but the proxy server is a quad-core Xeon and the receiving computer is a pretty decent i7, so that shouldn't be a concern. I found http://toonel.net/ but it seems more geared toward very slow narrowband users, like dial-up. Plus, I would prefer to just be able to point my browser at a proxy server rather than install software on my client machine. EDIT: I thought about my question a little more, and realized I am going to need to install software on my client in order to decompress, and possibly compress (for uploading). That's not a huge deal.
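
    One option I'm considering as a quick prototype, assuming OpenSSH on both ends (the host name is a placeholder): a compressed SOCKS tunnel. Compression mainly helps text/HTML; already-compressed video won't gain much:

      ssh -C -N -D 1080 user@work-server.example.com
      # then point the browser's SOCKS5 proxy at localhost:1080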

    Read the article

  • Dell Inspiron 530 - SSD Worth it?

    - by DrFredEdison
    I'm going to be upgrading my Dell Inspiron 530 (2.0 GHz Intel dual-core CPU, 3 GB RAM) to Windows 7 soon, and rather than back up and reformat my existing drive, I'm planning on getting a second drive to replace my current primary and moving the old one to secondary. This seems like an excellent time to get a solid-state drive, if it's going to be worth it. As far as I can tell, this machine has a SATA-I controller, and I'm unsure if I'll see a noticeable performance increase with an SSD without going to SATA-II. So I have a three-part question, given all that:

    1. Will spending the money on an SSD be worth it if I hook it up to a SATA-I controller?
    2. Is it reasonable to upgrade the controller on this machine to a SATA-II controller?
    3. Given that this PC is kind of old to begin with, am I better off performance-wise just sticking with a faster HDD?

    Read the article
