Search Results

Search found 3015 results on 121 pages for 'tim scott'.


  • Links won't open in Outlook

    - by Scott Mitchell
    About a week ago Chrome (my default browser) stopped working and I ended up reinstalling. After that, however, links in Outlook 2007, when clicked, no longer open. Instead, I get a warning that reads: General failure. The URL was: URL. Application not found. I presume that there is some MIME or file type association that needs to be configured at the OS level, but I've not had any luck so far. My operating system is Windows 7. Any ideas? EDIT #1 Wayne Johnston suggested I set Chrome as my default browser, but when I go to the Set Default Programs screen in Windows 7 (via Control Panel) I do not see Chrome in the list of programs. How do I get it to show up there?
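    A hedged first diagnostic, since handler registration is exactly what that "Application not found" error points at: check what Windows currently has registered for http/https links. The registry paths below are the standard URL-association keys; re-registering a browser (repairing the Chrome install, or temporarily setting another browser as default and then switching back) normally rewrites them.

        rem Inspect which command Windows launches for http/https links
        reg query HKEY_CLASSES_ROOT\http\shell\open\command /ve
        reg query HKEY_CLASSES_ROOT\https\shell\open\command /ve
        rem If these still point at the removed Chrome path, repair/reinstall the
        rem browser (or pick a different default and switch back) to rewrite them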

    Read the article

  • Change Drobo directory permissions

    - by Steven Scott
    I have a Drobo unit connected via FireWire to a Dell notebook running Ubuntu 12.04. I cannot seem to change the permissions to allow all users read/write access to the drives. The unit automatically mounts the volumes under my user account, so other applications cannot access the device. I want to set up a Plex Media Server to stream music, etc., but it will not scan the drives since it cannot access them. How can I change the permissions to allow everyone to read the volumes? If I add them to fstab as NTFS volumes, Ubuntu reports that they are not available during boot, likely because FireWire has not found the drives yet. Any ideas or suggestions would be greatly appreciated.
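    One possible approach, assuming the Drobo volumes are NTFS and show up as /dev/sdb1 (treat the device node and mount point as placeholders): mount them through ntfs-3g with explicit ownership options, and add nofail so boot doesn't stall while the FireWire bus is still enumerating. A sketch, not a tested recipe for this exact unit.

        # /etc/fstab entry - uid/gid set the owner, umask=0002 allows group write,
        # nofail lets boot continue if the FireWire drive hasn't appeared yet
        /dev/sdb1  /media/drobo  ntfs-3g  uid=1000,gid=1000,umask=0002,nofail  0  0

        # once the drive has been detected, mount it manually (or from a login script)
        sudo mount /media/drobo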

    Read the article

  • How do I remove Slony from a restored PostgreSQL database?

    - by Scott Herbert
    I've restored a database that came from a server running Slony. The server on which the database has been restored does not have Slony installed. When the database was restored, a lot of errors were reported: Slony-related objects were not created because the Slony-related logins were missing. I thought this was not a problem, as losing the Slony objects didn't seem to matter and in fact seemed desirable. However, now I have an annoying, if not critical, problem. Whenever you click on a table in the newly restored DB in pgAdmin, a Slony-related error pops up. The first one reads: "An error has occured: ERROR: function _rmscl.getlocalnodeid(unknown) does not exist". I notice that under the Replication node in pgAdmin there is a Slony replication cluster. Trying to drop this cluster results in more missing-object errors. Does anyone have any ideas how we can remove the last vestiges of Slony from this database?
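    If the aim is simply to purge the leftover Slony objects, the usual cleanup is to drop the cluster schema, which Slony names after the cluster with a leading underscore (here apparently _rmscl, judging by the error). A hedged sketch - take a backup first, because CASCADE removes everything inside that schema:

        # back up, then drop the orphaned Slony schema (database name is a placeholder)
        pg_dump -Fc mydb > mydb_before_slony_cleanup.dump
        psql -d mydb -c 'DROP SCHEMA "_rmscl" CASCADE;'

    pgAdmin builds its Replication node from that schema, so the error popups and the phantom cluster should disappear after a refresh.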

    Read the article

  • Why is my filesystem being mounted read-only in Linux?

    - by Tim
    I am trying to set up a small Linux system based on Gentoo on a VirtualBox machine, as a step towards deploying the same system onto a low-spec single-board computer. For some reason, my filesystem is being mounted read-only. In my /etc/fstab, I have:

        /dev/sda1  /         ext3   defaults  0 0
        none       /proc     proc   defaults  0 0
        none       /sys      sysfs  defaults  0 0
        none       /dev/shm  tmpfs  defaults  0 0

    However, once booted, /proc/mounts shows:

        rootfs / rootfs rw 0 0
        /dev/root / ext3 ro,relatime,errors=continue,barrier=0,data=writeback 0 0
        proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
        sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
        udev /dev tmpfs rw,nosuid,relatime,size=10240k,mode=755 0 0
        devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
        none /dev/shm tmpfs rw,relatime 0 0
        usbfs /proc/bus/usb usbfs rw,nosuid,noexec,relatime,devgid=85,devmode=664 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0

    (the above may contain errors: there's no practical way to copy and paste) The partition at /dev/sda1 is clearly being mounted OK, since I can read all the data, but it's not being mounted as described in fstab. How might I go about diagnosing/resolving this? Edit: I can remount with mount -o remount,rw / and it works as expected, except that /proc/mounts reports /dev/root mounted at / rather than /dev/sda1 as I'd expect. If I try to remount with mount -a I get:

        mount: none already mounted or /sys busy
        mount: according to mtab, sysfs is already mounted on /sys

    Edit 2: I resolved the problem with mount -a (the same error was occurring during startup, it turned out) by changing the sysfs and proc lines to:

        proc  /proc  proc   [...]
        sysfs /sys   sysfs  [...]

    Now mount -a doesn't complain, but it doesn't result in a read-write root partition. mount -o remount / does cause the root partition to be remounted, however.
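    A few diagnostics that might narrow this down, assuming a stock Gentoo/OpenRC boot: the kernel normally mounts root read-only (the 'ro' parameter) and an early init script remounts it read-write using the fstab entry, so the question is whether that remount ran and which device it matched.

        # what the kernel was told about the root device and ro/rw
        cat /proc/cmdline

        # any complaints from ext3 or the remount during boot
        dmesg | grep -i -e ext3 -e remount

        # already known to work, which suggests the filesystem itself is fine
        mount -o remount,rw /

    If the boot log shows the remount being skipped, compare the device field of the root line in /etc/fstab against what the kernel reports; a /dev/root vs /dev/sda1 mismatch in the init scripts can produce exactly this symptom.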

    Read the article

  • Epson Artisan 800 on Ubuntu/Linux

    - by Tim Lytle
    Update for Ubuntu 10.04: printing should work out of the box; scanning still needs the newer SANE backend. I'm looking for a known-good way to set up an Epson Artisan 800 on Ubuntu specifically, or any Linux box in general. It is a printer/scanner with Ethernet/Wi-Fi/USB. I'd like to use it as a network printer/scanner, able to do both from my Windows and Ubuntu machines; however, if it needs to be physically connected to a computer (preferably the Ubuntu machine), that is doable (again, then sharing print/scan functions to the network). Basically, I'm looking for someone who has used this printer/scanner (or similar) in a multi-platform environment to share how they set it up and how well it worked. Updated: a little more information. Like most printers (I expect), the documentation basically says "don't use plug-n-play, run our setup CD from your Windows/Mac system" to do anything (even set it up for network use). I guess that's to make it easy for everyone else to set up, but when you're looking to use it with an OS unsupported by Epson's documentation, you're stuck on your own. What I was hoping for was someone who could say, "Forget the bundled software, do [this] to set it up on Wi-Fi manually, install [this] to connect to the scanner from [os], printing works with [this] driver - at least that's how I set it up." I will (and have so far) use the information here, and post my own setup when I'm done, if there's no one else out there with that experience.
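    For the scanner half over the network, one approach that has worked with other networked Epson models is pointing SANE's epson2 backend at the printer's IP address; treat the backend name, config path and address as assumptions to verify against your sane-backends version.

        # tell the epson2 SANE backend where the device lives (IP is a placeholder)
        echo "net 192.168.1.50" | sudo tee -a /etc/sane.d/epson2.conf

        # confirm SANE can now see the scanner
        scanimage -L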

    Read the article

  • What to filter when providing very limited open WiFi to a small conference or meeting?

    - by Tim Farley
    Executive Summary

    The basic question is: if you have very limited-bandwidth WiFi to provide Internet for a small meeting of only a day or two, how do you set the filters on the router to avoid one or two users monopolizing all the available bandwidth? For folks who don't have the time to read the details below, I am NOT looking for any of these answers:

    - Secure the router and only let a few trusted people use it
    - Tell everyone to turn off unused services and generally police themselves
    - Monitor the traffic with a sniffer and add filters as needed

    I am aware of all of that. None are appropriate for reasons that will become clear. ALSO NOTE: there is already a question concerning providing adequate WiFi at large (500-attendee) conferences here. This question concerns SMALL meetings of fewer than 200 people, typically with less than half that using the WiFi - something that can be handled with a single home or small-office router.

    Background

    I've used a 3G/4G router device to provide WiFi to small meetings in the past with some success. By small I mean single-room conferences or meetings on the order of a barcamp or Skepticamp or user group meeting. These meetings sometimes have technical attendees, but not exclusively. Usually less than a third to a half of the attendees will actually use the WiFi. The maximum meeting size I'm talking about is 100 to 200 people. I typically use a Cradlepoint MBR-1000, but many other devices exist, especially all-in-one units supplied by 3G and/or 4G vendors like Verizon, Sprint and Clear. These devices take a 3G or 4G Internet connection and fan it out to multiple users over WiFi. One key aspect of providing net access this way is the limited bandwidth available over 3G/4G. Even with something like the Cradlepoint, which can load-balance multiple radios, you are only going to achieve a few megabits of download speed and maybe a megabit or so of upload speed. That's a best-case scenario; often it is considerably slower. The goal in most of these meeting situations is to allow folks access to services like email, web, social media and chat, so they can live-blog or live-tweet the proceedings, or simply chat online or otherwise stay in touch (with both attendees and non-attendees) while the meeting proceeds. I would like to limit the services provided by the router to just those that meet those needs.

    Problems

    In particular I have noticed a couple of scenarios where particular users end up monopolizing most of the bandwidth on the router, to the detriment of everyone. These boil down to two areas:

    - Intentional use. Folks watching YouTube videos, downloading podcasts to their iPod, and otherwise using the bandwidth for things that really aren't appropriate in a meeting room where you should be paying attention to the speaker and/or interacting. At one meeting that we were live-streaming (over a separate, dedicated connection) via UStream, I noticed several folks in the room who had the UStream page up so they could interact with the meeting chat - apparently oblivious that they were wasting bandwidth streaming back video of something taking place right in front of them.
    - Unintentional use. There are a variety of software utilities that will make extensive use of bandwidth in the background, which folks often have installed on their laptops and smartphones, perhaps without realizing. Examples: peer-to-peer downloading programs such as BitTorrent that run in the background; automatic software update services (these are legion, as every major software vendor has their own, so one can easily have Microsoft, Apple, Mozilla, Adobe, Google and others all trying to download updates in the background); security software that downloads new signatures, such as anti-virus and anti-malware; backup software and other software that "syncs" in the background to cloud services.

    For some numbers on how much network bandwidth gets sucked up by these non-web, non-email services, check out this recent Wired article. Apparently web, email and chat all together are now less than one quarter of Internet traffic. If the numbers in that article are correct, by filtering out all the other stuff I should be able to increase the usefulness of the WiFi four-fold. Now, in some situations I've been able to control access using security on the router to limit it to a very small group of people (typically the organizers of the meeting), but that's not always appropriate. At an upcoming meeting I would like to run the WiFi without security and let anyone use it, because the 4G coverage at the meeting location in my town is particularly excellent; in a recent test I got 10 megabits down at the meeting site. The "tell people to police themselves" solution mentioned at the top is not appropriate because of (a) a largely non-technical audience and (b) the unintentional nature of much of the usage as described above. The "run a sniffer and filter as needed" solution is not useful because these meetings typically last only a couple of days, often only one day, and have a very small volunteer staff. I don't have a person to dedicate to network monitoring, and by the time we got the rules tweaked completely the meeting would be over.

    What I've Got

    First, I figured I would use OpenDNS's domain filtering rules to filter out whole classes of sites. A number of video and peer-to-peer sites can be wiped out using this. (Yes, I am aware that filtering via DNS technically leaves the services accessible - remember, these are largely non-technical users attending a two-day meeting. It's enough.) I figured I would start with these selections in OpenDNS's UI. I will probably also block DNS (port 53) to anything other than the router itself, so that folks can't bypass my DNS configuration. A savvy user could get around this, because I'm not going to put a lot of elaborate filters on the firewall, but I don't care too much; since these meetings don't last very long, it's probably not worth the trouble. This should cover the bulk of the non-web traffic, i.e. peer-to-peer and video, if that Wired article is correct. Please advise if you think there are severe limitations to the OpenDNS approach.

    What I Need

    Note that OpenDNS focuses on things that are "objectionable" in some context or another. Video, music, radio and peer-to-peer all get covered. I still need to cover a number of perfectly reasonable things that we just want to block because they aren't needed in a meeting. Most of these are utilities that upload or download legitimate things in the background. Specifically, I'd like to know port numbers or DNS names to filter in order to effectively disable the following services:

    - Microsoft automatic updates
    - Apple automatic updates
    - Adobe automatic updates
    - Google automatic updates
    - Other major software update services
    - Major virus/malware/security signature updates
    - Major background backup services
    - Other services that run in the background and can eat lots of bandwidth

    I would also welcome any other applicable suggestions. Sorry to be so verbose, but I find it helps to be very, very clear on questions of this nature, and I already have half a solution with the OpenDNS piece.
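    To back up the OpenDNS piece, here is a hedged sketch of the "force all DNS through the router" idea for an iptables-capable gateway (the Cradlepoint's stock firmware may not expose this); the LAN interface name and router address are assumptions, and this only pins down DNS, not the update traffic itself.

        # LAN clients may only talk DNS to the router itself (192.168.0.1 assumed)
        iptables -A FORWARD -i br0 -p udp --dport 53 ! -d 192.168.0.1 -j DROP
        iptables -A FORWARD -i br0 -p tcp --dport 53 ! -d 192.168.0.1 -j DROP

        # or, friendlier: silently redirect stray DNS queries to the router
        iptables -t nat -A PREROUTING -i br0 -p udp --dport 53 -j DNAT --to-destination 192.168.0.1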

    Read the article

  • Windows 8 with LiveID login authenticates as Guest to remote SQL Server

    - by Tim Long
    I have a network where several users are using Office Accounting 2009 in multi-user client/server mode. OA is built on SQL Server. One PC acts as the 'server' and has the SQL Server instance; the others have only the application installed and no SQL instance, and all of the apps connect remotely to the SQL instance on the 'server'. I'm using the term 'server' loosely here; it is just a normal workstation that happens to be designated as the server and runs the SQL instance. There is no NT domain; all user accounts are local accounts. The way that OA works in multi-user mode is that each user is required to have a local account with the same username and password on both the client and 'server' PCs. This had been working well; now along comes Windows 8. I use my 'Microsoft Account' aka LiveID to log into Windows 8. Office Accounting runs fine and attempts to connect to the database, but fails with 'you do not have permission to perform this operation'. In the SQL logs, I get this error:

        2012-10-28 17:54:01.32 Logon Error: 18456, Severity: 14, State: 11.
        2012-10-28 17:54:01.32 Logon Login failed for user 'SERVER\Guest'. Reason: Token-based server access validation failed with an infrastructure

    SERVER is the hostname of the server. So it seems to be authenticating as 'Guest'?? To verify this, I enabled the Guest account on the 'server' PC and then added Guest as an allowed user within Office Accounting (this simply creates the user in SQL and gives it an appropriate database role). Sure enough, my Windows 8 PC was then able to connect to the database when using Office Accounting. Clearly, having users authenticate as 'Guest' stinks from a security and auditing standpoint, so what I need are some ideas for how to work around this. I've tried switching the Windows 8 PC to a 'local account' and that works too, but it requires giving up significant functionality on the Windows 8 PC. What I really need is a way to force the Windows 8 PC to use a specific set of credentials when connecting to the remote SQL instance. Office Accounting takes the logged-in username, which is my LiveID and doesn't correspond to any Windows user name. Has anyone solved this issue?
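    One workaround that may be worth trying before abandoning the Microsoft Account: store explicit credentials for the 'server' machine in Windows Credential Manager, so NTLM connections to that host present the matching local account instead of falling back to Guest. SERVER, the username and the password are placeholders, and whether the SQL client actually picks these up is something to verify.

        rem store credentials for the 'server' PC under the LiveID session
        cmdkey /add:SERVER /user:SERVER\officeuser /pass:TheSharedPassword

        rem confirm what is stored
        cmdkey /list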

    Read the article

  • What is the quickest reliable way to back up a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (network-attached storage) drive to an external USB drive? The NAS drive does not contain mission-critical data; nonetheless, I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I'm hoping something else is faster. The NAS drive is a D-Link DNS-313 with a 1 TB drive installed, connected to the router via Ethernet cable. The USB drive is a Seagate 1 TB and can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during backup to speed up the process.
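    robocopy, which is built into Vista and Windows 7, copes with long paths and interruptions far better than xcopy and can mirror the NAS share to the USB drive; a sketch with the share name, drive letter and log path as placeholders, and /FFT to tolerate the NAS's coarser timestamps.

        rem mirror the NAS share to the USB drive
        robocopy \\DNS-313\Volume_1 E:\NAS-Backup /MIR /FFT /R:2 /W:5 /NP /LOG:C:\nas-backup.log

        rem /MIR  mirror the tree (also deletes files removed from the source)
        rem /FFT  assume 2-second FAT timestamp granularity, avoids needless recopies
        rem /R /W retry twice with a 5-second wait instead of the huge defaults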

    Read the article

  • SLI Video Cards

    - by Scott
    I'm configuring a new desktop. I'm looking through all the PCI Express 2.0 x16 video cards on NewEgg at the moment. All the cards seem to state that they have SLI or CrossFireX support. I know that these two technologies let you connect more than one video card together for a more powerful video experience. Cards with this support seem to be more expensive than other cards, though. When I searched NewEgg for cards that did NOT have either technology, no matches came up for PCI Express 2.0 x16 (I haven't checked other interfaces yet, though). Can you run an SLI/CrossFireX card as a single card (and if you can, is it efficient or even good?), or do you need more than one card to run them? Also, do they make cards without these two technologies anymore, and are they even cheaper at this point? I really have no need for SLI/CrossFireX at all, so I was wondering what I should look into. Do cards without this technology not run on the PCI Express 2.0 x16 interface? Thanks ahead of time for all your help.

    Read the article

  • How to move my data from my old MacBook Pro to my new one?

    - by Tim Büthe
    I just purchased a new MacBook Pro and already have a 2008 model. I wonder how I should move all my data over to the new one. My first idea was to use my Time Machine backup and restore from it, which seems to be a good idea and should work just fine according to this link: http://blog.duncandavidson.com/2008/01/restoring-from-time-machine.html. But since my current MacBook has older software on it, like iLife '08 instead of iLife '09, I would have to upgrade afterwards. Is this correct, or does Time Machine do some magic to exclude well-known software? And is it possible to reinstall or upgrade iLife with the included installation DVDs? My second idea is to just swap the hard drives instead of using the Time Machine backup. If it is not too complicated to remove the HDD, this should be the fastest way. This also has the benefit that the 2008 MacBook then contains a brand-new installation and I don't have to remove all my stuff or reinstall Mac OS before I give it away. My question on that second idea would be: does Snow Leopard handle this correctly? I reboot with the new hardware and everything just works fine? So in a nutshell: what would you do, restore from backup or swap drives? And what about the new software?

    Read the article

  • Using iptables to change a destination port but keep the IP the same

    - by Scott Chamberlain
    I am playing around with transparent proxies. The current way I am doing things is that a program makes a request to a computer on port 80, and I use

        iptables -t nat -A OUTPUT -p tcp --destination-port 80 -j REDIRECT --to-port 1234

    to redirect to the proxy that I am playing with. The proxy will send out its request on port 81 (as all outbound port 80 traffic is being fed back into the proxy), so I want to do something like

        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination xxxx:80

    The problem lies with the xxxx part. How do I change the destination port without changing the destination IP? Or am I doing this setup completely wrong? I am learning, after all, and constructive criticism is definitely appreciated.
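    If memory serves, DNAT accepts a port-only --to-destination: leave the address off and only the port is rewritten, so xxxx never needs to be known. Worth checking against the iptables man page on your kernel before relying on it.

        # rewrite only the destination port; the original destination IP is kept
        iptables -t nat -A OUTPUT -p tcp --destination-port 81 -j DNAT --to-destination :80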

    Read the article

  • Split MPEG video from command line?

    - by Tim
    I have a homemade DVD that I'm effectively trying to insert chapters into and rearrange - the original author burned it as one long chapter, and I'd like to rip it into smaller pieces and re-encode it into a new DVD. I ripped the DVD with the following command:

        mplayer dvd:// -dvd-device /dev/sr2 -dumpstream -dumpfile raw.vob

    I'm running Gentoo Linux with mplayer version 1.0-rc2_p20090731 (the latest available in Portage). I have a list of times that the chapters are supposed to span (for example 30:11-33:25), so my first thought was to rip the entire DVD and use mpgtx to cut out certain pieces of the file. My issue is that running mpgtx -i on the file reports quite a few timestamp jumps:

        Time stamps jumped from 59.753789 to 0.001622 at position 1d29800
        Time stamps jumped from 204963823030450.343750 to 31.165900 at position 2d4f800
        Time stamps jumped from 60.077878 to 0.001622 at position 43cc000
        Time stamps jumped from 60.024233 to 0.001622 at position 65c5000
        Time stamps jumped from 204963823068631.718750 to 52.549244 at position 7fd1000

    I've tried to fix the indexes using:

        mencoder raw.vob -oac copy -ovc copy -forceidx -o fixed.vob -of mpeg

    But mpgtx will still report timestamp issues. My immediate question: is there a way to take the ripped movie I have and correct its timestamps so I can cut it with mpgtx? If I can get that one issue out of the way, building the rest of the DVD will be smooth sailing. If it's not possible to fix the timestamps on this file: is there a better way to rip small chunks of the DVD into separate files for recompilation later? I'd very much like this to be done on Linux, and it'd be even better if I could script it somehow (feed in a list of start and end positions, or start times and durations, and get out a series of ripped files). If need be, I also have a Mac OS X machine available, but no Windows. Edit: I wound up finding another solution involving HandBrake and ffmpeg (with help from this question), but the question stands. Edit again: Turns out my other solution didn't quite work - the audio desynchronized by about five seconds, in about half of my cut mpgs - so I'm back to square one. Anyone?
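    For the scripting part, one hedged option is ffmpeg in stream-copy mode driven by a list of start times and durations; PTS discontinuities in a dumped VOB can still trip it up, so the mencoder remux above may still be needed first. File and chapter names are placeholders.

        # chapters.txt holds one "start duration output" triple per line, e.g.
        #   00:30:11 00:03:14 chapter05.mpg
        while read start length out; do
            # -ss before -i seeks the input; stream copy avoids re-encoding;
            # </dev/null stops ffmpeg from eating the rest of chapters.txt
            ffmpeg -ss "$start" -i fixed.vob -t "$length" \
                -acodec copy -vcodec copy "$out" < /dev/null
        done < chapters.txt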

    Read the article

  • Applying the Windows Experience Index to Servers

    - by Scott
    I finally convinced upper management that we need a computer replacement plan, and I've been tasked with making an inventory of what we have and determining what needs to be replaced this year, next year, the year after, etc. I had to use some sort of criteria to back up my recommendations, so I decided to try using the Windows Experience Index. I've determined the CPU and Memory scores for all of our desktops and servers using community data. I also feel fairly successful in assigning a WEI score to each user based on their computing needs. I'm struggling with assigning a WEI score to the various servers that we have: file server, database server, Exchange server, backup server (for doing backups), web server. Suggestions would be appreciated.
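    For the desktop side of the inventory, the stored WEI subscores can be pulled over WMI instead of by hand; note that Win32_WinSAT is only populated where the assessment has actually run, and the Server SKUs don't ship it, which is the root of the problem with scoring the servers. The hostname is a placeholder.

        rem read the stored Windows Experience Index subscores from a remote desktop
        wmic /node:PC-042 path Win32_WinSAT get CPUScore,MemoryScore,DiskScore,GraphicsScore,WinSPRLevel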

    Read the article

  • How would I set up iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which a particular user has several aliases that all point to a single account (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?), and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm whether this is accurate?). An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct workaround to copy a user's incoming mail to an outside server and then delete the local copy w/o removing their account entirely? Or is this just wishful thinking?

    Read the article

  • Export SSL Cert from IIS and import into GlassFish keystore

    - by Tim H
    What I need: I have an existing SSL certificate installed on IIS 6. On the same machine, I have GlassFish installed and would like to share the same certificate, since they both share the same hostname but use different ports: IIS uses 443 and GlassFish uses 8181.

    Why I need it: to reuse existing SSL certs from IIS on GlassFish. I imagine that this is possible - I am able to install an SSL cert into GlassFish's keystore and then import the same exact cert into IIS. I just want to go the other way: imagine having an SSL cert on IIS in use for months, and now I want to enable SSL on GlassFish.

    What I have done:

    - Created a keystore with an alias: server.hostname.com
    - Imported the intermediate CA certs associated with the existing SSL cert
    - Imported the existing SSL cert with the same alias, server.hostname.com, but keytool won't allow this, as it is not associated:

        keytool error: java.lang.Exception: Public keys in reply and keystore don't match

    Why? Using a different alias causes the cert not to be trusted in the CA chain.
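    The usual route, sketched with assumed file names and passwords: export the certificate together with its private key from IIS as a .pfx (the missing private key is why importing the bare certificate reply fails with "public keys don't match"), then convert the whole PKCS#12 file into the GlassFish keystore with keytool -importkeystore (Java 6 and later).

        # pull the IIS-exported PFX (cert + private key + chain) into GlassFish's keystore
        keytool -importkeystore \
            -srckeystore site.pfx -srcstoretype PKCS12 -srcstorepass pfxpassword \
            -destkeystore /path/to/glassfish/domains/domain1/config/keystore.jks \
            -deststorepass changeit

        # note the alias it landed under; point the GlassFish HTTPS listener's
        # certificate nickname at that alias
        keytool -list -keystore /path/to/glassfish/domains/domain1/config/keystore.jks -storepass changeit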

    Read the article

  • How can I disable Control-W in Windows XP?

    - by Tim Visher
    I'm an Emacs/Mac user trapped on Windows at work, and I often type Control-W from muscle memory to delete a word, and thus by mistake kill the entire window, including everything I was working on. This is a particularly egregious problem in Console2, as I run GNU Screen and am often doing many things at once. Is there any way to completely disable Control-W or remap it to something that is far harder to type? Thanks!

    Read the article

  • Diagnosing Microsoft SQL Server error 9001: The log for the database is not available.

    - by Scott Mitchell
    Over the weekend a website I run stopped functioning, recording the following error in the Event Viewer each time a request is made to the website: Event ID: 9001 The log for database 'database name' is not available. Check the event log for related error messages. Resolve any errors and restart the database. The website is hosted on a dedicated server, so I am able to RDP into the server and poke around. The LDF file for the database exists in the C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA folder, but attempting to do any work with the database from Management Studio results in a dialog box reporting the same error - 9001: The log for database is not available... This is the first time I've received this error, and I've been hosting this site (and others) on this dedicated web server for over two years now. It is my understanding that this error indicates a corrupt log file. I was able to get the website back online by Detaching the database and then restoring a backup from a couple days ago, but my concern is that this error is indicative of a more sinister problem, namely a hard drive failure. I emailed support at the web hosting company and this was their reply: There doesn't appear to be any other indications of the cause in the Event Log, so it's possible that the log was corrupted. Currently the memory's resources is at 87%, which also may have an impact but is unlikely. Can the log just "become corrupted?" My question: What are the next steps I should take to diagnose this problem? How can I determine if this is, indeed, a hardware problem? And if it is, are there any options beyond replacing the disk? Thanks
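    Two relatively low-risk checks that might help separate a one-off log corruption from failing hardware, assuming the restored copy can be taken briefly out of service; the instance, database name and drive letter are placeholders.

        rem full consistency check on the restored database
        sqlcmd -S . -E -Q "DBCC CHECKDB('MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS"

        rem read-only filesystem check and recent disk errors from the System log
        chkdsk D:
        wevtutil qe System /q:"*[System[Provider[@Name='disk']]]" /c:20 /f:text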

    Read the article

  • SSD causing 100% CPU usage in Apache/PHP

    - by Tim Reynolds
    I wanted to increase the performance of my development laptop, so I added an Intel 320 Series SSD as my primary drive. Everything is amazingly fast, as expected, except Apache/PHP. I develop Magento using an Ubuntu 10.10 virtual machine. Information:

        Host OS: Win 7 Professional 64-bit
        Guest OS: Ubuntu 10.10 32-bit
        Processor: i7, chipset QM55
        SSD: Intel 320 Series 160 GB, 30% full
        HDD: Hitachi 320 GB, 50% full (in side bay using an adapter)
        Laptop: Lenovo T510
        Using: shared folders
        Apache version: 2.2.16
        PHP version: 5.3.3-1
        APC version: 3.1.3p1
        APC memory: 128M
        Using tmpfs for cache, log, session directories in Magento

    In the VM running on the SSD (the VM files and source files are on the same drive), loading a product page in the Admin takes 26.2 seconds on average and uses 100% CPU for nearly the entire time. In the VM running on the old HDD, loading the same page takes 4.4 seconds on average and mostly uses around 40-50% of the CPU while rendering the page. I have read this post: Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)? It says to change some settings in the BIOS. I have turned any and all power management features off in the BIOS. I can't for the life of me understand why this would be happening.
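    Before blaming the SSD itself, it may be worth measuring inside the guest whether the VirtualBox shared folder (vboxsf), rather than the disk, is the bottleneck - Magento's thousands of small-file reads over a shared folder are a common cause of exactly this kind of CPU spike. A rough diagnostic sketch; the shared-folder mount point is an assumption.

        # raw write throughput to the guest's own virtual disk vs. the vboxsf share
        dd if=/dev/zero of=$HOME/ddtest bs=1M count=256 conv=fdatasync && rm $HOME/ddtest
        dd if=/dev/zero of=/mnt/shared/ddtest bs=1M count=256 conv=fdatasync && rm /mnt/shared/ddtest

        # see where Apache spends its time during one slow page load
        strace -c -f -p $(pgrep apache2 | head -n1)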

    Read the article

  • Is it possible to integrate Computer Associates SiteMinder with SQL Server?

    - by Scott Weinstein
    We have an MS SQL BI stack which the standards group wants us to move to "WebSSO", which is based on Computer Associates' SiteMinder/Netegrity product. I figure integrating the web component won't be too hard, but we have users who connect to the database directly, currently using Windows Authentication. Is it possible to integrate Computer Associates SiteMinder with SQL Server? With SSAS? If so, how much effort is involved?

    Read the article

  • ImageMagick convert

    - by Tim
    convert is not working on the Linux server I am using:

        $ convert exploss_stumps.jpg exploss_stumps.eps
        convert: missing an image filename `exploss_stumps.eps' @ convert.c/ConvertImageCommand/2838

    Any idea why?
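    That error usually means convert never accepted the input, so a couple of diagnostics may help: confirm the binary really is ImageMagick (some systems ship an unrelated convert earlier in the PATH) and that its JPEG coder can read the file. A hedged sketch:

        # is this actually ImageMagick's convert, and does it know about JPEG?
        which convert
        convert -version
        convert -list format | grep -i jpeg

        # can the input file be read at all?
        identify exploss_stumps.jpg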

    Read the article
