Search Results

Search found 4001 results on 161 pages for 'operating'.

Page 130/161 | < Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >

  • INACCESSIBLE_BOOT_DEVICE after installing Linux on same drive

    - by kdgregory
    History: My PC was configured with two drives: an 80 GB on IDE 0 primary running Win2K, and a 320 GB on IDE 0 secondary running Linux (Ubuntu). I decided to pull the 80 GB drive out of the system, so I dd'd the entire 80 GB drive (/dev/sda) onto the 320 GB (/dev/sdb) -- this included the MBR and partition table. Then I pulled the drive, plugged the 320 GB into IDE 0 primary, and rebooted. The Windows partition worked at this point. Then I installed Ubuntu into the remaining space on the 320 GB. It works. However, when I try to boot into Windows, I get a BSOD with the following message:

    *** STOP: 0x0000007B (0x89055030,0xC000014F,0x00000000,0x00000000) INACCESSIBLE_BOOT_DEVICE

    Before the BSOD I see the Win2K splash screen, and it claims to be "Starting Windows" for a couple of seconds -- so it appears that the first-stage boot loader is working as expected. Ditto when I try booting in Safe Mode. After reading the Microsoft KB article, I booted into the recovery console and tried running chkdsk /r. It refused to run, claiming that the drive was corrupted (sorry, didn't write down the exact error message). However, I can mount the drive from Linux and access all files. And for what it's worth, I can scan the drive using the Linux "Disk Utility" (this is Ubuntu; the menus don't show real program names), and it reports the drive as clean. The KB article mentioned that boot.ini could be the problem, so here it is:

    [boot loader]
    timeout=10
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Professional" /fastdetect

    Any pointers on what to do next?
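
    One thing worth ruling out from the Linux side before anything drastic: a 0x7B stop after a dd copy plus repartitioning is sometimes caused by the NTFS boot sector's "hidden sectors" field no longer matching the partition's actual start sector. A minimal sketch for comparing the two (device names assume the 320 GB is now /dev/sda and Windows is partition 1):

        # partition start sectors according to the MBR partition table
        sudo fdisk -lu /dev/sda
        # the NTFS "hidden sectors" field: a 4-byte little-endian value at offset 0x1C
        # of the partition boot sector; it should equal the start sector printed above
        sudo dd if=/dev/sda1 bs=1 skip=28 count=4 2>/dev/null | od -An -tu4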

    Read the article

  • DFS Root namespace is RDWR for all users

    - by Patrick
    We have an existing DFS Replication and Namespace group that we use to serve the company's files. This has been operating fine for us for some time now, and continues to do so. However, a situation arose yesterday afternoon that has left us stumped. We have our namespace presented as: \\domain.co.uk\public\[8 or 9 folders that are mapped to the users in the business]. We had a problem this morning where a number of users started mapping their AD home drive directly to the \\domain.co.uk\public directory, and we found that they had read/write access. This rapidly became a problem, as at least one director saved some moderately sensitive documents in there and basically anyone could read them. I've tidied up that specific problem with some deft scripting and a slight modification of group policy. However, I would like to make \public read-only; the trouble is I can't work out where the ACLs for that folder are held. All the folders presented as \\domain.co.uk\public\[folder] are 'real' folders on logical volumes on our DFS servers, so they are secured with groups applied via the 'security' tab. I'd like to do the same on \public, but I can't find it. I have looked through, amongst other things, \Sysvol\domain.co.uk, but can't find it, and after a lot of clicking and a bit of reading I can't see how to lock it down. Any thoughts?
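
    For what it's worth, the namespace root is backed by an ordinary local folder on each namespace server (by default C:\DFSRoots\<rootname>), and the NTFS ACL on that folder is what governs access to \public itself. A hedged sketch -- the path and the group name are assumptions for illustration, and it would need repeating on every server hosting the root:

        rem run on each namespace server hosting \\domain.co.uk\public
        rem detach inherited permissions, then replace the users' grant with read-only
        icacls C:\DFSRoots\public /inheritance:d
        icacls C:\DFSRoots\public /grant:r "DOMAIN\Domain Users":(OI)(CI)RX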

    Read the article

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again:

    [ 59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
    [ 59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
    [ 59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
    [ 60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I have found to shut it up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

    # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

    $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
    pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
    $ cat /etc/modprobe.d/blacklist
    blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether?

    Edit: The module was renamed recently; now the following works from userland:

    echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
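
    One way to make the unbind persist, sketched on the assumption that udev on this kernel exposes the controller under its PCI address (the rule filename is an invented example):

        # /etc/udev/rules.d/99-unbind-ehci1.rules
        # unbind the troublesome controller as soon as udev sees it appear
        ACTION=="add", SUBSYSTEM=="pci", KERNEL=="0000:00:1a.0", RUN+="/bin/sh -c 'echo 0000:00:1a.0 > /sys/bus/pci/drivers/ehci-pci/unbind'"

    A cruder but equally persistent fallback is to put the same echo line in /etc/rc.local.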

    Read the article

  • What does the 'Burst Rate' stat mean in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which raises some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks

    Read the article

  • Diagnosing RAM issues

    - by TaylorND
    I have an old Acer Aspire T180 desktop. The specs are as follows:

    AMD Athlon 64 3800+ 2.4GHz
    1GB DDR2 SDRAM
    160GB hard drive
    DVD-Writer (DVD±R/±RW)
    Gigabit Ethernet
    17" Active Matrix TFT Color LCD
    Windows Vista Home Basic
    Mini-tower AST180-UA381B

    According to the computer's documentation, it comes with 1 GB of RAM, as two DDR2 SDRAM sticks. I used to have Windows Vista installed; then I removed it and installed Windows 7, and I have since removed Windows 7 and installed Windows XP. According to Windows XP, with both RAM sticks in, the computer has 768 MB. Isn't this supposed to be 1 GB of RAM, i.e. 1024 MB? Is the amount of RAM installed only partly usable by the operating system? Is there something I'm missing? If I remove either one of the RAM sticks I'm left with 448 MB of RAM. These numbers don't seem to add up: if each of the RAM sticks contains at least 448 MB of RAM, shouldn't both together provide 896 MB? Even then, isn't that less than a GB of RAM? I'm not too experienced with hardware, so I thought this would be the best place to ask. As a follow-up question, is the RAM I have enough to run/multitask with Windows XP efficiently? I plan to do a lot of computing with the system (although not gaming); should I invest in more RAM?
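
    A plausible explanation -- an assumption, since it depends on this board using onboard video with shared memory, and on the two sticks being 512 MB each: the integrated graphics reserves part of system RAM before Windows ever sees it, and the reservation shrinks when less RAM is installed. The arithmetic would then be:

        1024 MB installed - 256 MB reserved for shared video = 768 MB visible to Windows
         512 MB installed -  64 MB reserved for shared video = 448 MB visible to Windows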

    Read the article

  • How to modify a message so the Exchange junk e-mail filter will recognize it as spam with 100% certainty

    - by user71061
    Hi! I have a sendmail server sitting in front of my Exchange server. This server filters spam with SpamAssassin (and does it incredibly well!), but it merely tags spam messages with appropriate header flags and by modifying the message subject. When such a message arrives in a user's mailbox on the Exchange server, it is examined by the Exchange/Outlook junk e-mail filter, which puts most of the spam in the junk message folder. And that is my problem: most, but not all! To put all spam in the junk e-mail folder, each user has to define a rule saying, e.g.: "If the header contains the text 'X-Spam-Flag: YES', then move the message to the 'Junk E-mail' folder". Fine, but it has to be done for every user (and for some users, this task is too "complicated" to do themselves :-). So I want to know how I could modify the message header in such a way that the Exchange junk e-mail filter will recognize the message as spam with 100% certainty, freeing users from defining their own rules. One solution could be defining such a rule via AD and group policy, but I want to avoid this due to many possible caveats: there are so many combinations of different operating systems and different Outlook versions that, to be honest, I doubt it is even possible.
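
    If the Exchange version in play supports transport rules with header predicates (Exchange 2010 or later for this parameter style), one hedged approach -- a sketch, not a tested recipe -- is to stamp a high SCL on anything SpamAssassin flagged, since the junk e-mail move is driven by the SCL rather than by arbitrary headers:

        New-TransportRule -Name "Trust SpamAssassin verdict" `
            -HeaderContainsMessageHeader "X-Spam-Flag" `
            -HeaderContainsWords "YES" `
            -SetSCL 9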

    Read the article

  • MySQLTuner and query_cache_size dilemma

    - by wbad
    On a busy MySQL server, MySQLTuner 1.2.0 always recommends increasing query_cache_size, no matter how much I increase the value (I tried up to 512MB). On the other hand it warns that: "Increasing the query_cache size over 128M may reduce performance". Here are the latest results:

    >> MySQLTuner 1.2.0 - Major Hayden <[email protected]>
    >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
    >> Run with '--help' for additional options and output filtering

    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.5.25-1~dotdeb.0-log
    [OK] Operating on 64-bit architecture

    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in InnoDB tables: 6G (Tables: 195)
    [--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
    [!!] Total fragmented tables: 51

    -------- Security Recommendations --------------------------------------------
    [OK] All database users have passwords assigned

    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 1d 19h 17m 8s (254M q [1K qps], 5M conn, TX: 139B, RX: 32B)
    [--] Reads / Writes: 89% / 11%
    [--] Total buffers: 24.2G global + 92.2M per thread (1200 max threads)
    [!!] Maximum possible memory usage: 132.2G (139% of installed RAM)
    [OK] Slow queries: 0% (2K/254M)
    [OK] Highest usage of available connections: 32% (391/1200)
    [OK] Key buffer size / total MyISAM indexes: 128.0M/92.0K
    [OK] Key buffer hit rate: 100.0% (8B cached / 0 reads)
    [OK] Query cache efficiency: 79.9% (181M cached / 226M selects)
    [!!] Query cache prunes per day: 1033203
    [OK] Sorts requiring temporary tables: 0% (341 temp sorts / 4M sorts)
    [OK] Temporary tables created on disk: 14% (760K on disk / 5M total)
    [OK] Thread cache hit rate: 99% (676 created / 5M connections)
    [OK] Table cache hit rate: 22% (1K open / 8K opened)
    [OK] Open file limit used: 0% (49/13K)
    [OK] Table locks acquired immediately: 99% (64M immediate / 64M locks)
    [OK] InnoDB data size / buffer pool: 6.1G/19.5G

    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Reduce your overall MySQL memory footprint for system stability
        Increasing the query_cache size over 128M may reduce performance
    Variables to adjust:
      *** MySQL's maximum memory usage is dangerously high ***
      *** Add RAM before increasing MySQL buffer variables ***
        query_cache_size (> 192M) [see warning above]

    The server has 76GB of RAM and dual E5-2650s. The load is usually below 2. I'd appreciate hints on how to interpret the recommendation and optimize the database configuration.
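
    The two recommendations pull against each other because a bigger cache only delays pruning, while making every prune and every invalidation (both of which take a global mutex) more expensive -- hence the million prunes per day. A hedged sketch of the usual compromise; the values are illustrative assumptions, not numbers measured for this workload:

        [mysqld]
        query_cache_type  = 1
        query_cache_size  = 128M   # keep it at or below the point where mutex contention bites
        query_cache_limit = 2M     # stop single large result sets from evicting many small ones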

    Read the article

  • Deployment and monitoring tools for java/tomcat/linux environment

    - by Ran
    I've been a developer for many years, but I don't have tons of experience in ops, so apologies if this is a newbie question. In my company we run a web service written in Java, mainly based on a Tomcat web server. We have two datacenters with about 10 hosts each. Hosts are of several types: database, Tomcats, some offline Java processes, memcached servers. All hosts are Linux CentOS. Up until now, when releasing a new version to production, we've been using a set of in-house shell scripts that copy jars/wars and restart the Tomcats. The company has gotten bigger, so operating all of this and taking code from development through QA, staging, and on to production has become more and more difficult. A typical release often involves human errors that cost us precious uptime. Sometimes we need to revert to the last known good version, and this isn't easy, to say the least... We're looking for a tool, a framework, a solution that would provide the following: supports the given list of technologies (Java, Tomcat, Linux, etc.); provides easy deployment through the different stages, including QA and production; provides configuration management, e.g. setting server properties (the connection URL of each host, etc.), server.xml or context configuration; monitoring -- if we can get monitoring in the same package, that'll be nice, and if not, then yet another tool we can use to monitor our servers; preferably, open source with tons of documentation ;) Can anyone share their experience? Suggest a few tools? Thanks!

    Read the article

  • Are there tools available for trimming PDF margins?

    - by Charles Duffy
    I have an ebook I'm trying to read in PDF format on a Kindle. Unfortunately, the page headers and footers have some content (page number and copyright info, respectively) preventing the device from scaling the actual text to match its usable viewing area, thus leaving the actual content too small to read. Various tools are available which will trim off whitespace, but the Kindle already does this; my goal, by contrast, is to remove printed matter outside of a defined bounding box, and the only tool I've found for the purpose is moderately expensive commercial software. I could probably generate a mask in Inkscape, split out the individual pages using pdftk, apply the mask to each page individually (outputting to PostScript), and recombine the numerous PostScript files into a single PDF. However, this decode/re-encode step would be pretty unfortunate in terms of document size; something able to operate with a bit more finesse would be ideal. I have all major operating systems handy (Windows, several modern Linux distros, a Mac, etc.), so solutions don't need to be constrained by platform. Suggestions? (I've reported the issue to the author, who mentioned it to his editor, who hasn't done anything about the issue over the course of more than a month, making the zero-work approach evidently nonproductive.)
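
    One finesse-preserving approach worth sketching: rather than masking and re-rendering, stamp a CropBox on every page so readers simply ignore everything outside the box. Assuming Ghostscript is available, something along these lines -- the coordinates (in points from the bottom-left corner) are made-up examples you would tune per book:

        gs -o trimmed.pdf -sDEVICE=pdfwrite \
           -c "[/CropBox [54 72 558 720] /PAGES pdfmark" \
           -f input.pdf

    Since the page content streams are carried over rather than rasterized, the file size should stay close to the original.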

    Read the article

  • How to Monitor Network in Medium-Sized Company?

    - by Kyle Lowry
    I work at a medium-sized company (100+ employees). An issue that has been cropping up is network performance, internet access in particular. We have about 70 or more computers, a mix of Mac OS X and Windows XP & 7 machines. We have several servers (Exchange server, PC file servers, MS SQL, BlackBerry, FTP, Mac server, etc.). There are four main switches, a SonicWall firewall, and probably a couple of routers in the server room, with a dozen or so more scattered around the building. The network structure has grown organically over a number of years, and, as far as I know, there really isn't a monitoring solution in place. When we experience network issues (slow connections, dropped packets, and so on), our general solution is to power-cycle some hardware or go around to each employee and ask them if they are uploading/downloading any large files. This is really inefficient and time-consuming, and it does not allow us to monitor the network and tackle potential problems proactively. I would like to find a solution that would allow me to monitor network usage company-wide in real time, with detail going down to the individual computer, ideally. Given the hodgepodge of equipment and operating systems, what would be the best way to set up some kind of monitoring solution? Hardware, software, restructuring our network architecture?

    Read the article

  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read here in Booting Process: "The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed." My question: my hard disk holds two operating system images, Windows and Ubuntu, so I assumed both partitions in which they reside would be active. Why, then, do we always have only one active partition? (I know that the active partition is one of the primary partitions, but why does one primary partition get this special treatment?) I am a bit confused. Please help me resolve this. Thank you so much.
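
    To make the distinction concrete: the active flag exists only so the tiny stage-1 code in the MBR knows which single boot sector to jump to; a dual-boot setup like this one typically has GRUB in the MBR instead, which consults its own configuration and ignores the flag entirely. A small sketch for inspecting or moving the flag -- the device and partition numbers are assumptions for illustration:

        sudo parted /dev/sda print            # the Flags column shows which partition is marked "boot"
        sudo parted /dev/sda set 2 boot on    # mark partition 2 active...
        sudo parted /dev/sda set 1 boot off   # ...and clear the flag on the old one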

    Read the article

  • Games on Windows 8 in Boot Camp lag even on lowest graphics

    - by Jackson Gariety
    I've been playing Crysis 2 and Skyrim on my Retina MacBook Pro (10,1) for months now. The two games used to run super smoothly, even on nearly maxed-out settings. This laptop has an Nvidia GeForce GT 650M graphics card inside; it runs great. But I recently replaced my Windows 8 Consumer Preview with the retail copy, and since then, 3D games lag in an odd way, no matter what the graphics settings are. Every second, Skyrim and Crysis alternate between running smoothly and lagging. It's a cyclical lag that comes and goes like clockwork. I can turn the graphics down to 800x600 with no antialiasing and low texture quality, and it runs much smoother on the "up" part of the cycle, but every second it drops back into a lag spike. I've tried installing beta graphics drivers, reinstalling the operating system, reinstalling the Boot Camp support software, and freeing up space (I have about 20 GB free). I can't figure out what suddenly caused this, other than some obscure difference between the Consumer Preview and the retail version. What can I try? Is my video card failing? Are there some other drivers I can install? This isn't normal lag from maxing out the card, it

    Read the article

  • Truecrypt and hidden volumes

    - by user51166
    I would like to know the opinions of users who are using (or not using) the hidden volume encryption feature of TrueCrypt. Personally, until now I have never used this feature: on Windows I encrypt the system drive as a standard volume; on GNU/Linux I encrypt using LUKS, which is the equivalent of TrueCrypt's standard volume. For data I use the standard volume approach as well. I read that this feature is nice and all, but it isn't really used by most people. Do you use it or not? Why? Do you only store VERY sensitive data inside it, or what else? Because technically speaking, creating a hidden volume which has (almost) the same size as the outer one doesn't make sense: the outer volume will be encrypted but no data will be on it, which will look very strange. So not only does one have to plan which data to store where, one even has to remember each time to mount the outer volume with hidden volume protection (otherwise there will be data loss when writing to it). It's a bit messy: hidden OS + outer OS + outer volume + hidden volume = 4 partitions :( Similar question about the hidden operating system (which I don't use [yet]).

    Read the article

  • Managed LAMP platform for maximizing availability and global reach, not scalability

    - by user66819
    Assume a Linux/Apache/MySQL/PHP application for a small base of registered users. With a small userbase there are no traffic peaks, so the scalability that cloud platforms offer is not imperative. But the system is mission-critical, so availability is the primary goal. Users are also distributed across Asia, Europe, and the US, so multiple server locations that minimize users' network hops would be highly desirable. The dream: a managed VPS platform where we would configure a single server (uploading PHP and other files, manipulating the database, etc.), and the platform would automatically mirror the server in a handful of key places around the world (say one on each US coast, one in Europe, one in East Asia). File system synchronization and MySQL replication would happen automatically. The core operating system is managed, so we don't need to do full system administration and security, and low-level backups are also done by the service provider, though we do our own backups as well. Couple this with some sort of DNS geo-detection, so users are routed to the nearest operational server... with support for https, of course. Does such a dream exist? If not, what are some approaches to accomplishing the same end with minimal time investment and minimal monthly hosting costs?

    Read the article

  • Windows 7: moved system partition, need to update boot partition

    - by Actorclavilis
    So, I have a fairly standard Windows 7/Ubuntu dual-boot setup, and (since Ubuntu is my usual operating system) I found I needed to grow my Ubuntu partition and shrink my W7 partition. Originally, my system (500G) looked like this:

    W7 boot partition (1.5G)
    Ubuntu (around 240G)
    W7 (same size as Ubuntu, on an extended partition, all by itself)
    Swap (rest of disk, around 16G)

    Now I'm no stranger to partitioning and filesystem tools, especially GParted, which I used from a Linux boot disk. After my partition editing, the partitions are laid out the same, except the Ubuntu partition is now 407G and the W7 partition is smaller to compensate. I had supposed, based on http://www.gparted.org/faq.php, that I would be able to run the W7 install disk in recovery mode and have it deal with the rearrangement, then possibly reinstall GRUB or something. Well, now the W7 install disk doesn't even see my W7 installation. All my files are there and the NTFS filesystem is perfectly clean, no problems there, but the install disk won't notice it. (Of course, the GRUB entry works fine, but the W7 boot partition (which I didn't change) refuses to boot it.) So, basically, any ideas on how to fix this? I don't especially want to rerun the entire install procedure because I'll have a bunch of programs to reinstall (never mind redoing GRUB), but I fear that might be the only option. Thanks.
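
    Before a full reinstall, it may be worth rebuilding the boot store by hand, since a moved system partition mostly just leaves the BCD pointing at the old disk offsets. A sketch from the install disc's command prompt (Shift+F10 in recovery mode); the drive letters are assumptions for illustration:

        rem should now list the moved Windows installation:
        bootrec /scanos
        rem then offer to add it back to the boot store:
        bootrec /rebuildbcd
        rem if the 1.5G boot partition's BCD is still stale, regenerate it outright
        rem (assumes C: is Windows and S: is assigned to the boot partition):
        bcdboot C:\Windows /s S: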

    Read the article

  • Grub Installation Failed: Fatal Error ... now what I do?

    - by eklavya
    I know there are some threads that touch on this, but I feel I have done something uniquely stupid, hence the post and plea for help. I am a beginner at Linux. I have a PC with an HDD (hard disk drive) and an SSD (solid state drive), which was running Linux Mint:

    /dev/sda1 - HDD partition 1 - 2 TB (mounted as /home)
    /dev/sda2 - HDD partition 2 - 1 TB (separate backup drive; I was backing up files to this)
    /dev/sdb1 - SSD partition 1 - 100 GB (OS)
    /dev/sdb2 - SSD partition 2 - 20 GB (swap)

    The operating system, Linux Mint, was installed on /dev/sdb1, i.e. the solid state drive. I had partitioned sda into 2 TB and 1 TB and presented the 2 TB as /home to the OS. Anyway, last night I decided to make a return to Ubuntu via the path of elementary OS. Everything went fine with the install until it stated that GRUB installation failed and that this was a fatal error (no kidding, I said). Now I am stuck. I have definitely done something wrong and don't know what it is... My biggest worry is the files on /dev/sda2. I want to save these before I try something drastic like wiping off /dev/sda completely. So I have the following questions: Can I use a live CD/USB to save these files? (I can see /dev/sda2 but was unable to access the files from the live CD.) And last but not least, how do I fix the main issue here -- why could the OS not install GRUB? Also, why is my SSD /dev/sdb and not /dev/sda? Does it have something to do with the fact that my master boot record sits on the HDD, /dev/sda, and not on /dev/sdb?
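
    A sketch of the usual live-USB rescue, under two assumptions: the new OS actually made it onto /dev/sdb1, and the BIOS boots the HDD first (which would explain why GRUB belongs in /dev/sda's MBR):

        # from the live session: mount the installed system plus the pseudo-filesystems it needs
        sudo mount /dev/sdb1 /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        # reinstall GRUB into the MBR of the disk the BIOS boots, then rebuild its menu
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub

    The live session can also mount /dev/sda2 read-only first, to copy the backups somewhere safe before any of this (sudo should get past any permission problems on the files).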

    Read the article

  • Clean installation of RHEL 5.5 claims package "desktops" is missing

    - by TKguru42
    Hi all, I'm a student worker in the CS department of my university, so please forgive any unprofessional descriptions; simplified explanations are appreciated. I recently replaced some bad graphics cards in a few public workstations. The machines are all the same model. Before putting them back on the network I did fresh installs of RHEL: first I tried 5.4, but yum update ran into all sorts of ugly dependency errors, and if I tried to remove any of the problematic packages, the whole operating system FUBAR'd. Using RHEL 5.5 gave me the same errors during install, saying that "java.1.5.1-sun*" and "desktops" were missing, but yum update didn't have any dependency problems. Now that I've tried logging in through the GUI, I get no desktop past the standard RHEL login page. The screen is a uniform light teal and there's no system tray; an xclock window and an xterm window are open, and Firefox opens automatically, but that's it. Nothing else. What's REALLY confusing is that the computer claims that GNOME is already installed, except it clearly isn't working. Any help or advice is greatly appreciated. If it helps, our department uses kickstart to run our standard Linux installs. I can try to get the script if that would be of use. Thank you!
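
    The teal screen with xclock and xterm looks like a bare failsafe X session -- X itself is fine, but the desktop environment packages never made it on. A hedged first thing to try; the group names are the usual RHEL 5 ones, and yum grouplist will show what your media actually calls them:

        yum grouplist | less
        yum groupinstall "X Window System" "GNOME Desktop Environment"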

    Read the article

  • "Windows failed to start" loop with 0xc0000225. No install discs, EasyRE/USB iso hasn't worked

    - by mvidaure
    I've been suffering from this "Windows failed to start" loop with 0xc0000225 for 3 days now and I still can't fix it. The major problem is that I don't have any sort of installation disc. However, I have tried EasyRE via both CD and USB, but both result in the same problem: I try to perform an 'Automated Repair' on my computer and I get, in red text, "The selected partition is corrupted and could not be accessed or repaired. Please select a different drive to continue." The partition is also labeled as NO under Active. Since I do not have the installation discs, I made a USB with a Windows_7_Recovery_Disc iso (as shown here: http://www.sevenforums.com/tutorials/31541-windows-7-usb-dvd-download-tool.html), but it also doesn't work. I get a blue screen that says: "RECOVERY. Your PC needs to be repaired. The application or operating system could not be loaded because a required file is missing or contains errors... File: \WINDOWS\system32\winload.efi Error code: 0xc0000225 You'll need to use the recovery tools on your installation media. If you don't have any installation media, contact your system administrator or PC manufacturer." Thanks in advance! Miguel

    Read the article

  • Static DHCP binding

    - by Alex
    Good time of day, SF people. I have created a manual DHCP binding entry on a Cisco router so that a client would always get the same lease. The client wants to get the same address on both of his dual-boot Linux systems. He gets an IP address leased successfully on one of the dual-boot operating systems; when he reboots into the other one, he gets a lease for a completely different address. I don't get it. The MAC addresses are the same (we checked with ifconfig), so what could be happening here? Why is the router confused? Or is it something else? Also, how can I check which DHCP server a Linux client got its IP address from? Configuration on the Cisco:

    ip dhcp pool MANUAL_BINDING0001
     host 192.168.0.64 255.255.255.0
     hardware-address dead.beef.1337
     dns-server 192.168.8.11
     default-router 192.168.0.254
     domain-name verynicedomainigothere.cn

    PS. Is it mandatory to use the client-name configuration line?
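
    One hedged guess worth testing: IOS matches manual bindings against the DHCP client identifier whenever the client sends one (for Ethernet that's 01 followed by the MAC), and the two installed systems may differ in whether their DHCP client sends it. A sketch of the binding rewritten that way:

        ip dhcp pool MANUAL_BINDING0001
         host 192.168.0.64 255.255.255.0
         ! 01 (Ethernet) followed by the dead.beef.1337 MAC:
         client-identifier 01de.adbe.ef13.37
         dns-server 192.168.8.11
         default-router 192.168.0.254

    As for the second question, on a dhclient-based Linux system the lease file records which server answered (the path varies by distro):

        grep -i dhcp-server-identifier /var/lib/dhcp/dhclient*.leases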

    Read the article

  • How to "open" existing VMs in Hyper-V without importing them?

    - by Borek
    I had a PC with two physical disks: C:, containing the host operating system, and D:, containing a folder D:\VMs where all my virtual machines were stored. Now the C: disk died. I bought a new one, reinstalled Windows on it, enabled the Hyper-V feature, and now I just need to open the VMs from the D:\VMs folder. However, I don't seem to be able to find a menu item or anything that would allow me to do that -- the only thing I see is the "import" command, which unfortunately requires the VMs to have been explicitly exported (mine weren't). I firmly believe that when I have all the files constituting a VM (the VHD file, some XML files describing the settings, etc.), it must somehow be possible to just "open" these existing VMs in Hyper-V, right? What command am I missing? Edit: I know I can create a blank virtual machine and then just point it to use an existing VHD. However, I am not sure about all the different settings I've made to those VMs, so I hope there's a way to simply open the existing VMs instead of recreating them.

    Read the article

  • Fedora Core 6 Migration

    - by Matthew Sprankle
    I am at a loss as to what I should do for this server. I need it to run PHP 5.3 and a corresponding version of MySQL. I received a client today through work that is using Fedora Core 6, running 10 very small websites on a very hodge-podge setup. My original idea was just to upgrade to PHP 5.3. I have yum (version 3.0.8) reconfigured for the Fedora archive; the latest version of PHP it allows is 5.1.8. I am still relatively new to server setups and am nervous about wiping their server to upgrade it. Since it is about 6-8 years old, I'm not sure if it will even support the newest version of Fedora. The server specs are:

    Parallels Plesk Panel version 9.5.4
    Operating system: Linux 2.6.9-023stab048.4-smp
    CPU: GenuineIntel, Intel(R) Xeon(R) CPU E5335 @ 2.00GHz
    (10 GB disk space and 1 GB of memory)

    I use Fedora for my personal server, so I am a little familiar with it, but I haven't done anything too extravagant. Is there a way I can escape this nightmare by installing PHP 5.3, or do I need to migrate these sites to a new server?

    Read the article

  • Eee PC 1015BX ram compatibility?

    - by AdrianaMX
    Asus Eee PC 1015BX
    Operating system: Windows 7 Starter, 32-bit
    CPU: AMD Fusion APU C-60, 1.0GHz (dual core)
    Graphics: AMD Radeon HD 6290 (256 MB shared)
    Memory: DDR3, 1 x SO-DIMM, 1GB

    I have upgraded the preloaded "Windows 7 Starter" to "Windows 7 Professional". I want to upgrade the RAM from 1 GB (factory) to 4 GB. What should I buy: SO-DIMM DDR3, 4GB, 1066MHz, PC3-8500, 204-pin? Or SO-DIMM DDR3, 4GB, 1333MHz, PC3-10666, 204-pin? I already know that 32-bit Windows 7 can't handle 4 GB, only 3 GB (but 3 GB is better than one stick of 2 GB). ASUS sent me this link, but I think it's wrong (or at least insufficient information for me): http://www.kingston.com/us/memory/search/Default.aspx?DeviceType=3&Mfr=ASU&Line=Eee%20PC&Model=71404 Thank you.

    CPU-Z Chipset:
    Memory Type: DDR3
    Memory Size: 750 MBytes
    Memory Frequency: 532.2 MHz (3:16)
    CAS# Latency (CL): 7.0
    RAS# to CAS# Delay (tRCD): 7
    RAS# Precharge (tRP): 7
    Cycle Time (tRAS): 20
    Bank Cycle Time (tRC): 27
    Memory SPD: NO INFO

    AIDA64 North Bridge Properties:
    North bridge: AMD K14 IMC
    Supported Memory Types: DDR3-800, DDR3-1066
    Memory Slots: DRAM Slot #1, 1 GB (DDR3 SDRAM)

    Integrated Graphics Controller:
    Graphics Controller Type: AMD Radeon HD 6290 (Wrestler)
    Graphics Controller Status: Enabled
    Graphics Frame Buffer Size: 256 MB

    Read the article

  • Replacing Failing RAID 1 Drive

    - by mrduclaw
    I hope this is a simple question, but I simply don't know anything about RAID. Some time ago I received a machine that, as I understand it, has two drives in it under RAID 1 (that is, one drive is mirrored onto the other, and they appear as just one drive to the OS). Recently, one of these drives has started making a clicking noise and I would like to replace it. I believe the machine has a hardware RAID controller on the motherboard that handles the RAID, but in case it matters, the operating system is Windows XP 32-bit. Is the solution to my problem as simple as buying another drive of the same capacity and plugging it in where the clicking drive currently is? Or could I possibly lose everything if the drive that's clicking is the one being mirrored onto the other drive? Is there some menu I need to find before unhooking things? Any best practices out there? I'm sure I'm leaving out some required information, so please just tell me what I'm missing. Thanks!

    Read the article

  • Low 'Burst Rate' from SATA drive in HDTune?

    - by UpTheCreek
    I recently upgraded my laptop's very slow hard drive to a Seagate Momentus 7200. Everything is working fine, but I'm a bit confused by these benchmark results: the burst rate is significantly less than the maximum transfer rate, and not much higher than the normal minimum (if you ignore the spikes). What's going on here? The HDTune website defines Burst Rate as: "...the highest speed (in megabytes per second) at which data can be transferred from the drive interface (IDE or SCSI for example) to the operating system." Which raises some questions... e.g. if this is the highest, then how did the benchmarking tool record the 103 MB/sec maximum? And if this really is the true maximum, then where is the bottleneck? The laptop's SATA interface is on an Intel 82801GBM southbridge controller. When I check in hardware manager, I see that its driver is iaStor.sys from 2005. Maybe that's the issue? I'll look for a newer version, but any insights would be appreciated. Thanks

    UPDATE: According to this page on the HDTune website: "An important parameter of the test is the Burst Rate. This value should always be higher than the maximum transfer rate. A lower value is usually an indication of a configuration problem." So what might be the configuration problem?

    Read the article

  • Is NFS capable of preserving order of operations?

    - by JustJeff
    I have a diskless host 'A' that has a directory NFS-mounted on server 'B'. A process on A writes to two files, F1 and F2, in that directory, and a process on B monitors these files for changes. Assume that B polls for changes faster than A is expected to make them. Process A seeks to the head of the files, writes data, and flushes. Process B seeks to the head of the files and reads. Are there any guarantees about the order in which the changes performed by A will be detected at B? Specifically, if A alternately writes to one file and then the other, is it reasonable to expect that B will notice alternating changes to F1 and F2? Or could B conceivably detect a series of changes on F1 and then a series on F2? I know there are a lot of assumptions embedded in the question. For instance, I am virtually certain that, even operating on just one file, if A performs 100 operations on the file, B may see a smaller number of changes that give the same result, due to NFS caching some of the actions on A before they are communicated to B. And of course there would be issues with concurrent file access even if NFS weren't involved and both the reading and the writing process were running on the same real file system. The reason I'm even putting the question up here is that it seems like, most of the time, the setup described above does detect the changes at B in the same order they are made at A, but occasionally some events come through in transposed order. So, is it worth trying to make this work? Is there some way to tune NFS to make it work, perhaps cache settings or something? Or is fine-grained behavior like this just too much to expect from NFS?
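
    For context: NFS only promises close-to-open consistency, so strict cross-file ordering is not guaranteed, and the knobs below merely shrink the reordering window rather than close it. A sketch of a cache-hostile setup on the writer's side (option names are from the Linux nfs(5) man page; the export path is an assumption):

        # on host A (the writer): write-through instead of write-behind, no attribute caching
        mount -t nfs -o sync,noac serverB:/export /mnt/shared
        # the writer should also flush explicitly, so each write reaches B before the next
        # one starts -- e.g. open the files with O_SYNC, or call fsync(fd) after each write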

    Read the article
