Search Results

Search found 14975 results on 599 pages for 'os'.

  • what is the fastest way to copy all data to a new larger hard drive?

    - by SUPER user
    I was certain this would have been covered before, but I cannot find an answer amongst all the almost-duplicates that come up; sorry if I've missed something obvious.

    I have a full 320 GB disk inside my machine, a new 1 TB disk to replace it, and a USB 2.0 chassis. It is only data on a single partition, no OS/apps involved, and the old drive will be kept somewhere as backup (no secure wiping etc.).

    The simple option would be to put the new disk in the USB chassis, copy the files, then swap them over. But for USB pen drives, reading is around 4x faster than writing. If the same is true for a USB SATA chassis (is it?), then it would be significantly faster to swap the drives first and read from the old drive over USB, right?

    The other consideration is that copying lots of files is usually slower than a single file of equivalent size. Is Windows 7 smart enough to do everything in a single lump like that, or is there specialised software that should be used instead? (Even if SATA-to-SATA copying is faster than involving USB, knowing what to do when it isn't an option is useful information.)

    Summary:

    1. Does a USB SATA chassis suffer from a read/write inequality? (like a USB pen drive does, but unlike a direct SATA connection)
    2. Can Windows 7 do sequential access? (I can't find confirmation that Robocopy does this.)
    3. Or is it necessary to use a bootable CD/USB with something like Clonezilla to achieve sequential copy speeds?
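
    Before committing to a copy direction, the read/write question can be settled empirically with a crude benchmark against the chassis. A minimal sketch, with the drive letter and sizes as assumptions; the test file must be comfortably larger than RAM or the read figure mostly measures the OS cache:

        # benchmark_usb.py - rough sequential read/write test for a USB-attached drive.
        # Assumption: the USB drive is mounted as E: (adjust TEST_FILE to suit).
        import os, time

        TEST_FILE = r"E:\bench.tmp"   # hypothetical path on the USB drive
        CHUNK = 8 * 1024 * 1024       # 8 MiB chunks
        TOTAL = 2 * 1024 ** 3         # 2 GiB; use more than your RAM to defeat caching

        def bench_write():
            buf = os.urandom(CHUNK)
            start = time.time()
            with open(TEST_FILE, "wb") as f:
                for _ in range(TOTAL // CHUNK):
                    f.write(buf)
                f.flush()
                os.fsync(f.fileno())      # force data to the disk, not the cache
            return TOTAL / (time.time() - start) / 1e6   # MB/s

        def bench_read():
            start = time.time()
            with open(TEST_FILE, "rb") as f:
                while f.read(CHUNK):
                    pass
            return TOTAL / (time.time() - start) / 1e6

        print("write: %.1f MB/s" % bench_write())
        print("read:  %.1f MB/s" % bench_read())
        os.remove(TEST_FILE)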

  • Best photo management software?

    - by Niels Basjes
    Hi,

    What I would like is a single piece of software (or a smart combination of tools) that allows me to manage my photos better than anything I've found so far.

    1. Tags: primarily I need a way of tagging the images, so I can manually tag photos the same way we tag questions here at SO/SF/SU. I want this software to place a lot of the tags automagically (obvious things like date and resolution).
    2. Face recognition: what I would really like is a feature that recognizes faces in images and places tags with the name of the person. So far I've only heard of one online photo system that can do that (Picasa) and not yet of any offline tool.
    3. Version database: I must have some central GIT/SVN/... that contains all images. I had a hard-drive corruption a few years ago and it took me a long time to figure out which images had been damaged. I always want to be able to go back to what the camera produced.
    4. Website: I want to be able to generate a website (a few 'tag'-specific websites) based on the actual content.
    5. Easy bulk uploading: many photo tools have a one-on-one uploading option. I prefer simply 'throwing' my images onto a file server under Linux (Samba) and letting the system automagically integrate, tag, recognize, etc. all images.

    OK, I know this is asking a bit much. Perhaps you guys have some suggestions about existing tools that make this possible, or even a complete system that does this.

    EDIT: To clarify on the OS: I prefer Linux for any 'server' task and Windows XP for any 'desktop' task.

    Thanks for all your input.

    Niels Basjes
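
    On points 1 and 5, the 'automagic' tags are the easy part: date and resolution can be pulled straight from the files as they land on the server. A minimal sketch of that idea, assuming Pillow is installed and /srv/photos stands in for the Samba share; a real tool would store these tags in a database rather than print them:

        # autotag.py - a sketch, not a finished tool: scan a Samba drop directory
        # and report an automatic date/resolution tag per photo.
        import os
        from PIL import Image

        DROP_DIR = "/srv/photos"   # hypothetical Samba-exported directory
        DATETIME_TAG = 306         # EXIF tag 306 = DateTime

        def auto_tags(path):
            with Image.open(path) as img:
                resolution = "%dx%d" % img.size
                date = img.getexif().get(DATETIME_TAG)  # e.g. '2010:05:01 14:22:03'
            return {"resolution": resolution, "date": date}

        for root, _, files in os.walk(DROP_DIR):
            for name in files:
                if name.lower().endswith((".jpg", ".jpeg", ".png")):
                    path = os.path.join(root, name)
                    print(path, auto_tags(path))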

  • `:Zone.Identifier` files keep on appearing in Windows XP virtual machine

    - by Jonathan Reno
    I have a Windows XP Home Edition guest and a Linux Mint 13 host. I use VirtualBox, and the ~/Public directory is shared with the guest. It sometimes happens that I use IE on the guest system to download files (until I get a better Windows browser). All of the downloaded files go to the L:\ drive (the ~/Public directory). When they are finished downloading, Windows Explorer adds a :Zone.Identifier file for each file I download.

    When I extract a downloaded ZIP archive on the guest (on drive L:\), Windows creates a :Zone.Identifier file for every file in the extracted directory. This even occurs if I use the host to move a file to the ~/Public directory. The shared ~/Public directory is on an ext4 partition, and the colon character is supposed to be illegal in file names in Windows, but not on the ext4 partition.

    Is there any way to stop Windows from putting all this rubbish on my filesystem? (I might have to create a shell script to clean up after Windows' act.) Here is what I see in Windows Explorer:

    By the way, if I were running a Mac OS X host (where colons are illegal file name characters) this would be even more horrendous.
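
    Background: these come from Windows' Attachment Manager, which stores a Zone.Identifier marker as an NTFS alternate data stream; on an ext4 share that stream degrades into a literal "name:Zone.Identifier" file. The Attachment Manager policy "Do not preserve zone information in file attachments" may stop them at the source; until then, the cleanup script the question anticipates is short. A minimal sketch, assuming the share lives at ~/Public and runs on the Linux host:

        # clean_zone_identifier.py - delete stray ':Zone.Identifier' files
        # left behind on the VirtualBox share. Run on the Linux host.
        import os

        SHARE = os.path.expanduser("~/Public")

        removed = 0
        for root, _, files in os.walk(SHARE):
            for name in files:
                if name.endswith(":Zone.Identifier"):
                    os.remove(os.path.join(root, name))
                    removed += 1
        print("removed %d Zone.Identifier files" % removed)

    Scheduling it from cron keeps the share clean without manual runs.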

  • Silent install FirePro v4900 Driver on Windows Embedded 7 Standard

    - by Birgit_B
    I'm trying to install the drivers for a FirePro V4900 on a Windows Embedded 7 Standard 64-bit OS. I want the system to be as small as possible, so I would rather not install the whole Catalyst Control Center, but only the necessary drivers. Because the installation should be accomplished absolutely unattended, the installation process of the FirePro driver should also be done without any user interaction.

    I see two possible solutions for the problem:

    1. Install only the drivers: is it possible to solely install the necessary drivers? How would I achieve that? This solution would be the preferred one, because of the smaller footprint.
    2. Silent custom install of the provided "FirePro_8.911.3.3_VistaWin7_X32X64_135673.exe" (found at ATI FirePro™ Driver). Is there a way to do that?

    Thank you in advance for your support!

    Update: I managed to accomplish a silent installation. I extracted the contents of the above-mentioned installer file and ran $_OUTDIR\Bin64\Setup.exe -Install (there are some other parameters; just run Setup.exe /?). But I couldn't manage to install just the drivers without the Catalyst Control Center, and it seems the Control Center has some unfulfilled dependencies and so it crashes...
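
    If the step from the update has to be driven by a larger unattended process, it is easy to wrap; a sketch, with the extraction path a placeholder rather than anything AMD documents:

        # silent_firepro.py - wrap the silent driver install described in the
        # update. The path is a hypothetical extraction directory; run elevated.
        import subprocess

        SETUP = r"C:\temp\FirePro_extracted\Bin64\Setup.exe"

        rc = subprocess.call([SETUP, "-Install"])
        print("Setup.exe exited with code", rc)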

  • DNS server and fallback outside home

    - by Jens
    I have my own DNS server at home to access local names, and that is working fine. But my laptop obviously leaves the home now and then, and therefore connects to different networks outside my home, where my DNS server is not accessible. So I figured I would just add Google as the secondary DNS.

    But when I do that, I suddenly can't access my local stuff; the pages won't resolve (at home, that is, obviously). It looks like my laptop is getting a quicker response from Google's DNS or something, because it can't find anything on the addresses I use locally. If I then remove the secondary DNS and keep my own, it works fine again.

    So do I somehow need to separate which DNS servers to use on which networks? I already use separate DNS settings when I connect using my 3G modem, but when I use hotspots it seems to use the same settings regardless (at least on the train). Can it also differ for wired connections? Is there another solution?

    OS: Windows 7 Ultimate, x64

    EDIT: Currently trying this hack/fix out for the time being: http://blog.johnruiz.com/2011/12/windows-does-not-always-honor-dns-order.html
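
    As the linked post explains, Windows does not treat the DNS list as a strict primary-then-fallback order, so one workaround is to switch the configured server per network at connect time. A rough sketch of the idea; the adapter name and addresses are assumptions, and it needs an elevated prompt:

        # dns_switch.py - point DNS at the home server when it is reachable,
        # at a public resolver otherwise. A sketch, not a polished tool.
        import socket, subprocess

        ADAPTER = "Wireless Network Connection"   # assumed adapter name
        HOME_DNS = "192.168.0.3"

        def home_dns_reachable():
            try:
                s = socket.create_connection((HOME_DNS, 53), timeout=1)
                s.close()
                return True
            except OSError:
                return False

        dns = HOME_DNS if home_dns_reachable() else "8.8.8.8"
        subprocess.call(["netsh", "interface", "ip", "set", "dns",
                         ADAPTER, "static", dns])
        print("DNS for %s set to %s" % (ADAPTER, dns))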

  • how to authenticate once for multiple servers, using only apache configs?

    - by Wang
    My problem is, I have a number of prepackaged web apps (a print system, a wiki, a bug tracker, an email archive, etc.) running on different Mac OS X Leopard (soon to be SL) servers that each need to authenticate users from the internet at large. Right now every server presents an Apache basic authentication prompt, which takes a shared login, but it's apparently enough of an inconvenience to log in repeatedly that people are sending email without checking the wiki or bug tracker or archive. In the case of the bug tracker, a user might need to log in twice: once for Apache if he hasn't used any other protected service on that server, and once for the bug tracker itself so it can distinguish different people.

    Since the only common component to all these apps is Apache 2 itself, does it have any way of authenticating a user once, in some way that will be respected by other servers and various web apps? I looked at http://serverfault.com/questions/32421/how-is-session-stickiness-achieved-across-multiple-web-servers but it sounds like the answer assumes that I get to write my own web app. I also looked at Ian Bicking's blog, but it's four years old and recommends something available only for Apache 1.3, not Apache 2. (Sorry not to hyperlink the second site; apparently I need 10 reputation points.)

    Edit: Shibboleth does what I need, but I should have specified that I'm looking for a really dumb, really simple solution for in-house services that need to handle all of a dozen users, probably not more than three at a time.
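
    Short of a real SSO layer (Shibboleth, as the edit notes), the simplest Apache-only arrangement does not give true single sign-on, but it does keep one shared credential file so every server and realm accepts the identical login. A minimal httpd.conf sketch, with the shared path (an NFS mount, say) being an assumption:

        # a minimal sketch: every server's vhost points at the same htpasswd file
        <Directory "/Library/WebServer/Documents">
            AuthType Basic
            AuthName "Intranet"
            AuthUserFile /Network/shared/htpasswd
            Require valid-user
        </Directory>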

  • TFS 2012: Backup Plan Fails with empty log file

    - by Vitor
    I have a Team Foundation Server 2012 installation with Power Tools, and I defined a backup plan using the wizard found under "Database Backup Tools" in the Team Foundation Server Administration Console. I set the backup plan to do a full database backup on Sunday mornings, to another server on the network. I followed the wizard with no problems and the backup plan was set successfully. However, when the backup runs it returns Error as the result, and when I go to the log file I only get the header and no further info:

        [Info   @01:00:01.078] ====================================================================
        [Info   @01:00:01.078] Team Foundation Server Administration Log
        [Info   @01:00:01.078] Version  : 11.0.50727.1
        [Info   @01:00:01.078] DateTime : 11/25/2012 02:00:01
        [Info   @01:00:01.078] Type     : Full Backup Activity
        [Info   @01:00:01.078] User     : <backup user>
        [Info   @01:00:01.078] Machine  : <TFS Server>
        [Info   @01:00:01.078] System   : Microsoft Windows NT 6.2.9200.0 (AMD64)
        [Info   @01:00:01.078] ====================================================================

    I can imagine it's a permission problem, but I have no idea where to start... Can anyone help? Thank you for your time!

    EDIT: I'm not sure if it is related, but I logged in with "backup user" on "TFS Server" and there was a crash window open saying "TFS Power Tool Shell Extension (TfsComProviderSvr) has stopped working". The full crash log is here:

        Problem signature:
          Problem Event Name:       APPCRASH
          Application Name:         TfsComProviderSvr.exe
          Application Version:      11.0.50727.0
          Application Timestamp:    5050cd2a
          Fault Module Name:        StackHash_e8da
          Fault Module Version:     6.2.9200.16420
          Fault Module Timestamp:   505aaa82
          Exception Code:           c0000374
          Exception Offset:         PCH_72_FROM_ntdll+0x00040DA8
          OS Version:               6.2.9200.2.0.0.272.7
          Locale ID:                1043
          Additional Information 1: e8da
          Additional Information 2: e8dac447e1089515a72386afa6746972
          Additional Information 3: d903
          Additional Information 4: d9036f986c69f4492a70e4cf004fb44d

    Does it help? Thanks everyone!

  • Application for time and project management

    - by user10826
    I want to improve the way I organize my projects/tasks/schedule. What I do now is:

    - keep an Excel sheet with the names of the most important tasks/projects; I look at it at the beginning of each day and decide which ones I will focus on
    - in iCal, write down events for each day, or for a concrete time (13:00 to 14:00); I set up each day the tasks I want to accomplish, and allocate hours to them
    - use Things (Cultured Code) to keep info about tasks and projects that are not very important and not time-allocated yet (GTD name = someday)
    - use Mail on Mac and create folders for the mails I want to process, with the names of the different projects
    - save the main info for each project in FreeMind maps

    My system works well at the moment, but it is pretty complicated to use. I want to make it better, and I am looking for something with these requirements:

    - must be 100% offline accessible
    - should use as few programs/resources as possible, ideally just one program able to manage all my info
    - I can use the GTD methodology mixed with priorities, and I can allocate each task, converted to an event, on my calendar
    - I can have different daily/weekly, etc. views on a calendar to see the "big picture"
    - must run on Mac OS X Leopard
    - price does not matter, I will pay for this

    So, according to your experience, can you recommend me something like this? Thanks

  • Deciphering an IIS6 Httperr log file

    - by smackaysmith
    We have a Windows 2003 R2 SP2 server with IIS 6 that is creating a 1024 KB httperr file every minute. I can't figure out what I'm looking at. Here's a snippet (one log entry, wrapped here for readability):

        2010-03-24 13:15:05 10.53.2.35 1667 10.53.2.12 80 HTTP/1.1 PUT
        /hserver.dll?&V01|&IMAC=0080646077AB|CID=32|CN=LWT0080646077AB|ED=1|
        IP=10.53.2.35|SM=255.255.255.0|GW=10.53.2.1|SN=10.53.2.255|DM=logs.com|
        1D=10.53.2.12|2D=10.101.2.12|0D=1|AL=/usr/sbin/netxserv|AV=4.1.0.0|
        CP=VIAüEstherüprocessorüü800MHz|CPS=800|RM=190512|B1=1.18|
        PD2=1024x768x16ü@ü60Hz|IM=6.6.2-02|CI=3600|SN#=6KHDG301300|OS=23|VI=1|
        P1=24|TZO=-301|TZ=CDT|FS=128|MD=2003-04|CO=|LO=|
        AP0=BaseüSystem|NA|6.6.2-02|AP1=RapportüAgent|NA|4.1.0-3.26|
        AP2=TrueType|NA|6.8.0-3.4|AP3=WebFonts|NA|2.0.4-3.6|
        AP4=TrueTypeüFonts|NA|6.8.0-3.5|AP5=Network_login|NA|1.0.0-1.0.3|
        AP6=ScreenüSaver|NA|3.13|AP7=DMonitor|NA|1.0.0-0.4.0|
        AP8=MozillaüFirefox_15|NA|1.5.0.8-3.6|AP9=RemoteüShadow|NA|3.17|
        AP10=RemoteüDesktop|NA|1.6.0-1.0|AP11=SNMP|NA|5.1.3.1-3.13|
        AP12=LinuxüPrinting|NA|3.8.27-3.33|AP13=SSH|NA|3.8.1-3.25|
        AP14=ThinPrint|NA|6.2.87-0.2|AP15=XDMCP|NA|6.8.0-3.29|
        AP16=Ericom|NA|8.2.0-3.29|AP17=Daylightüsavingütimeüupdate|NA|1.1.0-1.0.0|
        411 - LengthRequired -

    What on earth am I looking at? There is nothing in the system or application logs. Finally, in IIS Manager, the Default Web Site label shows boxes instead of spaces. Very odd.
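
    One reading of the entry: the trailing 411 - LengthRequired is the status IIS returned, and HTTP 411 means the PUT arrived without a Content-Length header, so HTTP.sys rejected it and logged it here. The payload itself looks like a thin-client inventory beacon (MAC, IP, netmask, installed packages). A quick sketch to make entries like this readable, with the field layout inferred from the snippet above rather than from any spec:

        # decode_httperr.py - split the beacon payload from an httperr entry
        # into key/value pairs. Field layout is inferred, not authoritative.
        url = ("/hserver.dll?&V01|&IMAC=0080646077AB|CID=32|CN=LWT0080646077AB"
               "|ED=1|IP=10.53.2.35|SM=255.255.255.0|GW=10.53.2.1")  # trimmed sample

        query = url.split("?", 1)[1]
        for field in query.split("|"):
            if "=" in field:
                key, _, value = field.partition("=")
                print("%-6s %s" % (key.lstrip("&"), value))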

  • Desktop virtualization

    - by gurpal2000
    Is there currently a proper Type 1 "desktop" hypervisor, either free or not? This is just for tinkering around at home on some beefy Phenom machines. Basically, I want to be able to run, say, 2 OSes on the same PC, but without loading Windows or a heavy flavour of Linux first, and then use a hotkey to switch between them. I should get full performance out of them. So do I need something better than VMware Workstation and/or VirtualBox? I think these are "Type 2"? I already run VMware Workstation and VBox, but is there a more performant solution?

    I saw a YouTube video from Citrix where a laptop was running XP and Vista. With the touch of a hotkey they could switch between them. There was no visible underlying OS (there might be a hypervisor)?

    1. I have access to a Citrix XenDesktop 3 Enterprise Edition evaluation. I realise this isn't for desktops, but can I achieve my goal (geekiness)?
    2. If I use the free XenServer 5.5.0, how do my client PCs access Windows/Linux/whatever from the XenServer? Is it via a thin-client RDP-type application? If so, is there one for both Windows and Linux?
    3. Also, if I do use XenServer, can I use USB in either direction?
    4. What is Citrix Receiver? Can I use that for (3)? If so, is there some hotkey I can configure?
    5. Whatever client is used to access the server software (whether it be on a different server or local), can I get full OpenGL/DirectX acceleration?
    6. What about Xen? I tried the Xen LiveCD but have no clue how to configure it.

    As you can see, much confusion. Any help/pointers welcome. Cheers.

  • How to move the Windows 7 bootloader to the Windows 7 partition?

    - by pauldoo
    I recently installed Windows 7 in a triple-boot setup alongside XP and Linux. When I was finished and was in the process of restoring the bootloader for Linux, I discovered something strange about what Windows 7 had done: it had not installed a bootloader to its own partition, and had instead set up a bootloader on the pre-existing XP partition that offers a choice between 7 and XP. This behaviour has been noticed by others.

    Now my booting is slightly odd. I have GRUB on the MBR, which lets me choose between Linux and Windows. When I select Windows, GRUB boots to the XP partition, where I get the second choice between 7 and XP.

    Why doesn't the Windows 7 installer put the Windows 7 bootloader on the Windows 7 partition like all previous MS OSes? This is now going to be a real problem for me, as I want to wipe the XP partition and install something else there (probably another non-MS OS). How can I move the bootloader for Windows 7 onto the Windows 7 partition, thus making it bootable and allowing me to safely wipe the XP partition?

  • CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

    - by ewwhite
    Hello,

    I'm having a difficult time with a remote CentOS 5.5 kickstart installation on an HP ProLiant DL360 G6, in an environment where I maintain an internal CentOS yum repository. The kickstart installation and post scripts have been tested and normally work. This hardware is also common in this environment, so I do not believe it is a factor.

    Unfortunately, I'm having problems with a specific server install. The system is remote to the yum repository at a distance of 500 miles. They are connected over a private low-latency 100-megabit layer 2 connection (26 ms round-trip). I'm mounting the 10 MB CentOS 5 netinstall ISO image via an HP iLO remote console. The initial boot parameters are:

        linux ks=http://yum.abctrading.com/prop.cfg ksdevice=eth0 ip=x.x.x.x dns=x.x.x.x netmask=255.255.255.0 gateway=x.x.x.x

    I'm using the url --url http://ks.abctrading.com/5.5/os/x86_64/ method of installation. This quickly boots into the anaconda installer, pulls the kickstart config and formats the drives. The process eventually halts at the screen below, reading "Starting install process." Switching to the other virtual consoles gives the second image below. The process stalls at this point and cannot proceed with the rest of the installation.

    Running the same kickstart config locally works just fine. I've tried mounting the boot ISO from the console as well as from the iLO 2 command line pointing to a locally-hosted boot ISO via http. How can I debug this? Are there any options I've overlooked?

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image in VM use. The common knowledge is that partition-based images are faster than file-based images because of decreased overhead. It makes sense, but I've never seen any actual numbers.

    My own testing bears out a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and then benchmark that, I see no speedup at all for the direct-to-partition virtual disk. Instead, on some systems the file-based image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results.

    So, perhaps throwing out the performance justification: is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or is there some reason to go the other way around? Perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
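
    For anyone repeating the comparison, the benchmark method matters more than the backend: buffered writes mostly measure the caching the question mentions. A crude sketch using O_DIRECT inside a Linux guest, with the path and sizes as assumptions (and noting that some filesystems refuse O_DIRECT opens):

        # direct_write_bench.py - sequential write benchmark using O_DIRECT to
        # take the guest page cache out of the picture. Linux guest assumed.
        import mmap, os, time

        TARGET = "/mnt/test/bench.dat"   # assumed mount point of the disk under test
        CHUNK = 4 * 1024 * 1024          # must stay a multiple of the block size
        TOTAL = 1024 ** 3                # 1 GiB

        buf = mmap.mmap(-1, CHUNK)       # mmap gives the page-aligned buffer O_DIRECT needs
        fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
        start = time.time()
        for _ in range(TOTAL // CHUNK):
            os.write(fd, buf)
        os.fsync(fd)
        os.close(fd)
        print("%.1f MB/s" % (TOTAL / (time.time() - start) / 1e6))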

  • a disk read error occurred

    - by kellogs
    Hi,

    "A disk read error occurred" appears on screen after choosing to boot into Windows XP from GRUB.

        [root@localhost linux]# fdisk -lu

        Disk /dev/sda: 160.0 GB, 160041885696 bytes
        255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x48424841

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1              63   204214271   102107104+   7  HPFS/NTFS
        Partition 1 does not end on cylinder boundary.
        /dev/sda2       204214272   255606783    25696256   af  HFS / HFS+
        Partition 2 does not end on cylinder boundary.
        /dev/sda3       255606784   276488191    10440704    c  W95 FAT32 (LBA)
        Partition 3 does not end on cylinder boundary.
        /dev/sda4       276490179   312576704    18043263    5  Extended
        /dev/sda5   *   276490240   286709759     5109760   83  Linux
        /dev/sda6       286712118   310488254    11888068+   b  W95 FAT32
        /dev/sda7       310488318   312576704     1044193+  82  Linux swap / Solaris

    sda is a 160 GB hard disk with quite a few partitions and 3 OSes installed. I am able to boot into Linux and Mac OS fine, but not into Windows anymore. The Windows system is located on /dev/sda1. I cannot recall exactly how I used testdisk, but it once said something like "The harddisk /dev/sda (160GB / 149 GB) seems too small! (< 172GB / 157GB)".

    So far I have tried fixboot and chkdsk from a recovery console on the affected Windows partition (/dev/sda1), the pull-the-power-cord-for-15-seconds trick, reinstalling GRUB, and repairing the MFT and boot sector of the affected partition via testdisk. What should I try next? Thank you!

  • Is it possible/practical to install and run Linux on a USB flash drive?

    - by Graeme Donaldson
    I'm going to replace my old 2004-vintage desktop PC soon, and I have an idea of what I want to do; I'm just not sure if it's possible or realistic. In the time since I built the old PC it has slowly become less used as a PC and more as a file server, so I figured I'd build a small file server which could also function as a router/DHCP/DNS/whatever box.

    The idea is to base it on an Atom system. I have my eye on the Intel D510MO for the moment. This supports 2 SATA disks, and I'd prefer to dedicate those to data storage. I'd like to install Ubuntu Server or maybe Debian on an 8/16 GB USB flash drive. I have seen plenty of tutorials on how to perform an installation from a USB drive, but I can't seem to find any info on actually booting and running the OS from USB flash. Is this even possible? Is it practical?

    This box will mostly be used for:

    - Making backups of mine and my wife's notebooks via LAN. Will use SMB or NFS for this.
    - Digital media storage, which will be accessed by a Mede8er box with no storage of its own. I will most likely use NFS for this.

  • How to fix display on external Samsung SyncMaster shifted to the right when connected to MacBook Pro?

    - by joe larson
    Is there something special I need to do to be able to use external LCD displays with my new MacBook Pro? Do I need extra software, or do I possibly need a different cable?

    I'm attempting to use an external display with my MBP. I've got a "Mini DisplayPort to VGA Female Adapter for Mac" plugged into the Thunderbolt port on my MBP, which I understood should be compatible with Thunderbolt. I've tried this with three different SyncMaster models: a B2330 (21.5"), an EX2220 (22"), and a third (also around 22") whose model number I don't have, all 1920x1080 resolution; plus an additional HP monitor of similar size and resolution.

    In all four cases, the MBP recognizes the screen and chooses the correct resolution. However, the display is shifted over about 1 inch. This is true no matter which screen resolution I choose. The controls on the monitor for horizontal position don't help. Also, sometimes (especially if I drag an app over onto the second screen), the screen starts skipping left to right and showing bands of fuzz. Additionally, the monitor will periodically blink off for a moment, trying to switch from Digital to Analog and back (the SyncMaster shows text on the screen to tell you it's trying to do this). Often when it comes back from one of these blank-outs it will show OK (no skipping or fuzz) but still shifted right; then after a few seconds it will go wrong again, skipping and fuzzy.

    This photo shows the worst of it. I've added red rectangles to show the physical edge of the screen, and a yellow rectangle to show the empty space on the left of the screen. (Sorry for the awful quality and lighting!)

    Also, it's worth noting I am on Mac OS X 10.6.7, and yes, I have the 1.4 update installed.

  • How can I find files added to the system within X minutes of a specific time?

    - by Jack W-H
    I have done a fresh install of Mac OS X Mountain Lion today on a new MacBook. Because this was a new install, when I finally got round to configuring some of my own developer things, I was surprised to find some app had installed a binary into /usr/local/bin: a single binary called galileod. Interestingly, I can't find anything online about galileod. I had only installed the bare minimum of software at this point. Looking in the file columns I can see Date Modified was 9 November 2012, but Date Added to the system was today at 17:01. It's now 10:20 PM and I can't remember which software I was installing at that point. So how do I find out which other files were installed to the system within, say, 5 minutes either side of 17:01?

    EDIT: I found out what galileod was by running galileod --help: it is a binary used with Fitbit to communicate with the USB dongle. So that's the mystery solved, but it would still be interesting to know how to find files added within X minutes of a timeframe for future reference.
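
    A sketch of one approach: on Mac OS X, Python exposes a file's creation (birth) time as st_birthtime, so a small walk can list everything born inside a window. The search root and timestamp below are assumptions to adjust:

        # find_added.py - list files whose birth time falls within a window.
        # st_birthtime is available on Mac OS X; other platforms may lack it.
        import os, time
        from datetime import datetime, timedelta

        ROOT = "/usr/local"                     # assumed search root
        CENTER = datetime(2012, 11, 20, 17, 1)  # hypothetical date; the 17:01 in question
        WINDOW = timedelta(minutes=5)

        lo = (CENTER - WINDOW).timestamp()
        hi = (CENTER + WINDOW).timestamp()

        for root, _, files in os.walk(ROOT):
            for name in files:
                path = os.path.join(root, name)
                try:
                    born = os.stat(path).st_birthtime
                except (OSError, AttributeError):
                    continue
                if lo <= born <= hi:
                    print(time.ctime(born), path)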

  • Transferring 'Live' Documents to Another Computer

    - by waiwai933
    I was wondering if there is any OS/application that has some support for transferring a document to another computer without having to save, transfer and then reopen. Basically, is there a way so that if I'm working on my desktop, I can click a button (or something similar) and then have the exact state of that computer/application transferred to another? For example, if I'm writing a document, is there a way to get it to computer B without saving it, putting the file on my flash drive, and having to reopen it?

    Edit: I just realized that this is possible through the wonderful phenomenon known as cloud computing, but this is not the type of solution I'm looking for.

    Edit 2: I wanted to clarify: by 'save', I meant that I didn't want to have to save it to a special location, be that a (flash) drive or uploading to the web. Saving to the local hard drive is fine (and probably necessary, since technologies such as Bluetooth require the file to be saved somewhere). This is a bit inspired by a scene in Avatar, so I highly doubt that this actually exists... but if it does, I don't want to miss out.

  • What's the lowest cost, legal, Microsoft server stack you can assemble?

    - by McKAMEY
    Assuming that you have an app infrastructure that generally only requires:

    - ASP.NET MVC / C# / .NET
    - Database or NoSQL data store (must be accessible from C#)

    Here's the challenge to you server gods:

    - What is the least expensive configuration that will allow you to deploy to production in a way that doesn't break any licensing rules?
    - In what ways does this solution differ from the "standard" Microsoft deployment scenario?
    - Where does this solution's performance break down once the app begins to scale?

    I'm not concerned about the hardware, only the server software itself. I would love to hear about any solutions you've personally put into production, especially if they are unique alternatives. For ideas, consider some of the possible variations: a) any Microsoft server solutions where they have lowered the barrier to entry to compete with OSS, or b) any OSS alternatives to Microsoft products which perform at a similar level.

    An example of a): SQL Server 2008 Express Edition SP1 is a 100% free version of SQL Server which will scale to the needs of many smaller / early-stage applications.

    An example of b): running the Mono Framework on Linux.

    An example of differing from the "standard" stack: running Mono on Linux will require a completely different server OS familiarity; none of the Windows-based knowledge really transfers.

    An example of breaking down under scale: SQL Server Express will only scale to 1 GB of memory and 4 GB of disk storage. After that point, the application will need to move to one of the paid versions of SQL Server.

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our hosting providers (both shared and dedicated hosting) and host the websites in our office's datacentre.

    We're running 7 websites, with about 900 unique hits per day on average at the moment. We have 2 servers set aside for this: one is a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD), the other an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD).

    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS.

    b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances of the DB server and the frontend, though I do not know if this is the best way to go.

    c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (something that I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office.

    About the websites: of the 7, 4 are basic static websites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, wherein the end user can view products and order them online.

    I am trying to take this as a learning experience wherein I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.

  • SELinux blocking Samba directory listing

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share:

        [seanm]
        path = /home/seanm
        writeable = yes
        valid users = seanm, root
        read only = No

    Here's what I see on the server side:

        [seanm@server ~]$ ls -l
        -rw-r--r-- 1 seanm seanm 40 Jan  4 13:45 pangram.txt

    And yet:

        [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP
        Enter seanm's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1]
        smb: \> ls
          .                                   D        0  Fri Jan  7 10:08:55 2011
          ..                                  D        0  Fri Jan  7 07:58:31 2011

                58994 blocks of size 262144. 50356 blocks available

    This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure has any complaints about Samba. I doubt that SELinux is a problem; just in case, here are the relevant settings:

        [root@server ~]# getsebool -a | grep samba
        samba_domain_controller --> off
        samba_enable_home_dirs --> on
        samba_export_all_ro --> off
        samba_export_all_rw --> off
        samba_share_fusefs --> off
        samba_share_nfs --> off
        use_samba_home_dirs --> on
        virt_use_samba --> off

    What am I doing wrong here, and what can I do to fix it?

    Edit: SELinux probably is the problem, judging by the fact that the issue goes away when I set SELinux to "permissive" or issue setsebool -P samba_export_all_rw on, both of which are unacceptable for production environments. What the heck kind of context does a directory need to have on it for Samba users to actually get files from it? I consider rolling your own rules and/or context to be deeply sub-optimal.
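
    For the final question: the stock SELinux type for directories exported by Samba is samba_share_t, applied persistently with semanage fcontext plus restorecon rather than hand-rolled policy. A small sketch of that fix, assuming the policycoreutils Python tools are installed and run as root:

        # fix_share_context.py - label a directory samba_share_t persistently.
        # Assumes semanage (policycoreutils) is installed; run as root.
        import subprocess

        SHARE = "/home/seanm"

        subprocess.check_call(["semanage", "fcontext", "-a", "-t",
                               "samba_share_t", SHARE + "(/.*)?"])
        subprocess.check_call(["restorecon", "-Rv", SHARE])
        subprocess.check_call(["ls", "-Zd", SHARE])   # verify the new label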

  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from karmic repositories. Last week I had no problems with DNS, but this week something changed. I'm not sure what and when, and not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolving should work normally.

    /etc/hosts:

        127.0.0.1 localhost test
        127.0.1.1 desktop

    /etc/host.conf:

        order hosts,bind
        multi on

    /etc/resolv.conf:

        # Generated by NetworkManager
        search search
        # servers obtained via DHCP
        nameserver 192.168.0.3

    /etc/nsswitch.conf:

        passwd:    compat
        group:     compat
        shadow:    compat
        hosts:     files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks:  files
        protocols: db files
        services:  db files
        ethers:    db files
        rpc:       db files
        netgroup:  nis

    But in fact it is not:

        user@test ~ $ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    Pinging is OK. But:

        user@test ~ $ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking it. Any thoughts or suggestions?
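
    One diagnostic wrinkle worth knowing: host(1) queries the DNS server from resolv.conf directly and never consults /etc/hosts, so its answer above says nothing about the NSS path that ping and ordinary applications use (getent hosts test exercises the real path). A tiny sketch of the same check from Python, which also resolves through glibc/NSS:

        # nss_check.py - resolve names through the system resolver (glibc/NSS),
        # the same path ping and browsers use, unlike host/dig which query
        # the DNS server directly.
        import socket

        for name in ("test", "desktop"):
            try:
                print(name, "->", socket.gethostbyname(name))
            except socket.gaierror as e:
                print(name, "->", e)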

  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr was immediately recognizable as such, and distinguishable from the data going to stdout, and to have all the output together so it is obvious what the sequence of events is. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating.

    Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? (If it matters, I'm using Bash 3.2 on Mac OS X.)

    Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:

        #!/usr/bin/env python
        import sys
        print "this is stdout"
        print >> sys.stderr, "this is stderr"
        print "this is stdout again"

    In my testing (and probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdin and stderr lines, and then put the output from stderr last.

    Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
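
    For completeness, a rough sketch of such a redirector in Python, in the same spirit as the test script above: it interleaves the two streams as well as pipe buffering allows, paints stderr red with ANSI escapes, and prints the exit status at the end.

        #!/usr/bin/env python
        # redwrap.py - run a command, print its stdout normally and its stderr
        # in red, then report the exit code. Unix-only (select on pipes).
        # Usage: python redwrap.py ./test.py
        import os, select, subprocess, sys

        RED, RESET = "\033[31m", "\033[0m"

        proc = subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        fds = {proc.stdout.fileno(): "out", proc.stderr.fileno(): "err"}

        while fds:
            ready, _, _ = select.select(list(fds), [], [])
            for fd in ready:
                chunk = os.read(fd, 4096)
                if not chunk:              # EOF on this pipe
                    del fds[fd]
                    continue
                text = chunk.decode("utf-8", "replace")
                if fds[fd] == "err":
                    text = RED + text + RESET
                sys.stdout.write(text)
                sys.stdout.flush()

        print("exit code: %d" % proc.wait())

    Note the interleaving is still at the mercy of the child's buffering, which matches the behaviour observed with rse and hilite.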

  • Mac Mini's internet very slow, every other device fine (PC, iPhone, Xbox 360)

    - by alex
    I haven't used my Mac Mini for about 5 days recently (however, it was left on). I can connect and get great download/upload speeds from my PC, Xbox 360, iPhone and parents' laptop. However, my Mac Mini is very slow. OS X's Mail.app is downloading mail at 0.4 kbps and then dropping to 0. Skype file transfers are doing the same. Browsing the net is a terrible experience: it is taking 30 seconds or more to download basic pages.

    All of my devices connect wirelessly to a Netgear router/modem. I have tried giving the Mac Mini a manual IP, renewing the DHCP lease, and flushing DNS in Terminal. I have also rebooted the router/modem twice, and the Mac Mini twice. Do you know what could be causing this? Thanks

    Update: This is very weird. It is also very slow accessing localhost (set up through MAMP) and also slow to access the Netgear router config pages.

  • Upgrade an Ubuntu 8.04 installation with VMware Server 1.0.8 and lots of guest OSes to Something Else

    - by Glyph
    I have an Ubuntu 8.04 (Hardy Heron) host machine which is running a whole slew of virtual machines in VMware Server 1.0.8. Among other guest OSes, there is every release version of Ubuntu since 6.06, OpenSolaris 2009.06, and Windows XP. Right now I access these VMs from a variety of client OSes as well: Linux and Windows via the VMware Server console, and Mac OS via X-forwarding the host machine's server console.

    I'd like to upgrade the host to Ubuntu 10.04 (Lucid Lynx), but from what I can tell, getting VMware Server 1.x to work on a more recent version of Linux is a real pain. While VMware Server 2.x is a bit easier, it's still not packaged as Debian packages, so installing security updates is a big chore. As long as I'm upgrading anyway, I'd like to move to a virtualization solution that will allow me to automate applying updates. The options that I'm aware of right now are KVM (managed via virt-manager) and VirtualBox (as managed by its own tools or via its own libvirt bindings), but I'm open to other suggestions. For each option, I'd like to know:

    1. How do I convert my guest images to the new format?
    2. Am I going to have to re-activate my Windows guests? (Alternatively: if the virtual hardware is different by default, can I avoid re-activation by changing some virtualization configuration to provide me with more similar virtual hardware?)
    3. What are the management options like for each client OS (Mac, Linux, Windows)?

    Thanks.
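
    On question 1, for the KVM route: qemu-img reads VMware's vmdk format directly, so conversion is scriptable. A hedged sketch (the directory paths are assumptions, and split or snapshot-laden vmdks may need consolidating in VMware first); VirtualBox can likewise attach vmdk files as-is or convert them with VBoxManage clonehd:

        # convert_vmdks.py - batch-convert VMware disks to qcow2 for KVM.
        # Assumes qemu-img is installed and each VM uses a single vmdk.
        import glob, os, subprocess

        SRC = "/var/lib/vmware/Virtual Machines"   # assumed VMware VM directory
        DST = "/var/lib/libvirt/images"

        for vmdk in glob.glob(os.path.join(SRC, "*", "*.vmdk")):
            name = os.path.splitext(os.path.basename(vmdk))[0]
            out = os.path.join(DST, name + ".qcow2")
            print("converting", vmdk, "->", out)
            subprocess.check_call(["qemu-img", "convert", "-f", "vmdk",
                                   "-O", "qcow2", vmdk, out])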
