Search Results

Search found 18191 results on 728 pages for 'single board'.

  • Forwarding HTTP Request with Direct Server Return

    - by Daniel Crabtree
    I have servers spread across several data centers, each storing different files. I want users to be able to access the files on all servers through a single domain and have the individual servers return the files directly to the users. The following shows a simple example:

    1) The user's browser requests http://www.example.com/files/file1.zip
    2) The request goes to server A, based on the DNS A record for example.com.
    3) Server A analyzes the request and works out that /files/file1.zip is stored on server B.
    4) Server A forwards the request to server B.
    5) Server B returns file1.zip directly to the user without going through server A.

    Note: steps 4 and 5 must be transparent to the user and cannot involve sending a redirect to the user, as that would violate the requirement of a single domain. From my research, what I want to achieve is called "Direct Server Return" and it is a common setup for load balancing. It is also sometimes called a half reverse proxy. For step 4, it sounds like I need to do MAC address translation and then pass the request back onto the network; for servers outside the network of server A, tunneling will be required. For step 5, I simply need to configure server B as per the real servers in a load balancing setup. Namely, server B should have server A's IP address on the loopback interface and it should not answer any ARP requests for that IP address. My problem is how to actually achieve step 4. I have found plenty of hardware and software that can do this for simple load balancing at layer 4, but these solutions fall short and cannot handle the kind of custom routing I require. It seems like I will need to roll my own solution. Ideally, I would like to do the routing / forwarding at the web server level, i.e. in PHP or C# / ASP.NET. However, I am open to doing it at a lower level such as Apache or IIS, or at an even lower level, i.e. a custom proxy service in front of everything. Thanks.
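    For the mechanics of step 4, Linux's IP Virtual Server (LVS) implements exactly this MAC-rewriting forwarding. A minimal sketch with ipvsadm in direct-routing mode follows; the addresses are made up, and note that stock LVS balances by scheduler rather than inspecting URLs, so it demonstrates the forwarding mechanism but not the per-file routing logic being asked for:

        # On server A (the director): accept traffic for the shared IP
        # and forward it at layer 2 to server B (-g = gatewaying/DR mode).
        ipvsadm -A -t 203.0.113.10:80 -s rr
        ipvsadm -a -t 203.0.113.10:80 -r 203.0.113.20:80 -g

        # On server B (the real server): hold the shared IP on loopback
        # and refuse to ARP for it, as the question already describes.
        ip addr add 203.0.113.10/32 dev lo
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2

    Tunneling mode (-i instead of -g) is the LVS analogue for real servers outside server A's layer-2 network.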

    Read the article

  • How would I setup iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user with several aliases that all point to a single account (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to mangle the sender information once the mail reaches my GMail account and show messages as coming from me instead of the original senders (can anyone confirm that this is accurate?). An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. That leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct workaround to copy a user's incoming mail to an outside server and then delete the local copy w/o removing their account entirely? Or is this just wishful thinking? Thanks in advance. Scott

    Read the article

  • sniffing on a switched LAN

    - by shodanex
    Hi, I often find myself needing to sniff the connection between, for example, an ARM board I am developing on and another computer on the network, or a host outside the network. The easy case is when I can install a sniffer on the computer talking to the embedded device. When that is not possible, I currently insert an old 10 Mb/s hub. However, I am afraid my hub might stop working, and I would like to know some alternatives. Here are the alternatives I could think of:

    - Buy another hub. Is that still possible?
    - Use some sort of Ethernet sniffing bridge, like what they do for USB. I am afraid this kind of device is expensive.
    - Use ARP poisoning.
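    If the ARP-poisoning option is pursued, a minimal sketch with dsniff's arpspoof and tcpdump follows; the interface and addresses are made up, and this should only be done on a network you own:

        # Tell the board (192.168.1.50) that we are the gateway, and the
        # gateway (192.168.1.1) that we are the board, so both directions
        # of the conversation flow through this machine:
        arpspoof -i eth0 -t 192.168.1.50 192.168.1.1 &
        arpspoof -i eth0 -t 192.168.1.1 192.168.1.50 &

        # Let the kernel forward the intercepted packets so the link keeps working:
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # Capture the redirected traffic:
        tcpdump -i eth0 -w board.pcap host 192.168.1.50

    A managed switch with port mirroring (SPAN) is the other standard substitute for a hub.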

    Read the article

  • How to use Timer broadcast on Multi-Processor system with linux 3.10?

    - by kevin.ji
    Hardware: ARM Cortex-A9 x 2. Software: linux-3.10.0. The platform has two Cortex-A9 cores, and CONFIG_LOCAL_TIMERS is not set in the Linux menuconfig: I want to use only one hardware timer to supply the tick for all CPUs. Interrupts look like this:

               CPU0       CPU1
     57:       6697          0   GIC  timer
     81:        213          0   GIC  uart-pl011
    103:          0          0   GIC  gmac0
    104:          0          0   GIC  gmac1
    IPI0:         0          1   CPU wakeup interrupts
    IPI1:         0          0   Timer broadcast interrupts
    IPI2:       967        866   Rescheduling interrupts
    IPI3:         0          0   Function call interrupts
    IPI4:         1          2   Single function call interrupts
    IPI5:         0          0   CPU stop interrupts
    IPI6:         0          0   CPU backtrace
    Err:          0

    The Timer broadcast interrupts counter does not increase, and it looks like CPU1 does not work at all. But this method works well with linux-3.4, where the interrupt info looks like this:

    # cat /proc/interrupts
               CPU0       CPU1
     57:       8596          0   GIC  timer
     81:         91          0   GIC  uart-pl011
    103:          0          0   GIC  gmac0
    104:          0          0   GIC  gmac1
    IPI0:         0       8560   Timer broadcast interrupts
    IPI1:       884       1020   Rescheduling interrupts
    IPI2:         0          0   Function call interrupts
    IPI3:         0          6   Single function call interrupts
    IPI4:         0          0   CPU stop interrupts
    IPI5:         0          0   CPU backtrace
    Err:          0

    Here the count of Timer broadcast interrupts is increasing, and all CPUs work well. I don't know why. Any answer is welcome. :)

    Read the article

  • Are there pitfalls to using incompatible RAM (frequencies) in motherboards?

    - by osij2is
    I'd like to use 2 x 4 GB DDR3-1600 DIMMs in a motherboard capable of only DDR3-1066. The DDR3-1600 is on sale and costs the same as the 1066 DIMMs, and it'd be nice to have the faster sticks around should I upgrade the motherboard. I assume the RAM can underclock itself or be changed in the BIOS. While it's obviously a less-than-ideal situation, I don't know if there are other unintended consequences in terms of stability, performance, and longevity of the board and said RAM. Am I doing any damage to the memory controller or RAM? I've always bought RAM at the max speed specified for the motherboard and never gone over, so I'm not sure if there are any caveats to this at all. Edit: I intend to use the RAM in pairs. I know that mixing RAM speeds is just a bad idea.

    Read the article

  • Unable to access shared drives in Win7

    - by colin
    OK, this is doing my head in. Not your usual Win7 sharing problem. I just built a Win7 computer to go onto my home LAN and be accessed by everyone on the network, mostly machines running XP SP3. I installed it as a single HD+DVD system, got it happy, then added my storage drives, set them up in the right order and letters, rebooted, and shared them. I couldn't access the computer at all from any XP machine. I set the password-protected sharing option in Win7 to NO, and now I can "see" all four shared drives from any machine, but can access only two of them. Please tell me what's going on, as I've done nothing different from one drive to the other: just installed them, set up drive letters as normal, then shared the damn things. The odd thing about it? The two 1.5 TB drives that have a lot of data on board are accessible; the two nearly empty 500 GB ones are not. ANY ideas? Cheers, Colin
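    A hedged starting point for comparing the working and broken drives, from an elevated command prompt on the Win7 box (the share name below is made up):

        rem List all shares, then inspect one share's own permissions:
        net share
        net share Storage1

        rem Compare NTFS permissions on a working vs. a broken drive root;
        rem the XP machines' users (or Everyone) need at least read access
        rem at both the share level and the NTFS level:
        icacls E:\
        icacls F:\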

    Read the article

  • Fix for OpenSolaris with no gcc vs. Nexenta with no ext3

    - by Jake Wharton
    I'm attempting to migrate my server from Linux to a Solaris variant during a hardware upgrade. The machine is based around an Abit AN-M2 board, which has an NForce chipset. I have what seems to be a chicken-and-egg problem: OpenSolaris 2009.06 does not recognize the NIC, and I cannot compile the drivers for it as it also lacks gcc. I haven't tested whether I can mount an ext3 partition yet, but it's moot if there is no networking. Nexenta 3.0b3 recognizes the NIC, but I cannot get the ext3 drives mounted due to FSWfspart refusing to install. I do not know much about Solaris, but I wager this is due to the fact that Nexenta is based around Debian. While I am reusing the mobo/CPU combo, I did just spend a lot of money on the other hardware around it and would very much like to get it up and running smoothly and quickly. Does anyone have any suggestions that are not:

    - Get a new mobo/CPU
    - Run another OS
    - Use an alternate NIC

    Read the article

  • Certain Japanese characters aren't displayed properly

    - by Nisto
    On the following site: http://www.nciku.com/search/radical the first 2 characters on the second row of the "Step 2" table aren't displayed properly. All other characters look fine. I tried re-installing the Asian fonts via the checkboxes in the "Regional and Language Options" control panel applet. I have tried removing every single font from the Fonts folder (some were of course not possible to remove) and re-installing them all again. I did this by:

    1. Running cmd
    2. Closing down the explorer process
    3. In cmd, using the command DEL /F /S /Q * in the Fonts folder
    4. Putting in my XP SP3 retail disc
    5. In cmd, using expand -r *.tt_ in the I386 folder on the XP disc (and on any other font file, in the I386\LANG folder)

    I also tried installing this pack from Microsoft, but this solved nothing either. I even tried running my browser (Firefox) through AppLocale, and changing character encoding; again, that does not help. I've also tried viewing the page in Internet Explorer. What could be wrong? I have checked my Fonts folder to make sure that every single font available on the XP disc is available in WINDOWS\Fonts. What shows in the first square on the second row I can't really tell, but it's not the proper character; the second square shows a rectangular symbol containing hex code. I've been in this situation before, and it has been when I've been missing fonts. But how could I possibly be missing a necessary font? Shouldn't it be provided in the Asian "font packages"? I've talked to some other users who have viewed the page, and they had no problems displaying the characters on the second row, even though they're only using the fonts provided on the Windows installation disc.

    Windows XP Professional Service Pack 3 (x86, with latest updates)
    Firefox 3.6.15

    Read the article

  • System fan connection on Mini-ITX server case and motherboard

    - by Robert
    The case: Newegg listing for Chenbro ES34169-BK-120. The motherboard: GIGABYTE GA-D525TUD (Newegg has it as well, but I can't post more than one link due to being a new user). As for the question: I have no idea how to get the system fans connected to the motherboard. There are two connectors which then go to extension fan cables. So when you connect these, it should be good, but the extension cables end in two connectors that go to the system fans and a female Molex connector (if you click the Chenbro listing and look at the 6th picture, you'll see the two connections being made and the Molex connector just kind of hanging out in the case). Is there a piece I'm missing to get it to connect to the 3-pin SYS_FAN header on the board? Thanks,

    Read the article

  • Ubuntu 9.10 Only Sees 244 MB RAM, While BIOS and Windows See 1.5 GB

    - by nicorellius
    I have 1.5 GB of RAM installed on an older Dell, a Pentium 4. I just installed Ubuntu 9.10 and the system is only seeing 244 MB of RAM, even though there is 1.5 GB on the system. The BIOS sees all of it. I ran a Knoppix disc and it only saw 25 MB upon booting. I made no particular changes to the installation that would affect this. I looked through the BIOS and the only setting I could see was the AGP aperture; not even sure what that is. Anyone know where I went wrong? I also tried moving the memory modules around on the board. Booted with the 1 GB stick; still saw 244 MB.
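    A few hedged checks from a terminal before suspecting hardware, using standard Linux tools:

        free -m               # how much memory the kernel actually registered
        dmesg | grep -i e820  # the BIOS memory map exactly as the kernel saw it
        cat /proc/cmdline     # a stray mem= boot parameter here would cap usable RAM

    If the e820 map already stops well short of 1.5 GB, the kernel never saw the memory and the problem sits below the OS, which would also explain Knoppix seeing the same thing.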

    Read the article

  • mysql INNODB inserts very slow

    - by 133794m3r
    The database's schema is as follows:

    CREATE TABLE `items` (
      `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
      `name` varchar(45) NOT NULL,
      `main_type` tinyint(4) NOT NULL,
      `rarity` tinyint(4) NOT NULL,
      `stack_size` smallint(6) NOT NULL,
      `sub_type` tinyint(4) NOT NULL,
      `cost` mediumint(8) unsigned NOT NULL,
      `ilvl` smallint(6) unsigned NOT NULL DEFAULT '0',
      `flavor_text` varchar(250) NOT NULL,
      `rlvl` tinyint(3) unsigned NOT NULL,
      `final` tinyint(4) NOT NULL DEFAULT '0',
      PRIMARY KEY (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=ascii;

    Now, doing an insert on this table takes 0.22 seconds. I don't know why it's taking so long to do a single-row insert. Reads are really fast, something like 0.005 seconds. Using the example configuration from the MySQL dev docs for InnoDB, it averages ~0.002 to ~0.005 seconds, so why it takes more than 100x that time to do a single insert makes no sense to me. My computer is as follows: OS: Debian Sid x86-64, MySQL 5.1, RAM: 4 GB DDR2, CPU: 2.0 GHz dual core, HDD: 7200 RPM with 32 MB cache, 640 GB. Why an INSERT INTO items ...; takes almost 100x as much time as a SELECT * FROM items; will never make any sense to me. It's still a small table at only 70 rows, and it took that long even when it had 0 rows.
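    One hedged experiment: if innodb_flush_log_at_trx_commit is at its default of 1, every autocommitted INSERT forces a log flush to disk, which on a 7200 RPM drive without a battery-backed cache can dominate the statement time. A sketch (the column values are made up):

        -- Batch inserts into one transaction so they share a single log flush:
        START TRANSACTION;
        INSERT INTO items (name, main_type, rarity, stack_size, sub_type,
                           cost, ilvl, flavor_text, rlvl, final)
        VALUES ('test item', 1, 1, 20, 1, 100, 0, '', 1, 0);
        -- ...more INSERTs here...
        COMMIT;

        -- Or flush roughly once per second instead of at every commit
        -- (risks losing about a second of transactions on a crash):
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;

    That said, 0.22 s is long even for a synchronous flush, so it is worth verifying the variable's value before assuming this is the cause.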

    Read the article

  • Should I disable write caching on my Windows 2008 VM?

    - by javano
    I have a Windows Server 2008 x64 Standard virtual machine that runs on a machine with a hardware RAID controller, a PERC 6/i, which has an on-board battery. Doing everything I can for additional performance, I think I should disable this. Is this very dangerous though? My understanding is that battery-backed write caching gives a performance boost to the host OS by telling it a write is complete while the data is still sitting in the controller's cache waiting to be written. I can't see how it would be detrimental to performance, but is there a gain (even if marginal) to enabling or disabling it? P.S. The machine has backup power. Here is a screenshot for clarification:

    Read the article

  • Overriding some DNS entries in BIND for internal networks

    - by Remy Blank
    I have an internal network with a DNS server running BIND, connected to the internet through a single gateway. My domain "example.com" is managed by an external DNS provider. Some of the entries in that domain, say "host1.example.com" and "host2.example.com", as well as the top-level entry "example.com", point to the public IP address of the gateway. I would like hosts located on the internal network to resolve "host1.example.com", "host2.example.com" and "example.com" to internal IP addresses instead of that of the gateway. Other hosts like "otherhost.example.com" should still be resolved by the external DNS provider. I have succeeded in doing that for the host1 and host2 entries, by defining two single-entry zones in BIND for "host1.example.com" and "host2.example.com". However, if I add a zone for "example.com", all queries for that domain are resolved by my local DNS server, and e.g. querying "otherhost.example.com" results in an error. Is it possible to configure BIND to override only some entries of a domain, and to resolve the rest recursively?
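    A minimal sketch of the single-entry zones the asker already has working, assuming BIND 9; the internal address, nameserver name, and file names are made up:

        // named.conf: authoritative only for the two overridden names;
        // everything else under example.com still resolves recursively
        // via the external provider.
        zone "host1.example.com" { type master; file "db.host1"; };
        zone "host2.example.com" { type master; file "db.host2"; };

        ; db.host1
        $TTL 3600
        @   IN  SOA  ns1.internal.example.com. admin.example.com. (
                     1 3600 900 604800 3600 )
            IN  NS   ns1.internal.example.com.
            IN  A    192.168.1.10

    The catch hit here is structural: declaring a zone for "example.com" itself makes this server authoritative for the entire domain, so anything not listed (like otherhost) stops resolving. For the apex entry, the zone either needs copies of all the public records, or a feature built for per-name overrides such as response policy zones in newer BIND releases.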

    Read the article

  • Minimalistic flatfile "wall" software with authentication and RSS?

    - by Nicolas Raoul
    I am looking for open-source, minimalistic "message board" PHP software. Not a forum; more something like one simple Facebook wall. The only thing a user can do is post a new message. With RSS, and able to run on flat files (no database) with Apache+PHP. Authentication based on a configuration file; no management UI needed. For now I use this software, but it lacks RSS: http://nrw.free.fr/data/projects/pano/demo/index.php?pano=ifc Does anyone know of software that matches my description? Thanks! Usage: communication between my family's 5 members living on different continents.

    Read the article

  • LDAP, Active Directory and bears, oh my!

    - by Tim Post
    What I have:

    - Workstations running Ubuntu Jaunty, mounting /home from a remote NFS server. User accounts are still created locally on each individual workstation.
    - Workstations running Windows XP / Vista
    - The NFS server (as noted above)
    - A Windows 2008 server
    - All machines share a single private network (LAN).

    What I need to accomplish: a single, intuitive (GUI-driven) place for an office administrator to create user accounts. This should let anyone log in to their (Linux or Windows) workstation, then fire up remote desktop and use the same login on the Windows 2008 server, from any machine on the network. I have read so much on Samba, LDAP vs. AD, etc., and now I'm even more confused than I was before I began researching the problem. Ideally, Linux and Windows users should be able to get to their local files once logged into the Win2008 server. I am a programmer, not an interoperability guru, and I'm completely lost on where to even start trying to accomplish this, plus I've run out of things to Google. How would you do this? Is it even possible?

    Read the article

  • Mac OS X duplex printing problem: one- vs. multi-paged documents

    - by Christian Lindig
    I like to print on pre-printed stationery using Preview.app and a duplex-capable HP Color LaserJet 4700 (PostScript) printer. The print dialog handles one- and two-page documents differently: the paper needs to be placed differently into the tray for a one-page document than for a two-page document. This is not obvious when printing on plain paper but becomes obvious when the front and reverse sides of the sheets are marked; otherwise, the first page would end up on the reverse side of the first sheet. I believe the problem is caused by the printer driver setting duplex printing to false (using the PostScript setpagedevice operator) when emitting a single-page document, versus keeping it set to true when emitting multi-page documents. All this despite duplex printing always being specified in the print dialog. When printing a single-sided document, duplex=true and duplex=false seem to make a difference with respect to which side of a sheet gets printed on. It would also be helpful if others could confirm the problem actually exists; I suspect it is not limited to specific printers. I'm on OS X 10.6 and I checked two different HP printers.
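    For reference, the suspected difference amounts to a one-line change in the emitted PostScript; a hedged illustration using the standard Level 2 page device keys (whether the 4700's driver emits exactly this is an assumption):

        %!PS
        % What the driver presumably emits for a multi-page job:
        << /Duplex true /Tumble false >> setpagedevice

        % ...and for a single-page job; on some duplex engines this
        % changes which side of the sheet receives the image:
        << /Duplex false >> setpagedevice

    Capturing the job as a file (Save as PostScript in the print dialog's PDF menu, or examining the spool file) would confirm which of the two the driver actually sends.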

    Read the article

  • nginx config woes for multiple subdomains & domains

    - by Peter Hanneman
    I'm finally moving away from Apache, and I've got the latest development version of nginx running on a fully updated Ubuntu 10.04 VPS. I've got a single dedicated IP for the box (1.2.3.4), but I've got two separate domains pointing to the server: www.example1.com and www.example2.net. I would like to map the following relationships between URLs and document roots in the config:

    www.example1.com / example1.com -> /var/www/pub/example1.com/
    subdomain.example1.com -> /var/www/dev/subdomain/example1.com/
    www.example2.net / example2.net -> /var/www/pub/example2.net/
    subdomain.example2.net -> /var/www/dev/subdomain/example2.net/

    Here the name of the requested subdomain is a folder under /var/www/dev/. Ideally, a request for a non-existent subdomain (no matching folder found) would result in a rewrite to the public site (e.g. invalid.example1.com -> www.example1.com); however, a mere "404 Not Found" wouldn't be the worst thing in the world. It would also be nice if I didn't need to modify the config every time I mkdir a new subdomain folder, and even better if I don't need to edit it for a new domain either... but now I'm getting greedy. :p Although in my defense, Apache did all of this with a single directive. Does anyone know how I can efficiently mimic this behavior in nginx? Thanks in advance, Peter Hanneman
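    A minimal sketch of one way to express this, assuming nginx built with PCRE so server_name can use named regex captures (the fallback redirect is one interpretation of "rewrite to the public site"):

        # Public sites: bare domain and www for both domains.
        server {
            listen 80;
            server_name ~^(?:www\.)?(?<domain>example1\.com|example2\.net)$;
            root /var/www/pub/$domain;
        }

        # Every other subdomain: serve /var/www/dev/<sub>/<domain> if that
        # folder exists, otherwise bounce to the public site.
        server {
            listen 80;
            server_name ~^(?<sub>.+)\.(?<domain>example1\.com|example2\.net)$;
            root /var/www/dev/$sub/$domain;
            if (!-d /var/www/dev/$sub/$domain) {
                return 301 http://www.$domain$request_uri;
            }
        }

    New subdomain folders then need no config edits; a new domain only requires extending the two alternations.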

    Read the article

  • I want to upgrade my GPU to MSI NVIDIA N630GT-MD4GD3 4 GB DDR3 Graphic card

    - by jatin singh
    Hello everybody, this is my first post here. I want to know my motherboard's PCI Express version. As I don't have 10 reputation, I am providing an image link instead: http://i.stack.imgur.com/6PWV9.png I have a PCI Express slot and I want to upgrade to an MSI NVIDIA N630GT-MD4GD3 4 GB DDR3 graphics card, which uses PCI Express 2.0. Here in my city, the shopkeeper said that we have to check whether my board's PCI Express slot is compatible with that GPU, but I want to purchase it from Flipkart because they sell it for less money. (Sorry for bad English :/) I mailed my computer's company (Acer) but they didn't reply to any of my mails, so my friend told me about Super User.

    Read the article

  • Why does the EFI shell not detect my Windows DVD?

    - by Oliver Salzburg
    I'm currently looking into (U)EFI for the first time and am already really confused. I insert the Windows Server 2008 R2 Enterprise disc into the DVD-ROM and boot into the EFI shell. The shell automatically lists all detected devices, which are:

    blk0 :CDRom - Alias (null)
         Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)/CDROM(Entry0)
    blk1 :BlockDevice - Alias (null)
         Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)

    To my understanding, it should have already detected the filesystem on blk0 and mounted it as fs0. Why is that not happening? If I insert a USB drive, it gets mounted just fine. The board is an Intel S5520HC, in case that makes a difference.
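    Two hedged things to try at the prompt, using standard EFI shell commands (whether this shell ships a filesystem driver that binds to the DVD is exactly the open question):

        Shell> map -r        # rescan all devices and rebuild the mapping table
        Shell> reconnect -r  # re-bind all drivers to all devices, then:
        Shell> map -r
        Shell> fs0:          # if a filesystem mapping appeared, switch to it

    If fs0 never appears for the disc while USB drives work, the shell is likely missing, or failing to bind, a driver for that ATAPI/El Torito path rather than mis-detecting the media.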

    Read the article

  • What is the best way to create a failover cluster for my IIS website?

    - by ObligatoryMoniker
    Our eCommerce website www.tervis.com currently runs on two servers:

    - SQL server: SQL Server 2005 x86 on Windows Server 2003 Standard x86, with a single dual-core processor and 4 GB of memory
    - IIS server: Windows Server 2008 Web Edition x64, with dual quad-core hyper-threaded processors and 32 GB of memory

    Tervis.com's revenue has steadily grown to the point where we need redundant servers deployed with a failover mechanism so that we do not have any downtime. Because the SQL server is so underpowered compared to the web server, my thought was to purchase:

    - 2 x SQL Server 2008 R2 Web Edition x64 single-processor licenses
    - 2 x Windows Server 2008 R2 Web Edition licenses
    - 1 x new physical dual quad-core 32 GB server
    - 1 x F5 load balancer

    I need the Windows Server 2008 R2 Web Edition licenses so that I can run SQL and IIS on the same box for both of these servers. The thought is to run this as an active/passive failover cluster that could be upgraded to an active/active cluster if we purchased the additional SQL licensing. The F5 load balancer would serve as the device that monitors the two servers and, if the currently active one stops responding, fails over to the other server. To be clear, this is not Windows clustering, but simply using a load balancer to fail over between two computers so that you have a cluster in the general sense. Is this really the best way to accomplish what I need? Is there some way to leverage the old Server 2003 SQL server to function as the device that funnels HTTP requests to the appropriate active server and then fails over if a problem occurs? Is there any third-party clustering software that might help me accomplish this in a simpler fashion?

    Read the article

  • Finding out if an IP address is static or dynamic?

    - by Joshua
    I run a large bulletin board and I get spammers every now and again. My moderation team does a good job filtering them out but every time I IP ban them they seem to come back (I'm pretty sure it's the same person on some occasions, as the post patterns are exactly the same as are the usernames) but I'm afraid to ban them by IP address every time. If they are on a dynamic IP address, I could be banning innocent users later down the line when they try to get to my forum through SERPs, but if I ban only via static IPs I know that I'm only banning that one person. So, is there a way to properly determine if an IP address is static or dynamic? Thanks.
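    There is no flag in the address itself, but a rough heuristic is to inspect the reverse DNS and the allocation's WHOIS record. A sketch with standard tools (the address is made up):

        # PTR record: dynamic pools often embed the IP plus strings like
        # "dyn", "pool", "dhcp", or "ppp" in the hostname:
        dig -x 203.0.113.45 +short
        #   e.g. 45-113-0-203.dyn.example-isp.net.

        # WHOIS sometimes names the block outright ("DSL pool", "static"):
        whois 203.0.113.45

    It stays a guess either way; a common compromise is to ban dynamic-looking addresses for a limited time and static-looking ones permanently.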

    Read the article

  • Bash Parallelization of CPU-intensive processes

    - by ehsanul
    tee forwards its stdin to every file specified, while pee does the same for pipes. These programs send every line of their stdin to each and every file/pipe specified. However, I was looking for a way to "load balance" the stdin across different pipes, so one line is sent to the first pipe, another line to the second, etc. It would also be nice if the stdout of the pipes were collected into one stream as well. The use case is simple parallelization of CPU-intensive processes that work on a line-by-line basis. I was doing a sed on a 14 GB file, and it could have run much faster if I could use multiple sed processes. The command was like this:

    pv infile | sed 's/something//' > outfile

    To parallelize, the best would be if GNU parallel supported this functionality, like so (I made up the --demux-stdin option):

    pv infile | parallel -u -j4 --demux-stdin "sed 's/something//'" > outfile

    However, there's no option like this, and parallel always uses its stdin as arguments for the command it invokes, like xargs. So I tried this, but it's hopelessly slow, and it's clear why:

    pv infile | parallel -u -j4 "echo {} | sed 's/something//'" > outfile

    I just wanted to know if there's any other way to do this (short of coding it up myself). If there were a "load-balancing" tee (let's call it lee), I could do this:

    pv infile | lee >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile) >(sed 's/something//' >> outfile)

    Not pretty, so I'd definitely prefer something like the made-up parallel version, but this would work too.
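    Worth checking before coding anything: newer GNU parallel releases grew an option very close to the imagined --demux-stdin. A hedged sketch, assuming a version that has --pipe (the block size is a tunable guess):

        # Chop stdin into ~10 MB blocks on line boundaries, hand each block
        # to one of 4 sed workers, and collect their stdout in input order:
        pv infile | parallel --pipe --block 10M -j4 "sed 's/something//'" > outfile

    One caveat: sed scripts that carry state across lines (hold space, address ranges) will see only their own block, which is fine for a plain s/// like this one.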

    Read the article

  • Virtualizing OpenSolaris with physical disks

    - by Fionna Davids
    I currently have an OpenSolaris installation with a ~1 TB RAID-Z volume made up of 3 x 500 GB hard drives. This is on commodity hardware (an ASUS NVIDIA-based board with an Intel Core 2). I'm wondering whether anyone knows if XenServer or Oracle VM can be used to install 2009.06 and be given physical access to the three SATA drives, so that I can continue to use the zpool and be able to use the Xen bits for other areas. I'm thinking of installing the JeOS version of OpenSolaris, having it manage just my ZFS volume and some other stuff for work (4 GB), then running Windows (2 GB) and Linux (1 GB) VMs for testing things (there's 8 GB of RAM in that box). Currently I am using VirtualBox installed on OpenSolaris for the Windows and Linux testing, but I wondered if the above would be a better alternative. Essentially: 3 disks -> OpenSolaris guest VM, which imports the zpool and offers it to the other VMs via CIFS.

    Read the article

  • Can I run Excel 2010 on a server?

    - by Glen Little
    This question is not about a person using Excel on a computer that happens to have a Windows Server OS, and it is not about using any SharePoint Services features! The question is about automated processes that use code (Office Automation) to open Excel files, manipulate them, run calculations, read data, save copies of the file, and close the files... all in code. In previous versions of Excel, the licensing agreement prevented use on a public server, notes from Microsoft warned about the problems of trying to use Office Automation in a server environment, and we were warned that Excel was single-threaded and not designed for use on a server. Most of the articles about this were written before Office 2010. But now, Excel 2010 is designed to work on a High Performance Computing server using HPC Services for Excel. One HPC document mentions that "Windows HPC Server 2008 R2 includes a comprehensive pop-up manager that can handle occasional dialog boxes and pop-up messages". So my question is: is it now "safe" to run code that automates Excel 2010 on a "normal" server without using the HPC services? If not, can HPC Services for Excel work on a single server? I don't need the high-performance, distributed-computing aspect of HPC Services for Excel, just the ability to run Excel on a server. Can that now be done? Thanks, Glen

    Read the article
