Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.


  • Foreach loop with 2d array of objects

    - by Jacob Millward
    I'm using a 2D array of objects to store data about tiles, or "blocks", in my gameworld. I initialise the array, fill it with data and then attempt to invoke the draw method of each object:

        foreach (Block block in blockList)
        {
            block.Draw(spriteBatch);
        }

    I end up with an exception being thrown: "Object reference not set to an instance of an object". What have I done wrong? EDIT: This is the code used to define the array:

        Block[,] blockList;

    Then:

        blockList = new Block[screenRectangle.Width, screenRectangle.Height];

        // Fill with dummy data
        for (int x = 0; x <= screenRectangle.Width / texture.Width; x++)
        {
            for (int y = 0; y <= screenRectangle.Height / texture.Width; y++)
            {
                if (y >= screenRectangle.Height / (texture.Width * 2))
                {
                    blockList[x, y] = new Block(1, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
                else
                {
                    blockList[x, y] = new Block(0, new Rectangle(x * 16, y * 16, texture.Width, texture.Height), texture);
                }
            }
        }
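
    A likely reading of the error, for anyone hitting the same thing: the array is allocated with Width x Height elements, but the fill loops only run up to Width/texture.Width and Height/texture.Width, so most cells stay null, and foreach hands those nulls to Draw(). A minimal guard (a sketch against the code above, not a confirmed fix from the thread):

        // Skip cells the fill loops never reached; alternatively, size the
        // array as [Width / texture.Width + 1, Height / texture.Height + 1]
        // so every element gets assigned
        foreach (Block block in blockList)
        {
            if (block != null)
            {
                block.Draw(spriteBatch);
            }
        }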

    Read the article

  • nagios wrongly reports packet loss

    - by Alien Life Form
    Lately, my nagios 3.2.3 install (CentOS5, monitoring ~300 hosts, 1150 services) has started to occasionally report high packet loss on 50-60 hosts at a time. Problem is, it's bogus. Manual runs of ping (or its own check_ping binary) find no fault with any of the affected hosts. The only possible cures I have found so far are: run all the checks manually (they will succeed, but it may act up again on the next check), or acknowledge and wait for the problem to go away (may take several hours). I suspect (with no particular evidence other than single rescheduled checks succeeding) that the problem may lie with all the checks being mass-scheduled together - in which case introducing some jitter into the scheduling (how?) might help. Or it may be something completely different. Ideas, anyone?
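
    For anyone who wants to test the scheduling-jitter theory, these are the main nagios.cfg knobs that spread checks out. The directives are standard Nagios 3.x; the values below are illustrative guesses, not tuned recommendations:

        # nagios.cfg - spread and interleave service checks instead of
        # letting them fire in one burst (values are examples only)
        service_inter_check_delay_method=s
        max_service_check_spread=30
        service_interleave_factor=s
        max_concurrent_checks=50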

    Read the article

  • Mono is frequently used to say "Yes, .NET is cross-platform". How valid is that claim?

    - by Thorbjørn Ravn Andersen
    In "What would you choose for your project between .NET and Java at this point in time?" I say that I would consider "Will you always deploy to Windows?" the single most important decision to make up front in a new web project, and if the answer is "no", I would recommend Java instead of .NET. A very common counter-argument is "If we ever want to run on Linux/OS X/whatever, we'll just run Mono", which is a very compelling argument on the surface, but I don't agree, for several reasons. OpenJDK and all the vendor-supplied JVMs have passed the official Sun TCK, ensuring things work correctly. I am not aware of Mono passing a Microsoft TCK. Mono trails the .NET releases. What .NET level is currently fully supported? Do all GUI elements (WinForms?) work correctly in Mono? Businesses may not want to depend on open-source frameworks as the official plan B. I am aware that with the new governance of Java by Oracle, the future is uncertain, but e.g. IBM provides JDKs for many platforms, including Linux. They are just not open-sourced. So, under which circumstances is Mono a valid business strategy for .NET applications?

    Read the article

  • How do I force specific permissions for new files/folders on Linux file server?

    - by humble_coder
    I'm having an issue with my install of Ubuntu 9.10 (file server) and its samba permissions. Logging in and reading works fine. However, creation of new directories by users restricts access for other users. For instance, if Bob (a Windows user who maps the drive) creates a folder in the directory, Jane (a Mac user who simply smb-mounts it) can read from it, but can't write to it -- and vice versa. I then must chmod 777 the directory for everyone to be happy. I've tried editing the "create/directory mask" and "force" options in the smb.conf file, but this doesn't seem to help. I'm about to resort to crontabbing a recursive chmod routine, although I'm sure this isn't the right fix. How do I get all new items to always be 777? Does anyone have any suggestions to fix this ever-occurring situation? Best
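
    For reference, these are the smb.conf directives that normally control this (a sketch: the share name is a placeholder, and 0777 mirrors the poster's goal, though 0775 plus a common group is generally safer):

        [share]
            ; masks cap the permission bits a client may set;
            ; force modes OR bits back in on every create
            create mask = 0777
            force create mode = 0777
            directory mask = 0777
            force directory mode = 0777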

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multimaster LDAP environment on Ubuntu 11.04 - instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html and it talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/ and a slapcat -b cn=config drops out a load of config information. However, when I try to connect using the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
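
    A hedged pointer for anyone stuck at the same point: on Debian/Ubuntu builds of slapd, the cn=config database is typically readable only by the local root identity via SASL EXTERNAL over the ldapi socket, not by the directory's admin DN - which would explain the "No such object" result above.

        # Standard Debian/Ubuntu way to browse the dynamic config tree
        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config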

    Read the article

  • Solaris 11 Customer Maintenance Lifecycle

    - by user12244672
    Hi Folks, Welcome to my new blog, http://blogs.oracle.com/Solaris11Life, which is all about the Customer Maintenance Lifecycle for Image Packaging System (IPS) based Solaris releases, such as Solaris 11. It'll include policies, best practices, clarifications, and lots of other stuff which I hope you'll find useful as you get up to speed with Solaris 11 and IPS. Let's start with a version of my Solaris 11 Customer Maintenance Lifecycle presentation which I gave at this year's Oracle Open World and at the recent Deutsche Oracle Anwendergruppe (DOAG - German Oracle Users Group) conference in Nürnberg. Some of you may be familiar with my Patch Corner blog, http://blogs.oracle.com/patch, which fulfilled a similar purpose for System V [five] Release 4 (SVR4) based Solaris releases, such as Solaris 10 and below. Since maintaining a Solaris 11 system is quite different to maintaining a Solaris 10 system, I thought it prudent to start this second, parallel blog for Solaris 11. Actually, I have an ulterior motive for starting this separate blog. Since IPS is a single-tier packaging architecture, it doesn't have any patches, only package updates. I've therefore banned the word "patch" in Solaris 11 and introduced a swear box to which my colleagues must contribute a quarter [$0.25] every time they use the word "patch" in a public forum. From their Oracle Open World presentations, John Fowler owes 50 cents, Liane Preza owes $1.25, and Bart Smaalders owes 75 cents. Since I'm stinging my colleagues in what could be a lucrative enterprise, I couldn't very well discuss IPS best practices on a blog called "Patch Corner" with a URI of http://blogs.oracle.com/patch. I simply couldn't afford all those contributions to the "patch" swear box. :) Feel free to let me know what topics you'd like covered - just post a comment in the comment box on the blog. Best Wishes, Gerry.

    Read the article

  • How/where to find all Update (msu) packages for Windows 7 (Enterprise)

    - by Rudi
    Hi, I've been using Wim2Vhd to create native-boot VHD files. (I'm using this to keep several development environments ready. I'm a developer, I know -- I need help ;-) Now the first boot always spends several minutes installing all of the Windows updates... I know I could avoid this if I had a massive list of QFE (.msu) packages that I could supply to the command line of Wim2Vhd. Where can I find all of the updates so I can download them? I have found this: http://www.megaleecher.net/Downloading_Microsoft_Windows_7_Offline_Updates but I'd like a more official source, and an easier way to download them all. I'm a developer, so if I find the information (web service?) to get all the current packages, I'll build a tool to download them to a single location and generate a list of them for use with Wim2Vhd.

    Read the article

  • svchost.exe @ 100% disk utilization vs. Outlook.ost

    - by Aszurom
    Vista x32 box with Outlook 2007. Outlook is not running and hasn't been fired up for several reboots. I stopped the WMI service and the Windows Search service. The machine is mostly quiet, and then svchost.exe launches an instance and starts banging away at the Outlook.ost file. I can't determine what is causing it. I'm watching it in Process Monitor and trying to investigate it with Process Explorer, but I'm not having much luck at figuring out why the machine is so interested in that file. NOTHING is running that should be touching it.
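
    One generic triage step that fits here (standard Windows tooling, not from the original thread): map each svchost.exe instance to the services it hosts, match against the PID that Process Monitor shows touching the .ost, then stop candidates one at a time.

        :: List the services inside every svchost.exe instance
        tasklist /svc /fi "imagename eq svchost.exe"

        :: Example: stop Windows Search (service name WSearch) and watch
        :: whether the .ost activity stops with it
        net stop WSearch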

    Read the article

  • switch's mgmt-ip is not remotely reachable.

    - by RainDoctor
    Switch model: Netgear FSM7352PS
    mgmt-ip: 192.168.1.100/24
    VLAN id: 1 (default)

    There are a couple of hosts in this VLAN: 192.168.1.2 (ESXi console), for instance. 192.168.1.1 is the firewall/router interface. I can ping 192.168.1.1 and 192.168.1.2 from other VLANs, say, 172.31.0.0/24. I can ssh to 192.168.1.2 from 172.31.0.0/24. I can't ping 192.168.1.100 from 172.31.0.0/24. However, I can ping 192.168.1.100 from 192.168.1.2 or from my laptop connected to that VLAN (192.168.1.11), and I can connect to the web GUI from my laptop when I am in that VLAN. Can anyone shed some light on why I am not able to connect from other VLANs?

    Read the article

  • How to backup a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. Initial data is likely to be about 80Mb, and I expect it to grow at approximately 5Mb to 10Mb a day. Can anyone recommend: a backup/restore policy (best practices for a small startup), and which tools to use for backup? Another thing that is not clear to me is where the files are normally backed up to (in the case of remote servers). If the files are backed up to the same machine (or even to another machine but with the same host), there is, potentially, a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown or the host company's server farm "catching fire" so remote as not to be worth worrying about - especially for a small (read: one-man) startup like me?
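
    One common minimal pattern (a sketch only: host names, paths and times are placeholders, and an ssh key for the backup user is assumed): dump the database locally, then push nightly copies to a machine that does not share the VPS's fate.

        # /etc/cron.d/backup - dump at 02:30, sync offsite at 03:00
        # (cron requires % to be escaped as \%)
        30 2 * * * root mysqldump --all-databases | gzip > /srv/data/sql/mysql-$(date +\%F).sql.gz
        0 3 * * * root rsync -az --delete /srv/data/ backup@offsite.example.com:/backups/vps/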

    Read the article

  • Sound entirely stopped working on Windows 8 on a Macbook Pro

    - by Kelvin Bongers
    I am currently running Windows 8 (downloaded from DreamSpark) on a Macbook Pro. This worked fine for a while but suddenly all audio stopped working. When I go to "Playback devices" and hit "Test" on the speakers I get treated with the following message: This also shows up right after I try restarting. I tried disallowing exclusive usage of the devices but it makes no difference. Edit: After some looking around I tried changing the sample rate and bit depth so I would get a dialog screen to force Windows to go around the program that's using it. I did get the dialog but then instead of changing it I got the following error: Edit 2: I narrowed it down to a single service failing to start, the Multimedia Class Scheduler service fails to start with the following error:
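
    The error screenshots didn't survive in this excerpt, but given that the trail ends at the Multimedia Class Scheduler service, a generic next step (hedged: standard Windows service tooling; the short service name is assumed to be MMCSS, on which the Windows Audio service depends on many Windows versions):

        :: Inspect the service's configuration and current state
        sc qc MMCSS
        sc query MMCSS

        :: Try starting it directly and note the exact error code
        net start MMCSS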

    Read the article

  • 12.04, nvidia-settings makes one of my dual monitors grey and useless, disables network

    - by Kerrick
    I'm running Ubuntu 12.04 64-bit, Precise Pangolin, with a PNY GTS 250 1GB video card and a monitor plugged into each of the DVI ports. I'm using the proprietary drivers (post-release updates). If I set anything to do with Separate X Screens up in nvidia-settings (and write it to xorg.conf and reboot), my second monitor has a grey background, no menu bar, no ability to have a window on it; the second monitor doesn't get picked up in a screenshot, and if I move my mouse cursor to it, it's an ugly black X. Plus, my network is unable to connect to anything. If I subsequently delete /etc/X11/xorg.conf and reboot, everything goes back to working, albeit with a single monitor activated. If I set anything to do with TwinView up in nvidia-settings, my second monitor starts working, but it isn't seen as a second monitor by Ubuntu, so I can't apply color calibration to it separately. Plus, my mouse gets "caught" between the monitors every time I try to move my cursor between the two. What gives? If it helps, this is the xorg.conf that nvidia-settings generates for Separate X Screens.

    Read the article

  • Using excel, how can I count the number of cells in a column containing the text "true" or "false"?

    - by Jay Elston
    I have a spreadsheet that has a column of cells where each cell contains a single word. I would like to count the occurrences of some words. I can use the COUNTIF function for most words, but if the word is "true" or "false", I get 0.

             A        B
        1  apples     2
        2  true       0
        3  false      0
        4  oranges    1
        5  apples

    In the above spreadsheet table, I have these formulas in cells B1, B2, B3 and B4:

        =COUNTIF(A1:A5,"apples")
        =COUNTIF(A1:A5,"true")
        =COUNTIF(A1:A5,"false")
        =COUNTIF(A1:A5,"oranges")

    As you can see, I can count apples, but not true or false. I have also tried =COUNTIF(A1:A5,TRUE), but that does not work either. Note -- I am using Excel 2007.
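
    A workaround commonly suggested for this COUNTIF quirk (a sketch, assuming the cells hold the logical values TRUE/FALSE rather than text strings): compare the range directly and let SUMPRODUCT do the counting.

        =SUMPRODUCT(--(A1:A5=TRUE))
        =SUMPRODUCT(--(A1:A5=FALSE))

    The comparison yields an array of TRUE/FALSE results, and the double negation (--) coerces it to 1s and 0s that SUMPRODUCT totals up.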

    Read the article

  • How do you enable multi-core virtualization in Windows 8 Pro?

    - by Greg B
    I've just got a new Dell Vostro 470 with a quad-core (8 threads) i7 3770 and I'm trying to run virtual machines on it, which works fine, except that I can't assign multiple cores to a VM. I've checked the BIOS, which states Intel Virtualization Technology [Enabled], but both Hyper-V and VirtualBox will only allow me to assign a single core. If I run the Intel Processor Identification Utility on the host OS, it tells me that Intel Virtualization Technology isn't supported by the processor, but according to the Intel website, it is. So what's going on? Have Dell clipped the i7's wings? Is there some config in Windows I need to change?
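
    Two generic checks that apply to this symptom (hedged: standard Windows commands, not specific to the Vostro). When the Hyper-V role is installed, its hypervisor claims VT-x at boot, which can make other tools - including Intel's utility and VirtualBox's multi-core support - see a CPU without virtualization extensions.

        :: Check the "Hyper-V Requirements" section at the end of the output
        systeminfo

        :: If Hyper-V is installed but VirtualBox is the goal: stop the
        :: hypervisor from launching at boot, then reboot
        :: (undo later with: bcdedit /set hypervisorlaunchtype auto)
        bcdedit /set hypervisorlaunchtype off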

    Read the article

  • Virtual firewall to protect hypervisor

    - by manutenfruits
    I am running Ubuntu Server 12.10 as a single host, connected to a NATed router that connects via PPPoE to an optical fiber modem. This server is meant to be accessed from the Internet, but also to be used from the LAN as an SVN server, MySQL server and whatnot... The issue is that the router is not customizable enough, so I was thinking about creating a virtual pfSense firewall using KVM inside the server itself, removing the need for the router. Is this possible? Can the host ignore and block all traffic coming to itself, but not traffic for the firewall? I am aware this is not the most desirable environment; I accept suggestions based on budget!
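
    Conceptually, yes - the usual trick is bridging. A minimal sketch for /etc/network/interfaces (assumptions: two NICs named eth0/eth1, bridge-utils installed, addresses invented): the host holds no IP on the WAN bridge, so only the pfSense guest faces the modem, and the host sits behind the firewall on the LAN bridge like any other client.

        # WAN bridge: physical NIC toward the modem; no host address here
        auto br-wan
        iface br-wan inet manual
            bridge_ports eth0

        # LAN bridge: the host's own address lives behind pfSense
        auto br-lan
        iface br-lan inet static
            address 192.168.10.2
            netmask 255.255.255.0
            gateway 192.168.10.1
            bridge_ports eth1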

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then thought: why don't they know this? Such as this article here - in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple - just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement to the out-of-the-box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up file integration handling. In the past I had seen some consultants write Perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe - it uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop - the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3Gb with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box, from the design to the execution, without any funky configuration. Plus - and it's a big plus - it was much faster than before. So if you are doing any file-to-file transformations, check it out!

    Read the article

  • What is recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for facebook? (see below) Google recommends gzipping more aggressively. And that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150-byte limit - just to save on bandwidth costs, or is there a performance gain in doing so?
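
    For anyone tuning their own origin rather than a CDN, the threshold is a single directive in most servers. An nginx sketch (the directives are real; the value just mirrors the Akamai number discussed above):

        # inside an http or server block
        gzip            on;
        gzip_min_length 860;
        gzip_types      text/plain text/css application/json application/javascript;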

    Read the article

  • How Can I assign an IP address to my virtual Windows Server, so that I can start using it almost as a VPS?

    - by Nelson Symonds
    We are a small office with two PCs, one of which runs 24 hours a day - it's almost a small server already. Right now we need an actual server, which is why I am planning to run both my own machine and a server on a single PC. I've used VMware Workstation to create a powerful Windows Server 2008 VM within my PC, and I want to attach it to my network switch through the same PC that hosts it. I want to use it almost like a physical server, with an IP address and everything, so that I can connect from one PC to the server directly, or my applications can connect to the server straight with the IP address. How should I do this? Step-by-step instructions would be appreciated. Thanks in advance, Best regards, Nelson
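
    The usual outline (hedged: the adapter name and all addresses below are examples only): set the VM's network adapter to Bridged in VMware Workstation (VM > Settings > Network Adapter), so the guest appears on the switch as its own machine with its own MAC address, then give the guest a fixed address inside Windows Server 2008:

        :: Run inside the guest; substitute the real adapter name and the
        :: office's actual subnet, gateway and DNS
        netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1
        netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.1.1 primary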

    Read the article

  • Book: DevOps for Developers

    - by Tori Wieldt
    We all know development and operations often act like silos, with "Just throw it over the wall!" being the battle cry. Many organizations unwittingly contribute to gaps between teams, with management by (competing) objectives; a clash of Agile practices vs. more conservative approaches; and teams using different sets of tools, such as Nginx, OpenEJB, and Windows on developers' machines and Apache, GlassFish, and Linux on production machines. At best, you've got sub-optimal collaboration; at worst, you've got the Hatfields and the McCoys. The book DevOps for Developers helps bridge the gap between development and operations by aligning incentives and sharing approaches for processes and tools. It introduces DevOps as a modern way of bringing development and operations together. It also aims to broaden the use of Agile practices in operations, to foster collaboration and streamline the entire software delivery process in a holistic way. Some individual aspects of DevOps may not be new - for example, you may have used the tool Puppet for years already - but with a new mindset ("my job is not just to code, it's to serve the customer in the best way possible") and a complete set of recipes, you'll be well on your way to success. DevOps for Developers also provides real-world use cases (e.g., how to use Kanban or how to release software). It provides a way to be successful in the real development/operations world. DevOps for Developers is written by Michael Hutterman, Java Champion and founder of the Cologne Java User Group. "With DevOps for Developers, developers can learn to apply patterns to improve collaboration between development and operations as well as recipes for processes and tools to streamline the delivery process," Hutterman explains.

    Read the article

  • Encoding video stream for playback on a vanilla Windows XP with mencoder

    - by Tamás
    I have a bunch of PNG files, generated from a script. They represent consecutive frames of a video sequence and I'd like to encode them into a single AVI file (or some other video format) using mencoder. What parameters should I use to ensure that the video can be viewed on a vanilla Windows XP using Windows Media Player with no extra codecs installed apart from the default ones? So far I've tried -ovc lavc -lavcopts vcodec=wmv2 and -ovc lavc -lavcopts vcodec=msmpeg4 with no success. (Background story: some of the people I'm collaborating with on a scientific project cannot install any codecs on their university computers without the help of the local sysadmins, who are of course not very willing to install anything. I'd like to ensure that they can also view the video files I am creating).
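
    A full command line worth trying (hedged: the frame rate, paths and the ASF-wrapping idea are educated guesses to experiment with, not a verified recipe - Windows Media Player is reportedly happier with WMV2 inside an ASF container than inside AVI):

        mencoder "mf://frames/*.png" -mf fps=25:type=png \
            -ovc lavc -lavcopts vcodec=wmv2 \
            -of lavf -lavfopts format=asf \
            -o output.wmv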

    Read the article

  • How do I remove a LOT of indexed pages from Google?

    - by Thierry
    A few weeks ago we discovered that Google had indexed some information we would rather have kept confidential, in the form of individual PDF files. Our assumption was that this was a problem with our robots.txt that we had overlooked. Even though we are not sure whether or not this is the case, we are certain that the robots.txt file is in a valid format and is, according to Google's Webmaster Tools, blocking the files. However, even weeks after this adjustment was made, Google still has the PDF files indexed, but tells us further information cannot be provided due to the robots.txt file being present. As you can hopefully understand, this is unwanted behaviour due to the nature of the documents. I am aware that Google provides a request page for this purpose, but there are a lot of files. Is there an easier way to get Google to remove all of the files from its search engine? If not, is there anything else you could advise us to do besides manually requesting Google to remove every single page? Thanks in advance.
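
    The standard resolution for this situation (a hedged sketch: mod_headers must be enabled, and - counterintuitively - the robots.txt block has to be lifted so Googlebot can recrawl the PDFs and see the header): serve every PDF with a noindex X-Robots-Tag, then let Google drop them on its next crawl.

        # Apache config or .htaccess: mark all PDFs as not-for-indexing
        <FilesMatch "\.pdf$">
            Header set X-Robots-Tag "noindex, nofollow"
        </FilesMatch>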

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can't quite put your finger on it, but something isn't quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more "evidence". You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse, then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements. In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop "mental" baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases "feels funny". Equally, they know when an "issue" is just a passing tremor. They see their SQL Server with all four of its CPU cores running close to 100% and don't panic anymore. Why? It's 5PM, and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day. Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn't as hard as you think to get started. You've probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows:

        INSERT INTO BaseLine.SomeSystemTable (value, captureTime)
        SELECT Value, SYSDATETIME()
        FROM SomeSystemTableOrView;

    Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses! Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
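
    As a concrete starting point, here is how that pattern might look for wait statistics (a sketch: the BaseLine schema and table are invented names echoing the article's example and must exist first; sys.dm_os_wait_stats is the real DMV):

        -- One-time setup: somewhere to keep the hourly snapshots
        CREATE TABLE BaseLine.WaitStats
        (
            wait_type     nvarchar(60) NOT NULL,
            wait_time_ms  bigint       NOT NULL,
            waiting_tasks bigint       NOT NULL,
            captureTime   datetime2    NOT NULL
        );

        -- The agent-job body: snapshot cumulative waits with a timestamp
        INSERT INTO BaseLine.WaitStats (wait_type, wait_time_ms, waiting_tasks, captureTime)
        SELECT wait_type, wait_time_ms, waiting_tasks_count, SYSDATETIME()
        FROM sys.dm_os_wait_stats;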

    Read the article

  • Google Talk and Video outside of GMail

    - by mankoff
    I'd like to use Google Talk/Video without having the full GMail or iGoogle interface displayed. The ideal setup would be the lightweight popout interface (link below) in a small Fluid.app single-instance browser, as a stand-alone desktop app. If I log into GMail, the chat sidebar has a phone icon, so I can use Google Voice, and a camera icon next to me and some of my contacts. If I log into iGoogle, the chat sidebar has a camera next to me and some contacts, but no phone. I would like to have video chat (and perhaps the phone option) elsewhere. Google provides a chat talkgadget popout URL: http://talkgadget.google.com/talkgadget/popout but there is no phone or camera icon accessible.

    Read the article

  • faster ( squid + apache httpd + apache tomcat )

    - by letronje
    We have a production setup with Squid in the front (caching images, js, css, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php for a few PHP pages), and Apache Tomcat 5.5 at the end, serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of rewriting URLs (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be the ideal replacement for httpd, given that we need these features? Is there a way to replace (squid + apache) with a single entity that does caching well (like squid) for static stuff, rewrites URLs, compresses responses and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and wonder if it can help.
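
    As one data point for the single-tier idea, here is roughly what those four jobs look like collapsed into an nginx server block (a hedged sketch: paths, the rewrite pattern and the upstream port are placeholders, and the few mod_php pages would additionally need php-fpm behind fastcgi_pass):

        server {
            listen 80;

            # mod_deflate's old job
            gzip on;
            gzip_types text/css application/javascript;

            # squid's old job: serve static assets from disk with long expiry
            location ~* \.(png|jpg|css|js)$ {
                root    /var/www/static;
                expires 7d;
            }

            # mod_rewrite's old job (pattern is an example)
            rewrite ^/articles/([0-9]+)$ /article.jsp?id=$1 last;

            # mod_jk's old job, but over Tomcat's plain HTTP connector
            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
            }
        }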

    Read the article
