Search Results

Search found 1631 results on 66 pages for 'alan smith'.

Page 19/66 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Bad idea to keep htop running?

    - by Michael T. Smith
    I'm now monitoring multiple servers (3), and in the coming weeks that will increase (towards 5 or 6). I've been keeping three terminal windows open running htop via SSH, and I'm now wondering whether there are any downsides to having a connection constantly open to production servers?

    Read the article

  • Secure openVPN using IPTABLES

    - by bob franklin smith harriet
    Hey, I set up an OpenVPN server and it works OK. The next step is to secure it. I opted to use iptables to only allow certain connections through, but so far it is not working. I want to enable access to the network behind my OpenVPN server and allow other services (web access); when iptables is disabled or set to allow all, this works fine, but with my rules below it does not. Also note that I have already configured OpenVPN itself to do what I want and it works fine; it only fails when iptables is started. Any help telling me why this isn't working will be appreciated.

    These are the lines that I added in accordance with OpenVPN's recommendations. Unfortunately, testing these commands shows that they are required; they seem incredibly insecure though. Any way to get around using them?

        # Allow TUN interface connections to OpenVPN server
        -A INPUT -i tun+ -j ACCEPT
        # Allow TUN interface connections to be forwarded through other interfaces
        -A FORWARD -i tun+ -j ACCEPT
        # Allow TAP interface connections to OpenVPN server
        -A INPUT -i tap+ -j ACCEPT
        # Allow TAP interface connections to be forwarded through other interfaces
        -A FORWARD -i tap+ -j ACCEPT

    These are the new chains and commands I added to restrict access as much as possible. Unfortunately, with these enabled, all that happens is that the OpenVPN connection establishes fine and then there is no access to the rest of the network behind the OpenVPN server. Note that I am configuring the main iptables file, and since I am paranoid, all ports and IP addresses are altered; the -N lines etc. appear before this, so ignore that they don't appear here. I added some explanations of what I 'intended' these rules to do, so you don't waste time figuring out where I went wrong:

        # accepts the VPN over port 1192
        -A INPUT -p udp -m udp --dport 1192 -j ACCEPT
        -A INPUT -j INPUT-FIREWALL
        -A OUTPUT -j ACCEPT
        # packets that are to be forwarded from the 10.10.1.0 network (all OpenVPN clients) to the internal network (192.168.5.0) jump to the [sic] FOWARD-FIREWALL chain
        -A FORWARD -s 10.10.1.0/24 -d 192.168.5.0/24 -j FOWARD-FIREWALL
        # same as above, except for a different internal network
        -A FORWARD -s 10.10.1.0/24 -d 10.100.5.0/24 -j FOWARD-FIREWALL
        # reject any not from either of those two ranges
        -A FORWARD -j REJECT
        -A INPUT-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT-FIREWALL -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT-FIREWALL -j REJECT
        -A FOWARD-FIREWALL -m state --state RELATED,ESTABLISHED -j ACCEPT
        # 80, 443 and 53 are accepted
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 80 -j ACCEPT
        -A FOWARD-FIREWALL -m tcp -p tcp --dport 443 -j ACCEPT
        # 192.168.5.150 = OpenVPN server
        -A FOWARD-FIREWALL -m tcp -p tcp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -m udp -p udp -d 192.168.5.150 --dport 53 -j ACCEPT
        -A FOWARD-FIREWALL -j REJECT
        COMMIT

    Now I wait :D
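    A hedged sketch of one way to avoid the blanket tun+/tap+ ACCEPT rules (not from the original post; it assumes the same 10.10.1.0/24 client range and internal networks mentioned above): scope the forwarding rules to the tunnel interface plus the intended destination networks, and let connection tracking handle the replies.

        # accept only tunnel traffic headed for the intended internal networks
        -A FORWARD -i tun+ -s 10.10.1.0/24 -d 192.168.5.0/24 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -i tun+ -s 10.10.1.0/24 -d 10.100.5.0/24 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        # allow established replies back out to the VPN clients
        -A FORWARD -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
        # reject anything else crossing the tunnel
        -A FORWARD -i tun+ -j REJECT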

    Read the article

  • Are Plesk server backups useful?

    - by Michael T. Smith
    I'm working for a startup now, and I'm the programmer. Because of our small team size, I'm also handling server management for now (until we get a dedicated server administrator). I've never used Plesk before, and the server we're using (a Media Temple Dedicated Virtual server) had it installed when I got here. One of my first jobs was to set up backups: Plesk was already running its nightly server-wide backups. I created a small script to dump the web app, its DBs and any assets, tar them, store them, and then copy them to another small server we have (to back up the backups). But we're constantly running into hard drive space issues because of the Plesk backups, and I'm wondering: are they useful? If I have the web app and all of its assets, I could easily enough get another server up and running. Do we need to keep running Plesk's backups? Thoughts?
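    For reference, a minimal sketch of the kind of dump-and-copy script described above (the database name, credentials, paths and backup host are hypothetical placeholders, not the poster's actual setup):

        #!/bin/bash
        # dump the app's database, bundle it with the web root, and copy it off-box
        STAMP=$(date +%Y%m%d)
        mysqldump -u backup_user -p'secret' app_db > /tmp/app_db-$STAMP.sql
        tar czf /backups/app-$STAMP.tar.gz /var/www/vhosts/example.com /tmp/app_db-$STAMP.sql
        scp /backups/app-$STAMP.tar.gz backup@backuphost.example.com:/backups/
        rm /tmp/app_db-$STAMP.sql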

    Read the article

  • Issues configuring Exchange 2010 as well as SSL problems.

    - by Eric Smith
    Possibly-Relevant Background Info: I've recently moved up from icky shared hosting to a glorious, Remote Desktop-administrated VPS server running Windows Server 2008 R2. Even though I'm only 21 now and a computer science major, I've tried to play with every Windows Server release since '03, just to learn new things. What usually happens is that inevitably I'll do something wrong and pretty much ruin the install. You're dealing with an amateur here :) Through the past few months of working with my new server, I've mastered DNS, IIS, got Team Foundation Server running (yay!), and can install all of the other basics like SQL Server and Active Directory.

    The Problem: These last few weeks I've been trying to install Exchange Server 2010 (SP1). To make a long story short, it took me several attempts, and I even had to get my server wiped just so I could start fresh, since Exchange decided uninstalling properly was for sissies (cost me $20, bah). Today, at long last, I got Exchange mostly working. There were two main problems left, however, that left me unsatisfied:

    1. Exchange installed itself and all of its child sites into Default Web Site. I wanted to access Exchange via mail.domain.com, but instead everything was configured to domain.com. My limited server admin knowledge was not enough to configure IIS or Exchange to move itself over to the website I had set up for it, appropriately titled 'mail.domain.com', which I had bound to a dedicated IP address (I was told this was necessary, but he may have been wrong).

    2. I have two SSL certificates: one for my main domain and one for my mail subdomain. For whatever reason, I had issues getting Exchange to use my mail certificate, even though I had assigned the proper roles in the MMC. I did, at one point, get it to work (or mostly work, anyway; frankly, my memory of today is clouded by intense frustration). Additionally, I was confused about which type of SSL certificate I should be using for Exchange. My SSL provider, GoDaddy, allows me to request a new certificate whenever, so I can use either the certificate request provided by IIS or the more complicated and specific request you can create with Exchange. Which type should I be using, the IIS or Exchange certificate? If I must use the Exchange certificate, will that 1) cause issues when I bind that certificate to my mail.domain.com subdomain, or 2) is that an unnecessary step?

    The SSL Certificate Strikes Back: When I thought I had the proper SSL certificate assigned, for those brief, sweet moments, Google Chrome reported the correct mail.domain.com certificate when browsing https://mail.domain.com. However, Outlook 2010 threw up an error when trying to configure my email account, claiming that the certificate didn't match the domain "mail.domain.com". Is this an issue that will be resolved by problem #2, or is it a separate one entirely?

    Apologies for the massive wall of text, but I wanted to provide as much info as I possibly could. Exchange is the last thing I'd like installed on my server, and naturally it's turning out to be the hardest. Thanks for any info at all. Even a point in a vague direction would be a huge help at this point. Thanks! -Eric

    P.S.: The reason I keep ruining my install is that when I attempt to uninstall Exchange, something invariably goes wrong. The last time, the uninstaller complained that there was still a mailbox active and it couldn't proceed until I deleted it. ... The only mailbox left was the Administrator account, the built-in one I couldn't delete. So I attempted to manually uninstall it following several guides online, only to end up unable to launch the installer at all and having to get my system wiped AGAIN, for the second time today ($40 down the drain, bah!). I do not understand at all why "uninstall" can't just mean "hey, you, delete everything and go away". There's not even a force uninstall option, only a "recover system" option that just fails to fix anything and makes it so I can't even use the GUI uninstaller. </rant>

    Read the article

  • Sendmail - preventing aliased users from receiving multiple copies of the same email

    - by MikeQ
    Is there any way to prevent a user from receiving multiple copies of the same email if it is sent to both an alias for the user and the user directly? For example, suppose bob.smith is included in the alias list for developers (@company.com). If I send the email to both the user and an alias for the user: To: [email protected], [email protected] ... is there any way to prevent user Bob from receiving the same email twice? EDIT: I've observed that if Bob is a member of two different alias groups, and I send an email just to those two groups (not to the user directly), sendmail correctly expands the groups and removes the duplicate. The behavior I want to fix occurs when you send directly to the user AND to a group they belong to.

    Read the article

  • Nagios3 check_httpname gives 503 response; from command line I get a 200 response

    - by Michael T. Smith
    We're using Nagios to monitor our site (and a bunch of other stuff). For some odd reason, when I test out the command

        /usr/lib/nagios/plugins/check_http -H 'domainname.com'

    the response that comes back is HTTP/1.1 200 OK, but when I set up the service to do it:

        # Check that domain is running
        define service {
            hostgroup_name          hostgroup
            service_description     host site
            check_command           check_httpname!domainname.com
            use                     generic-service
            notification_interval   1       ; set > 0 if you want to be renotified
        }

    the response that comes back is HTTP/1.1 503 Service Unavailable. Does anyone know why this would be happening?
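    One hedged explanation of this kind of mismatch (the command invocation and IP below are hypothetical, not taken from the post): if check_httpname expands to a check_http call that only passes the host's address with -I, the request carries no matching Host: header and can land on the server's default vhost, which may answer 503, while adding -H sends the virtual-host name and checks the intended site.

        # no Host header: may hit the default vhost and return 503
        /usr/lib/nagios/plugins/check_http -I 192.0.2.10
        # explicit Host header: checks the named vhost
        /usr/lib/nagios/plugins/check_http -I 192.0.2.10 -H domainname.com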

    Read the article

  • Xml Serialization and the [Obsolete] Attribute

    - by PSteele
    I learned something new today: starting with .NET 3.5, the XmlSerializer no longer serializes properties that are marked with the Obsolete attribute. I can’t say that I really agree with this. Marking something Obsolete is supposed to be something for a developer to deal with in source code. Once an object is serialized to XML, it becomes data. I think using the Obsolete attribute both as a compiler flag and as a way of controlling XML serialization is a bad idea. In this post, I’ll show you how I ran into this and how I got around it.

    The Setup

    Let’s start with some make-believe code to demonstrate the issue. We have a simple data class for storing some information. We use XML serialization to read and write the data:

        public class MyData
        {
            public int Age { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public List<String> Hobbies { get; set; }

            public MyData()
            {
                this.Hobbies = new List<string>();
            }
        }

    Now a few simple lines of code to serialize it to XML:

        static void Main(string[] args)
        {
            var data = new MyData
            {
                FirstName = "Zachary",
                LastName = "Smith",
                Age = 50,
                Hobbies = {"Mischief", "Sabotage"},
            };

            var serializer = new XmlSerializer(typeof (MyData));
            serializer.Serialize(Console.Out, data);
            Console.ReadKey();
        }

    And this is what we see on the console:

        <?xml version="1.0" encoding="IBM437"?>
        <MyData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>

    The Change

    So we decided to track the hobbies as a list of strings. As always, things change and we have more information we need to store per-hobby. We create a custom “Hobby” object, add a List<Hobby> to our MyData class and we obsolete the old “Hobbies” list to let developers know they shouldn’t use it going forward:

        public class Hobby
        {
            public string Name { get; set; }
            public int Frequency { get; set; }
            public int TimesCaught { get; set; }

            public override string ToString()
            {
                return this.Name;
            }
        }

        public class MyData
        {
            public int Age { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }

            [Obsolete("Use HobbyData collection instead.")]
            public List<String> Hobbies { get; set; }

            public List<Hobby> HobbyData { get; set; }

            public MyData()
            {
                this.Hobbies = new List<string>();
                this.HobbyData = new List<Hobby>();
            }
        }

    Here’s the kicker: this serialization is done in another application. The consumers of the XML will be older clients (clients that expect only a “Hobbies” collection) as well as newer clients (that support the new “HobbyData” collection). This really shouldn’t be a problem, since the Obsolete attribute is metadata for .NET compilers. Unfortunately, the XmlSerializer also looks at the compiler attribute to determine what items to serialize/deserialize.

    Here’s an example of our problem:

        static void Main(string[] args)
        {
            var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
        <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>";

            var serializer = new XmlSerializer(typeof(MyData));
            var stream = new StringReader(xml);
            var data = (MyData) serializer.Deserialize(stream);

            if (data.Hobbies.Count != 2)
            {
                throw new ApplicationException("Hobbies did not deserialize properly");
            }
        }

    If you run the code above, you’ll hit the exception. Even though the XML contains a “<Hobbies>” node, the Obsolete attribute prevents the node from being processed. This will break old clients that use the new library but don’t yet access the HobbyData collection.

    The Fix

    This fix (in this case) isn’t too painful. The XmlSerializer exposes events for times when it runs into items (elements, attributes, nodes, etc.) it doesn’t know what to do with. We can hook into those events and check whether we’re getting something that we want to support (like our “Hobbies” node). Here’s a way to read in the old XML data with full support of the new data structure (and keeping the Hobbies collection marked as obsolete):

        static void Main(string[] args)
        {
            var xml = @"<?xml version=""1.0"" encoding=""IBM437""?>
        <MyData xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
          <Age>50</Age>
          <FirstName>Zachary</FirstName>
          <LastName>Smith</LastName>
          <Hobbies>
            <string>Mischief</string>
            <string>Sabotage</string>
          </Hobbies>
        </MyData>";

            var serializer = new XmlSerializer(typeof(MyData));
            serializer.UnknownElement += serializer_UnknownElement;
            var stream = new StringReader(xml);
            var data = (MyData)serializer.Deserialize(stream);

            if (data.Hobbies.Count != 2)
            {
                throw new ApplicationException("Hobbies did not deserialize properly");
            }
        }

        static void serializer_UnknownElement(object sender, XmlElementEventArgs e)
        {
            if (e.Element.Name != "Hobbies")
            {
                return;
            }

            var target = (MyData) e.ObjectBeingDeserialized;
            foreach (XmlElement hobby in e.Element.ChildNodes)
            {
                target.Hobbies.Add(hobby.InnerText);
                target.HobbyData.Add(new Hobby { Name = hobby.InnerText });
            }
        }

    As you can see, we hook into the “UnknownElement” event. Once we determine it’s our “Hobbies” node, we deserialize it ourselves, as well as populating the new HobbyData collection. In this case, we have a fairly simple solution to a small change in XML layout. If you make more extensive changes, it would probably be easier to do some custom serialization to support older data. A sample project with all of this code is available from my repository on bitbucket.

    Technorati Tags: XmlSerializer,Obsolete,.NET

    Read the article

  • Group policy applied to AD OU attributes

    - by Eric Smith
    I'm not well-versed in AD, so I would like to resolve a question I have with regard to AD information. I understand that it is possible to apply group policy to OUs, thereby restricting access. What I'd like to know is: is it possible to do the same with OU attributes? Some context would help. There's a requirement to store address information in AD (IMO, a natural fit), but for various reasons, although obviously things like name should be globally accessible, access restrictions are desired on the address. In this case, is it possible to apply security to the address portion of the OU attributes, or does each address have to be broken into a separate OU (a solution that feels smelly, given that an address doesn't have identity)?

    Read the article

  • Create Virtual Image of Laptop before Formatting

    - by Simon Mark Smith
    I have a 3-year-old laptop running Windows XP that I used for business. Although I have not used the laptop in over a year, I now want to re-commission it with Windows 7 and a fresh install. Before I do the fresh install, I want to create a virtual image of the laptop that I can keep and potentially run on my desktop machine, should I ever need to access any of the old files/projects that it currently contains. I know that most people will say just copy the files over to your desktop, but my concern is the configuration of the laptop. I used to use it for development, and it has older versions of Visual Studio, SQL Server, ActiveX controls, etc., than I currently use, so I really want to preserve the environment, not just the files. So really I am asking: what is the best tool-set/method to achieve this? I understand there are free VM tools available, but I have never done this before and would appreciate any help.
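    As a rough pointer on tooling (not part of the original question, and only a sketch): one commonly used route for this is a physical-to-virtual conversion tool such as Sysinternals Disk2vhd, which images the running Windows install into a VHD file that desktop virtualization software can then boot. Run on the laptop itself it looks roughly like the line below, where the output path is a placeholder.

        disk2vhd * D:\laptop-image.vhd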

    Read the article

  • How do you add more space to a Fedora (LVM) partition?

    - by Trevor Boyd Smith
    In a nutshell, I have a VM that ran out of space. I increased the size of the VM's hard drive to be 4 times bigger, but the OS partition is still only using 1x the space. I need to change the LVM partition to take up the extra space, but I don't know how to extend the LVM partition. (NOTE: To make the screenshots given below I had to boot from a live CD for gnome-partition-manager (aka GParted). Very unfortunately, GParted is only able to "detect LVM" and can't do any LVM operations.) Here is what GParted shows. Please notice that the "resize" option is not available. The problem: I can't find good directions on how to grow the LVM partition via GUI or command line! How do you grow an LVM partition that was created by the default Fedora install? If you are giving command-line directions, please explain what each command does.
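    As a rough command-line sketch (assuming the PV lives on /dev/sda2 and the root LV uses Fedora's default VolGroup00/LogVol00 names; confirm the real names with pvs, vgs and lvs before running anything):

        # 1. grow the partition that holds the PV (fdisk/parted), then reboot or re-read the partition table
        # 2. tell LVM the physical volume is now bigger
        pvresize /dev/sda2
        # 3. grow the logical volume into the newly available space
        lvextend -l +100%FREE /dev/VolGroup00/LogVol00
        # 4. grow the filesystem to fill the logical volume
        resize2fs /dev/VolGroup00/LogVol00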

    Read the article

  • "Outlook must be online or connected to complete this action" windows XP, outlook 2007, connect to exchange using HTTP

    - by bob franklin smith harriet
    Hey, I can't connect to an Exchange server using Windows XP and Outlook 2007 via the "connect anywhere over HTTP" process. It had been working until recently, and the user reports no recent changes to his environment. The error is "Outlook must be online or connected to complete this action". It will prompt me for the username and password, which I can enter, then it will give the error; however, this only happens when I delete the account and enter all the details for the Exchange server again. The client computer that is unable to connect using Outlook can connect to the HTTPS mail service, log in, and send/receive fine. Nobody else has reported issues. Making a test environment with a clean install of XP and Outlook 2007 gives the same error, but using Windows 7 and Outlook 2007 connects perfectly fine every time. I also removed all saved passwords using control keymgr.dll, which didn't help. Any assistance or ideas would be appreciated; at this point nothing I've tried from TechNet or Google works <_<

    Read the article

  • Sending an Email from 2 Mail Servers

    - by Ted Smith
    We are currently attempting to move away from using a "local" mail (Exchange) server to a cloud-based offering for all our automated emails. The problem is that we send and receive thousands of emails a day, and uptime is quite critical, so the business does not want to put all their eggs in one basket: if we use a cloud-based offering (Mailgun), they would like a backup in case it goes down. So my question is: would it be possible to set multiple A, TXT and CNAME records pointing to multiple IP addresses, so that if one mail server goes down we can automatically start sending emails from the failover (without them being blocked by a reverse DNS lookup)? I know we would still need to adjust the MX record for incoming emails, but it is acceptable to not receive emails for a short (1-2 hour) period. Does this make sense?

    Read the article

  • Compiling Assembly Manually [migrated]

    - by John Smith
    I am having trouble translating a specific line of assembly into machine code for the Nios II. I have successfully compiled these lines:

        START_TIMER = 0xF68C
        r0 = 0x0
        r8 = 0x8
        label = 50000

        addi r8, r8, %lo(label)     - 01000 01000 1100001101010000 000100
        subi r8, r8, 1              - 01000 01000 1111111111111111 000100
        bne  r8, r0, START_TIMER    - 01000 00000 1111011010001100 011110

    The line in question that I have trouble with is this one:

        orhi r8, r0, %hiadj(label)

    As explained in the handbook linked above, "%lo" means "Extract bits [15..0] of immed32" and "%hiadj" means "Extract bits [31..16] and adds bit 15 of immed32". However, 50000 in binary is 1100001101010000, and is therefore a 16-bit number. As far as I can see, it doesn't contain any bits between 16 and 31. I tried with 0000000000000001, but it's incorrect. What am I doing wrong?
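    As a worked check of the two extractions (treating the constant as its full 32-bit form, 50000 = 0x0000C350):

        %lo(50000)    = bits 15..0  of 0x0000C350 = 0xC350 = 1100001101010000
        %hiadj(50000) = bits 31..16 of 0x0000C350 + bit 15 = 0x0000 + 1 = 0x0001

    So by those definitions %hiadj yields 1 here; the upper bits come from the 32-bit value of the constant, not from the 16-bit pattern on its own.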

    Read the article

  • data replication from a production web server back to the staging web server

    - by Dennis Smith
    We have two web servers, development/staging and production. Code and some documentation is moved from the staging area to production either through on-demand jobs or nightly via a global replication job. The production server, of course, sits isolated in a DMZ. There is some content that gets uploaded to the live server that needs to be replicated back to staging. Our security team is locking the network down (as they should) and restricting access to the live server. What are the best suggestions for replicating "live" data back to "stage", and for backing up the live server as well?
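    A common pattern for this (a sketch only; the user, host and paths are placeholders): have the staging box pull the uploaded content over SSH on a schedule, so the live server in the DMZ never initiates connections back into the internal network.

        # run from the staging server, e.g. from cron
        rsync -avz -e ssh deploy@live.example.com:/var/www/uploads/ /var/www/uploads/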

    Read the article

  • Where do I connect the two red SATA cables on my motherboard?

    - by Dennis Smith
    I have a Compaq Presario PC, model SR5110NX. The processor is an AMD Athlon 64 3800+. It has 512 MB of RAM and a 40 GB hard drive, and I'm running Windows XP Professional on it. I have 2 SATA drives, one black and the other white. I have 2 little red cables, and they have letters and numbers on them; on one side of the cable it says "HP P/N:5188-2897 0720". My motherboard is an MCP61PM-HM Rev 1.0B. Where do I connect the two SATA connectors?

    Read the article

  • What is good usage scenario for Rackspace Cloud Files CDN (powered by AKAMAI) [closed]

    - by Andrew Smith
    I have just set up my website as a static page via Rackspace CDN / Akamai:

        www.example.co.uk is an alias for d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com.
        d9771e6f24423091aebc-345678991111238fabcdef6114258d0e1.r61.cf3.rackcdn.com is an alias for a61.rackcdn.com.
        a61.rackcdn.com is an alias for a61.rackcdn.com.mdc.edgesuite.net.
        a61.rackcdn.com.mdc.edgesuite.net is an alias for a63.dscg10.akamai.net.
        a63.dscg10.akamai.net has address 63.166.98.41
        a63.dscg10.akamai.net has address 63.166.98.40
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ecb9
        a63.dscg10.akamai.net has IPv6 address 2001:428:4c02::cda8:ed09

    The HTTP header:

        HTTP/1.0 200 OK
        Last-Modified: Fri, 19 Oct 2012 23:27:41 GMT
        ETag: fdf9e14b77def799e09e8ce815a521da
        X-Timestamp: 1350689261.23382
        Content-Type: text/html
        X-Trans-Id: tx457979be3bd746c2b4e5403a1189cdbc
        Cache-Control: public, max-age=900
        Expires: Sat, 27 Oct 2012 22:18:56 GMT
        Date: Sat, 27 Oct 2012 22:03:56 GMT
        Content-Length: 7124
        Connection: keep-alive

    I am wondering whether this is really the fastest solution to power the website. Investigating it through http://www.just-ping.com/, it seems that from many places the ping is very high, and during a quick investigation I found that they use GeoIP to resolve addresses based on WHOIS, which is not accurate; because of that, from many places the ping is above 300ms (for example, if an ISP's address block is registered in Bangalore, requests are routed to Bangalore even if that costs 300ms, and this persists for a period of a month). Meanwhile, by just using Amazon Web Services with Route 53's anycast DNS servers and only 4 EC2 instances, India, for example, is always below 100ms, while with Akamai it goes above 300ms in some cases; this is because Route 53 uses BGP. Quickly checking Akamai, it seems that they do not take feedback from the traffic: the high ping stays constant even if I keep downloading large files and videos, which is the opposite of what they say on their website. They state that they optimize performance by taking feedback from requests, while it seems they just use GeoIP with per-city resolution (mostly big cities). Because of this, AWS with Route 53 / anycast DNS seems much more reliable, as does EdgeCast, which uses BGP, but I don't know how much it costs to deploy a static website there. Actually, I'm not sure EdgeCast's claims hold up either, because from isolated places there are many errors, so their performance comes at the cost of delivery quality, with BGP switching routes during transfers of large files. So I am wondering what Akamai is really good for, because they don't seem to show a clear strength in anything I understand so far, except that they offer some software-based WAF on their website. What I really care about is the core distribution, so the question is: is Akamai really good for videos? For static websites? So far I have found AWS the most usable, with the most consistent ping and stable transfers.

    Read the article

  • How do I get Windows 7 wallpaper to display the company logo properly?

    - by David Silva Smith
    Windows 7 is not displaying our company background properly. Curves show pixelation and straight lines are jagged. I'm working with a scalable vector graphics (SVG) image that I've exported to the same resolution (pixel dimensions, to be technical) as the desktop, which is 1440x900. I have tried exporting the image as a .png, .jpg, and .bmp. All of these look correct in an image viewing program, such as Windows Photo Viewer and Paint, but when I set the Windows background to these images, curves show pixelation and straight lines are jagged. Reading online, it seems that behind the scenes, Windows is converting the image to a .jpg with low quality compression, which is causing the issue. I've tried setting the image as a background through Internet Explorer, saving it as a .jpg, and putting the file in the Windows photo directory as suggested in some online forums, but none of those solutions have fixed my issue.

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts the function that requires the cloudservers gem:

        package { "rubygems":
            ensure   => installed,
            provider => "gem";
        }

        package { "cloudservers":
            ensure   => installed,
            provider => "gem",
            require  => Package["rubygems"];
        }

        class hosts::us {
            $hosts = get_hosts("us")
            hostentry { $hosts: }
        }

        define hostentry() {
            $parts   = split($name, ',')
            $address = $parts[0]
            $ip      = $parts[1]
            $aliases = $parts[2]
            host { $address:
                ip           => $ip,
                host_aliases => $aliases
            }
        }

    Is there a way to stop the function being run so early, or at least to have its run depend upon the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?

    Read the article

  • phpredis + pconnect

    - by john smith
    I am using phpredis on my PHP-based website. The webserver I am using is the simplest Apache apt-get installation, with no configuration involved, as this is only a development environment. The issue I am facing is that, while using phpredis, there is basically no difference between the "connect" and "pconnect" commands: they both create a new connection every time, as I can see from the "info" command in redis-cli. Now, I am pretty sure it is because of the Apache configuration and the fact that it is probably (most likely) a multi-threaded environment, and therefore can't establish a single persistent connection. My question is basically for when I move into production: what would the best choice of webserver be to avoid this problem? I remember using lighttpd with thousands of users and still getting only a very few (like 2 or 3) connections on MongoDB. Any ideas? Thanks in advance.

    Read the article

  • Wacom tablet or Evoluent Vertical Mouse

    - by Bob Smith
    Having tried a number of mice that didn't help with wrist pain, I am contemplating buying a Wacom tablet or an Evoluent Vertical Mouse. I have heard great things about both of them. What do you recommend for someone with wrist pain in both hands that seems to be getting worse by the day? PS: I work mostly in a Windows and Visual Studio environment. I currently have the MS Natural Ergonomic Keyboard. I have started taking regular breaks and am also planning to see a doctor.

    Read the article

  • Setting up phpMemCacheAdmin on CentOS 5.5

    - by Bill Smith
    I have been able to set up phpMemCacheAdmin (http://code.google.com/p/phpmemcacheadmin/) on CentOS and am able to view the localhost memcache statistics, but whenever I add other memcached nodes the config is never changed. I am fairly certain it has something to do with permissions, however I am unable to track down what exactly needs to be done, or how to do it. The install was pretty straightforward:

        wget http://phpmemcacheadmin.googlecode.com/files/phpMemCacheAdmin-1.1.3r161.tar.gz
        tar xvzf phpMemCacheAdmin-1.1.3r161.tar.gz
        chmod +w Config/Memcache.ini

    But the docs also state that Apache needs read/write rights in the temp file folder (default: Temp/) and the entire config directory (Config/), and that is the part I am unsure of. Help!
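    A minimal sketch of the permission change being described (assuming Apache runs as the apache user/group on CentOS and phpMemCacheAdmin is unpacked in the current directory):

        chown -R apache:apache Config/ Temp/
        chmod -R u+rw Config/ Temp/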

    Read the article

  • Ubuntu server loses exactly 5 minutes once in a while

    - by Harold Smith
    I noticed that my server, an Ubuntu server 12.04, was losing time. I figured the hardware clock was off or maybe dying due to a faulty CMOS battery. I installed NTP to ensure the drift would be corrected, but to no avail. During a day it would lose 20 minutes or so. To debug, I created a small cron job to check against a remote servers time, which I knew to be correct. The script calculates the difference in seconds between local and remote time. The result was interesting. It seems to be losing exactly 5 minutes several times during the day. Look at this log (difference from remote server noted in seconds): Tue Oct 23 03:30:02 CEST 2012: 284 Tue Oct 23 03:35:02 CEST 2012: 284 Tue Oct 23 03:40:01 CEST 2012: 285 Tue Oct 23 03:45:02 CEST 2012: 285 Tue Oct 23 03:50:02 CEST 2012: 285 Tue Oct 23 03:55:02 CEST 2012: 284 Tue Oct 23 04:00:02 CEST 2012: 284 Tue Oct 23 04:05:01 CEST 2012: 285 Tue Oct 23 04:10:01 CEST 2012: 285 Tue Oct 23 04:15:02 CEST 2012: 585 Tue Oct 23 04:20:02 CEST 2012: 584 Tue Oct 23 04:25:02 CEST 2012: 584 Tue Oct 23 04:30:02 CEST 2012: 584 Tue Oct 23 04:35:01 CEST 2012: 585 Tue Oct 23 04:40:01 CEST 2012: 585 Tue Oct 23 04:45:02 CEST 2012: 585 Tue Oct 23 04:50:02 CEST 2012: 584 Tue Oct 23 04:55:02 CEST 2012: 584 Tue Oct 23 05:00:02 CEST 2012: 584 Tue Oct 23 05:05:01 CEST 2012: 585 Tue Oct 23 05:10:01 CEST 2012: 585 Tue Oct 23 05:15:02 CEST 2012: 585 Tue Oct 23 05:20:02 CEST 2012: 584 Tue Oct 23 05:25:02 CEST 2012: 584 Tue Oct 23 05:30:02 CEST 2012: 584 Tue Oct 23 05:35:01 CEST 2012: 585 Tue Oct 23 05:40:01 CEST 2012: 585 Tue Oct 23 05:45:02 CEST 2012: 584 Tue Oct 23 05:50:02 CEST 2012: 584 Tue Oct 23 05:55:02 CEST 2012: 584 Tue Oct 23 06:00:02 CEST 2012: 584 Tue Oct 23 06:05:03 CEST 2012: 584 Tue Oct 23 06:10:02 CEST 2012: 584 Tue Oct 23 06:15:01 CEST 2012: 585 Tue Oct 23 06:20:02 CEST 2012: 584 Tue Oct 23 06:25:02 CEST 2012: 584 Tue Oct 23 06:30:02 CEST 2012: 584 Tue Oct 23 06:35:02 CEST 2012: 584 Tue Oct 23 06:40:02 CEST 2012: 584 Tue Oct 23 06:45:01 CEST 2012: 585 Tue Oct 23 06:50:02 CEST 2012: 584 Tue Oct 23 06:55:01 CEST 2012: 585 Tue Oct 23 07:00:02 CEST 2012: 584 Tue Oct 23 07:05:02 CEST 2012: 584 Tue Oct 23 07:10:02 CEST 2012: 584 Tue Oct 23 07:15:02 CEST 2012: 584 Tue Oct 23 07:20:02 CEST 2012: 584 Tue Oct 23 07:25:02 CEST 2012: 584 Tue Oct 23 07:30:01 CEST 2012: 585 Tue Oct 23 07:35:02 CEST 2012: 584 Tue Oct 23 07:40:02 CEST 2012: 584 Tue Oct 23 07:45:02 CEST 2012: 584 Tue Oct 23 07:50:02 CEST 2012: 584 Tue Oct 23 07:55:02 CEST 2012: 584 Tue Oct 23 08:00:01 CEST 2012: 585 Tue Oct 23 08:05:02 CEST 2012: 584 Tue Oct 23 08:10:02 CEST 2012: 584 Tue Oct 23 08:15:02 CEST 2012: 584 Tue Oct 23 08:20:02 CEST 2012: 584 Tue Oct 23 08:25:02 CEST 2012: 584 Tue Oct 23 08:30:01 CEST 2012: 585 Tue Oct 23 08:35:02 CEST 2012: 584 Tue Oct 23 08:40:02 CEST 2012: 584 Tue Oct 23 08:45:02 CEST 2012: 584 Tue Oct 23 08:50:02 CEST 2012: 584 Tue Oct 23 08:55:02 CEST 2012: 584 Tue Oct 23 09:00:02 CEST 2012: 584 Tue Oct 23 09:05:03 CEST 2012: 584 Tue Oct 23 09:10:02 CEST 2012: 584 Tue Oct 23 09:15:02 CEST 2012: 584 Tue Oct 23 09:20:02 CEST 2012: 584 Tue Oct 23 09:25:02 CEST 2012: 584 Tue Oct 23 09:30:01 CEST 2012: 584 Tue Oct 23 09:35:02 CEST 2012: 584 Tue Oct 23 09:40:02 CEST 2012: 584 Tue Oct 23 09:45:02 CEST 2012: 584 Tue Oct 23 09:50:02 CEST 2012: 584 Tue Oct 23 09:55:02 CEST 2012: 584 Tue Oct 23 10:00:01 CEST 2012: 584 Tue Oct 23 10:05:02 CEST 2012: 584 Tue Oct 23 10:10:07 CEST 2012: 584 Tue Oct 23 10:15:02 CEST 2012: 584 Tue Oct 23 10:20:02 CEST 2012: 884 Tue Oct 23 10:25:02 CEST 
2012: 884 Tue Oct 23 10:30:02 CEST 2012: 883 Tue Oct 23 10:35:01 CEST 2012: 884 Tue Oct 23 10:40:02 CEST 2012: 884 Tue Oct 23 10:45:02 CEST 2012: 884 Tue Oct 23 10:50:02 CEST 2012: 884 Tue Oct 23 10:55:02 CEST 2012: 1184 Tue Oct 23 11:00:02 CEST 2012: 1183 Tue Oct 23 11:05:01 CEST 2012: 1184 Tue Oct 23 11:10:02 CEST 2012: 1184 Tue Oct 23 11:15:02 CEST 2012: 1184 Tue Oct 23 11:20:02 CEST 2012: 1184 This does not seem to be faulty CMOS battery in my opinion. But what do you think?
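    For reference, a minimal sketch of the kind of drift-check cron job described above (the remote host, user and log path are placeholders; it assumes the remote machine is reachable over SSH):

        #!/bin/bash
        # log the difference in seconds between a trusted remote clock and the local clock
        remote=$(ssh timecheck@remote.example.com date +%s)
        local_now=$(date +%s)
        echo "$(date): $((remote - local_now))" >> /var/log/clock-drift.log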

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >