Search Results

Search found 3701 results on 149 pages for 'cost threshold for parall'.


  • Consulting a network admin for Rails and PHP applications

    - by Karo Devos
    Hi, I'm a web developer who writes mostly Rails applications. Next month I'm switching from my current VPS to Linode, and I'm wondering how much it would cost to have everything properly set up (or to be taught how to do it) to get my app up and running. My requirements are roughly: nginx/Apache, REE/Ruby, Passenger, a full-blown PHP environment, system-wide RVM, a search engine such as Sphinx, and the ability to run cron jobs. I have some knowledge of Unix and was able to install everything I needed on my development system, but I ran into quite a few issues setting everything up on my production server.
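    A minimal bootstrap along those lines might look like the sketch below. It is only a sketch: the package names, app path, and Sphinx rake task (assuming Thinking Sphinx) are assumptions, while the RVM one-liner is the installer documented by the RVM project.

        sudo apt-get update && sudo apt-get install -y nginx php5-fpm build-essential
        curl -sSL https://get.rvm.io | sudo bash -s stable   # system-wide (multi-user) RVM
        sudo rvm install ree                                 # REE, or any newer Ruby
        rvmsudo gem install passenger
        rvmsudo passenger-install-nginx-module               # builds nginx with Passenger support
        # Hypothetical cron entry for nightly Sphinx reindexing:
        (crontab -l 2>/dev/null; echo "0 3 * * * cd /srv/myapp && rake ts:index") | crontab -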

    Read the article

  • Need solutions for sharing a 3Mb/768Kbps DSL line among 60+ users, and faster bandwidth

    - by elistp
    Two parts. Part 1: We currently have two DSL lines with 3Mb/768Kbps speeds, load-balanced, for 60+ users, and accessing the Internet is borderline unusable. The simple solution would be a faster DSL line, but the highest DSL package is 6Mb/768Kbps, comes with quite a price jump, and does nothing for upload speeds. I'm looking for free or extremely low-cost solutions (web cache, traffic shaping, bandwidth controls, etc.) to make Internet access more bearable until the next funding year. Can anyone give any advice? Part 2: We're looking into a 4.5Mb bonded T1 in the next funding year, which is of course significantly more expensive than two DSL lines. Are bonded T1s our only hope for faster speeds? Are there better alternatives?
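    As a sketch of the low-cost route: a caching proxy plus a simple egress shaper can make a thin pipe feel much fairer. The interface name and rates below are placeholders to adjust for the actual lines.

        sudo apt-get install squid        # caching proxy for the shared web traffic
        # Token-bucket shaper: cap upload slightly below the 768Kbps ceiling so
        # the DSL modem's buffer never fills and interactive latency stays sane.
        sudo tc qdisc add dev eth0 root tbf rate 600kbit burst 10kb latency 50ms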

    Read the article

  • Can you set up a Linode VPS as a reseller type scenario?

    - by MAZUMA
    First, I'm fairly green on this subject, so go easy. I do web design and development and would like to set up a way to be, so to speak, the central host for my clients, who are currently scattered across multiple hosting companies. I've looked at some of the cheaper options like LunarPages and some of the more robust options like Rackspace Cloud and Linode. Rackspace has a reseller program, but it's managed and at the top end on cost, and I'm not seeing any kind of reseller program for Linode. Can I purchase a larger Linode plan and use Apache2's VirtualHost configuration to handle multiple domains (about 8 domains now, though hopefully that will grow)? Or am I not thinking about this correctly?
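    Name-based virtual hosting is exactly what VirtualHost is for; no reseller program is needed. A minimal sketch, one site file per client, with the domain and paths as placeholders:

        # /etc/apache2/sites-available/clientone.example:
        <VirtualHost *:80>
            ServerName clientone.example
            DocumentRoot /srv/www/clientone.example/public
        </VirtualHost>

        # then enable it and reload Apache:
        sudo a2ensite clientone.example && sudo service apache2 reload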

    Read the article

  • Can I run my OS from a DVD?

    - by Dave D
    I'm thinking of ways to get around the high cost of hard drives lately. I was thinking an optical jukebox would be interesting (though more expensive than just buying a hard drive), and then I thought: I've heard of OSes run from DVD, so why not boot from a Blu-ray drive? I think a smaller OS like a Linux flavor would work, but I'd like to know if there's a way I could burn Windows 7 to DVD for this use. Just curious. Anyone know if this is possible? Thanks, Mac

    Read the article

  • "Countersigning" a CA with openssl

    - by Tom O'Connor
    I'm pretty used to creating the PKI used for x509 authentication, SSL client verification being the main reason for doing it. I've just started to dabble with OpenVPN (which I suppose does the same things as Apache would with the Certificate Authority (CA) certificate). We've got a whole bunch of subdomains and appliances which currently all present their own self-signed certificates. We're tired of having to accept exceptions in Chrome, and we think it must look pretty rough for our clients to have our address bar come up red. For that, I'm comfortable buying an SSL wildcard, CN=*.mycompany.com. That's no problem. What I can't seem to find out is: can we have our internal CA root signed as a child of our wildcard certificate, so that installing that cert into guest devices/browsers doesn't prompt anything about an untrusted root? Also, as a bit of a side point, why does the addition of a wildcard double the cost of the certificate?
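    Part of the answer is visible in the certificate itself: only a certificate whose X509v3 Basic Constraints say CA:TRUE may sign other certificates, and a commercial wildcard is an end-entity cert. A quick check (the filename is a placeholder):

        openssl x509 -in wildcard.crt -noout -text | grep -A1 'Basic Constraints'
        # An end-entity cert prints CA:FALSE, so it cannot sign an internal
        # root; browsers would reject such a chain even if openssl built it.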

    Read the article

  • Transporting servers - need special rack/case

    - by Nso
    I am responsible for our company's server infrastructure at trade shows. We have two annual shows, one in Las Vegas and one in Amsterdam, so our servers obviously do quite a bit of travelling. Quite often the gear gets home with pieces falling off, and insurance/rebuilding takes ages and costs a lot of money. For now I have been using a wooden rack box with steel-reinforced sides and corners, but I am looking for something tougher. Does anyone have experience with sending servers all around the world without them dying all the time?

    Read the article

  • How to track the touch vector?

    - by mystify
    I need to calculate the direction of a touch drag, to determine whether the user is dragging up the screen or down. Actually pretty simple, right? But: 1) the finger goes down and -touchesBegan:withEvent: is called; 2) you must wait until the finger moves and -touchesMoved:withEvent: is called; 3) problem: at this point it's dangerous to tell whether the user dragged up or down. My thoughts: check the time and accumulate calculated vectors until it's safe to tell the direction of the touch. Easy? No. Think about it: what if the user holds the finger down for 5 minutes on the same spot, but THEN decides to move up or down? BANG! Your code would fail, because it tried to determine the direction of the touch when the finger hadn't really moved. Problem 2: when the finger goes down and stays on the same spot for a few seconds because the user is undecided about what to do next, you'll very likely get a lot of -touchesMoved:withEvent: calls, but with very minor changes in touch location. So my next thought: do the accumulation in -touchesMoved:withEvent:, but only once a certain threshold of movement has been exceeded. I bet you have better concepts in place?

    Read the article

  • Running a home mail server using dynamic dns

    - by user4009
    Hi, is it possible to run an email server on my home box using dynamic DNS? The scenario: I want to auto-CC all incoming and outgoing emails from one of my accounts to another, via some server-side config instead of configuring email client rules. I have tried Google Apps Mail, but it doesn't allow auto-CC of outgoing emails. After having read tons of blogs, forum messages, etc. (hoping I have been reading the correct info), the only option to achieve what I need seems to be setting up my own mail server, but the cost of a static IP doesn't fit my budget. Can someone please point me in the right direction? Platform doesn't matter; I can set up a Windows or Linux server. Many thanks
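    On the auto-CC part: if the box ends up running Postfix, the server-side copy is a pair of lookup tables rather than client rules. A hedged sketch, with the addresses as placeholders:

        # Map: mail from/to me@example.com also gets delivered to archive@example.com
        echo "me@example.com  archive@example.com" | sudo tee /etc/postfix/bcc_map
        sudo postmap /etc/postfix/bcc_map
        sudo postconf -e 'sender_bcc_maps = hash:/etc/postfix/bcc_map'      # outgoing copies
        sudo postconf -e 'recipient_bcc_maps = hash:/etc/postfix/bcc_map'   # incoming copies
        sudo postfix reload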

    Read the article

  • Does Entity Framework or the MySQL provider swallow timeout exceptions on enumeration of results?!

    - by Freddy Rios
    I'm trying to make sense of a situation I have using Entity Framework on .NET 3.5 SP1 + MySQL 6.1.2.0 as the provider. It involves the following code:

        Response.Write("Products: " + plist.Count() + "<br />");
        var total = 0;
        foreach (var p in plist)
        {
            // ... some actions
            total++;
            // ... other actions
        }
        Response.Write("Total Products Checked: " + total + "<br />");

    Basically, the total products count varies on each run, and it doesn't match the full total in plist; it varies widely, from about a fifth to half. There isn't any control-flow code inside the foreach — no break, continue, try/catch, or conditions around total++ — nothing that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match the lower and higher total runs. I can't find any reason for the above, other than something in Entity Framework or the MySQL provider causing it to end the foreach when retrieving an item. The body of the foreach can vary a fair amount in running time, as the actions involve file and network access; my best guess is that when an iteration takes beyond a certain threshold, some kind of timeout fires in the underlying framework/provider and, instead of raising an exception, it silently reports no more items for enumeration. Can anyone shed some light on the above scenario and/or confirm whether the Entity Framework/MySQL provider has this behavior?

    Read the article

  • Which SSL certificate to buy [closed]

    - by Sparsh Gupta
    I am reading several notes on SSL certificates and comparisons. What matters most to me is speed. I can read that encryption is the same with all the different certificates available, but I was wondering if there is any difference in website performance with different certificates involved. I am of course interested in end-to-end response times, and I wonder whether the type of encryption or the number of chain certificates required makes a difference in speed. I don't really care about cost; I'm looking for a good SSL certificate that ideally gives me absolutely no pain and the best performance. Recommendations?
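    Chain length is one measurable factor: every extra chain certificate adds bytes (and potentially a revocation lookup) to the TLS handshake. A quick way to count what a server actually sends, with example.com as a placeholder:

        openssl s_client -showcerts -connect example.com:443 </dev/null 2>/dev/null \
          | grep -c 'BEGIN CERTIFICATE'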

    Read the article

  • Windows updates behind a physical firewall with only IP-based rules, and generic outbound connections turned off

    - by user125245
    I have some boxes that I do not want to allow any inbound or outbound traffic to the Internet, except for Windows updates. However, the firewall in place (Cisco ASA) apparently only supports IP-based rules, and as best I can tell, access to Microsoft Update via anything other than the half-dozen URL masks Microsoft lists as required does not appear possible. I have kicked around building a full WSUS server that I would manually copy the update files to, so that no direct Microsoft access is needed, but this sounds very top-heavy for the very few boxes involved. I have also kicked around manual updates all round, but am not certain how to conveniently and confidently ensure that the correct updates are applied in the correct order. Any ideas from any direction would be appreciated. I want this as simple and cost-effective as possible, but have very little flexibility on the only-absolutely-required-internet-access policy.

    Read the article

  • Determining the State of a User using their Hostname

    - by PhpMyCoder
    Not sure if this is the right SE site; I figured this question doesn't belong on SO, but if you think it doesn't belong here either, I apologize. I've been looking into determining the location, specifically the state, of a user accessing my website. One option I've known about for a while is the GeoIP City database, but it isn't the most cost-effective solution, and I'm cheap, so I was looking for a less expensive way. Something that occurred to me is that my state appears in the public hostname assigned to me by Comcast: (dash-separated IP).hsd1.ma.comcast.net. Could it be that other ISPs follow this same pattern of inserting the state abbreviation into their users' hostnames? I've been looking around for a list of hostnames for other ISPs, but I haven't found anything. Can anyone verify that this holds true for other major ISPs?
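    For what it's worth, extracting that token is a one-liner where the pattern does hold. This sketch assumes the Comcast-style name and uses a placeholder address; a miss should simply be treated as unknown, since many ISPs don't embed location this way:

        ip="203.0.113.7"                 # hypothetical visitor address
        name=$(dig +short -x "$ip")
        # Comcast-style reverse names look like: <dashed-ip>.hsd1.<state>.comcast.net.
        state=$(printf '%s\n' "$name" | sed -n 's/.*\.hsd1\.\([a-z][a-z]\)\..*/\1/p')
        echo "${state:-unknown}"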

    Read the article

  • How large should an administration team be? [closed]

    - by Artyom
    I'm trying to find an answer to how many server administrators/technicians are required to run a server farm with 24/7 availability of, say, 10, 100, or 1000 Linux servers. Are there any studies on this? Edit: I did not expect this question to be closed. There are lots of studies in, for example, software development, where from "lines of code" you can approximate the development cost (COCOMO), so I was searching for something similar in administration. Note, I understand 100% that this is not a straightforward or easy-to-answer question, but it is a real question...

    Read the article

  • Raid-3 like software backup tool

    - by Chronial
    I have a lot of data (about 7 TB) stored across multiple hard drives of varying sizes, and I would like a backup of that data to be safe against drive failure. A RAID is not a good option for me, as I want to keep my costs low and be able to easily extend the storage capacity of my setup by buying an additional HD. I remember seeing a piece of software that generates parity data over all drives and stores it on an extra drive. That solution protects the setup from hard drive failure and works with varying drive sizes (as long as the parity drive is the biggest one). But I can't seem to find that software again. Does anybody know what I'm talking about, or have any other solution for my situation?

    Read the article

  • .NET Remoting: Getting underlying socket?

    - by Alan
    Hi, I'm writing a light remoting app to assist in debugging a problem with remoting communication. This app mimics much of what a larger application does: it periodically sends a heartbeat to a peer application, and periodically verifies that a heartbeat has been received within some time threshold. What we're seeing is that in our big application the heartbeats seem to get dropped: one peer will go for long periods without seeing heartbeats from another peer, until the peer that is "dead" is restarted. The big application is responsive in all other ways. We believe it has something to do with the network setup; we were able to repro the problem locally and fixed it by making some configuration changes to our test environment. To help our customer diagnose the issue, the mini remoting app needs to log as much information as possible. So, is there a way to get the underlying socket for a remoting connection? I'm aware that I could write a custom sink for this, but I'd like to keep the actual remoting process as close as possible to what is implemented in the big app. Also, as an aside, any ideas why the big app might be "dropping" heartbeats?

    Read the article

  • How can I make a non-destructive copy of a (NTFS) partition?

    - by violet313
    I want to recover some deleted files from a healthy NTFS partition on an undamaged hard disk. In order to leave the partition undisturbed, I plan to use dd to clone the partition to a raw image file and then attempt recovery from that mounted clone. Will dd if=/dev/sd<xn> of=/path/to/output.img perform a non-destructive copy? Is attempting a restore from a clone made with dd the best approach? [Edit, w.r.t. Deltik's answer: I need to be a bit clearer about what I'm asking.] E.g., is there some software that can do something more with the original sectors? E.g., if it were a damaged hard disk, I am aware that any kind of read is potentially destructive. But assuming my disk head is not going to suddenly misbehave, am I reducing my chances of a successful recovery (at any cost) by using an apparently non-destructive single read of my undamaged hard disk? (BTW: I am planning on using ntfsundelete and testdisk for the recovery.)
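    dd only reads from the if= device, so the copy itself is non-destructive. A minimal sketch of the clone-then-recover flow, with the partition name and paths as placeholders:

        sudo dd if=/dev/sdXN of=/path/to/output.img bs=4M conv=noerror,sync
        sudo mount -o ro,loop /path/to/output.img /mnt/clone   # read-only sanity check
        # ntfsundelete and testdisk can then be pointed at the image file
        # instead of the original disk.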

    Read the article

  • Do I have to use a DNS PTR?

    - by JrSysAdmin
    I am currently working on a site in my free time with a few other guys, and we want to redirect xxx.com to our new site, yyy.com. So we have xxx.com set to redirect to 216.111.11.1, which is the IP for yyy.com. However, this just says the website is unavailable, so it seems as though we need a DNS PTR to redirect 216.111.11.1 to yyy.com. Is there any way to do this without a DNS PTR? The pointer will cost us $15, and it just seems like there should be a better way to go about this. Any ideas?
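    A note on the mechanics, as a sketch: a PTR record only maps IP-to-name for reverse lookups; it never makes a browser show yyy.com. What usually breaks in this setup is that the web server at that IP doesn't answer for the hostname xxx.com (name-based virtual hosting), so the fix is an A record for xxx.com plus an HTTP redirect or virtual host on yyy.com's server — not a PTR. Two quick checks:

        dig +short xxx.com A                            # does the name resolve at all?
        curl -sI http://216.111.11.1 -H 'Host: xxx.com' | head -n 1   # does the server answer for it?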

    Read the article

  • In BASH, are wildcard expansions guaranteed to be in order?

    - by ArtB
    Is the expansion of a wildcard in bash guaranteed to be in alphabetical order? I was forced to split a large file into 10MB pieces so that they would be accepted by my Mercurial repository. So I was thinking I could use: split -b 10485760 Big.file BigFilePiece. and then, in place of: cat Big.file | bigFileProcessor I could do: cat BigFilePiece.* | bigFileProcessor However, I could not find anywhere a guarantee that the expansion of the asterisk (aka wildcard, aka '*') would always be in alphabetical order, so that .aa came before .ab (as opposed to timestamp ordering or something like that). Also, are there any flaws in my plan? How great is the performance cost of cat-ing the pieces together?
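    The shell does sort glob matches — POSIX specifies that pathname expansion results are sorted by the current locale's collating sequence — so the pieces arrive in order unless the locale collates surprisingly. Pinning the C locale makes it plain byte order, which is exactly what split's aa, ab, ac... suffixes assume, and the cost of the extra cat is just pipe copying, negligible next to the disk reads:

        export LC_ALL=C                         # byte-order (ASCII) collation
        split -b 10485760 Big.file BigFilePiece.
        cat BigFilePiece.* | bigFileProcessor   # expands as .aa, .ab, .ac, ...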

    Read the article

  • Copy an Amazon EC2 Instance to use locally

    - by Excolo
    OK, so we have a spare server on which I have installed Debian Wheezy and set up Xen for virtual machines. It has better performance than all our EC2 instances combined and will cost less to run (for various reasons). I would like to get the EC2 instances downloaded to my server and converted to run under Xen, but I'm having difficulty finding anything specific. I did not set up the EC2 instances myself and am not very familiar with them. Everything I have found (which isn't much) just says "do XYZ" with no explanation of how, so being as specific as possible would be helpful. Also, confusingly, I see people writing in forums that you can only export Linux images (which mine are — Ubuntu images), but then Amazon's export tool says you can only export Windows Server. Am I missing something here? Is that not the right place to be looking? Thanks
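    One well-worn approach that sidesteps the export tool entirely is to image the instance's root device over SSH and attach the result to a Xen guest as a file-backed disk. A hedged sketch — the hostname, user, and device name are placeholders, and the instance should be as quiet as possible while it runs:

        ssh ubuntu@ec2-host 'sudo dd if=/dev/xvda1 bs=4M | gzip -c' | gunzip -c > rootfs.img
        # The guest's Xen .cfg can then reference the file-backed disk, e.g.:
        #   disk = ['file:/var/lib/xen/images/rootfs.img,xvda,w']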

    Read the article

  • Converting an Small Business Server to a Workstation

    - by noway
    I am planning to buy a Dell PowerEdge T110 server and convert it into a workstation similar to a Dell Precision T1700. The reasoning behind this is cost: if I do it myself, it costs half as much. However, I wonder what might go wrong this way? The things I have thought of are: client OSes are not officially supported; there might be some driver problems; the chassis is designed as a server case, so there are few useful ports on the front; and the server boots more slowly than usual PCs. What else might be a problem?

    Read the article

  • Visual Studio Development on Virtual Box, Boot Camp, or VMWare Fusion

    - by Eli
    I currently have a Mac, 2 GHz with 2 GB of RAM, running OS X Leopard and VirtualBox with a Windows 7 Pro 32-bit virtual machine. Performance on the virtual machine is fine for minor tasks but is very clunky while multitasking or developing in Visual Studio 2008. What would be my best option for being able to use Visual Studio, keeping cost and time in mind? 1) Upgrade RAM to 4 GB ($100). Will this really improve my performance enough to use Visual Studio in a Windows 7 VM, or am I just wasting time/money? 2) Reinstall/restore the Windows 7 disk image as a Boot Camp partition. I assume this should improve my performance, yes? 3) Purchase VMware Fusion instead of VirtualBox. Does Fusion require fewer resources to run? I am open to any suggestions. Thanks in advance

    Read the article

  • How to alter Postgres table data based on its contents?

    - by williamjones
    This is probably a super simple question, but I'm struggling to come up with the right keywords to find it on Google. I have a Postgres table that has, among its contents, a text column named content_type that stores what type of entry is stored in that row. There are only about 5 different types, and I decided I want one of them to display as something else in my application (I had been displaying these values directly). It struck me as funny that my view is being dictated by my database model, so I decided to convert the types stored in my database from strings into integers and enumerate the possible types in my application with constants that convert them into their display names. That way, if I ever get the urge to change any category names again, I can just change one constant. I also have a hunch that storing integers might be somewhat more efficient than storing text in the database. First, a quick threshold question: is this a good idea? Any feedback, or anything I missed? Second, and my main question: what Postgres commands could I enter to make an alteration like this? I'm thinking I could start by renaming the old content_type column to old_content_type and then creating a new integer column content_type. However, what command would look at a row's old_content_type and fill in the new content_type column based on it?
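    On the "what command" part, an UPDATE with a CASE expression does the backfill. A minimal sketch of the migration file — the table name and the string-to-integer mapping are hypothetical — to be run with something like psql mydb -f migrate.sql:

        -- migrate.sql: rename, add, backfill
        ALTER TABLE entries RENAME COLUMN content_type TO old_content_type;
        ALTER TABLE entries ADD COLUMN content_type integer;
        -- Fill the new column by mapping each row's old string value:
        UPDATE entries SET content_type =
          CASE old_content_type
            WHEN 'article' THEN 1
            WHEN 'photo'   THEN 2
            ELSE 0
          END;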

    Read the article

  • How to increase the disk cache of Windows 7

    - by Mark Christiaens
    Under Windows 7 (64-bit), I'm reading through 9000 moderately sized files; in total, there is more than 200 MB of data. Using Java (JDK 1.6.21) I'm iterating over the files: the first 1400 or so go at full speed, but then speed drops off to 4 ms per file. It turns out that the main cost is incurred simply by opening the files, which I'm doing with new FileInputStream (and of course closing them in time to avoid file-handle leaks). After some investigating, I see that Windows' disk cache is using only 100 MB or so of RAM, although I have 8 GiB available. I've tried increasing the cache size using the CacheSet tool, but any values I provide are considered out of range. I've also tried enabling the LargeSystemCache registry key, but (after rebooting) the CacheSet tool still indicates I'm using 100 MB of cache (and the number doesn't increase during the test run). Does anybody have any suggestions to "encourage" Windows 7 to cache my 9000 files?

    Read the article

  • First-class help desk solution? [closed]

    - by Andy Gregory
    For larger companies, a help desk is a central place to support the users of their products and services; to find the solution that best fits your business requirements, it is important to research, examine, and compare help desk software. As far as I know, Hesk is free help desk software with somewhat limited features, while h2desk provides a hosted solution. For my small business, I just need web-based help desk software that provides ticket management and a knowledge base/FAQ, with unlimited staff support. Freeware help desk software may not meet our needs, so we are willing to pay for an effective help desk solution, but it should be low cost. We have narrowed it down to two choices: iKode Helpdesk and NetHelpDesk. Our help desk team is leaning toward iKode Helpdesk. Any other efficient, first-class help desk solutions to share?

    Read the article

  • Getting lots of JavaScript problems when using Opera 11.00 to surf

    - by s hanley
    Sites like eBay and even Super User stop working properly when I use Opera 11.00. Menus stop working everywhere from eBay to GoDaddy: hovering on a menu item doesn't expand it, and no submenu slides out. This makes a large number of very popular websites unusable. Am I right in assuming this is a JavaScript issue? I use Opera for the Turbo feature (I have tested Opera with and without Turbo, so it's not Turbo's fault), because I'm on mobile broadband until I get my phone line sorted out. Turbo helps me save money as well as letting me surf at a sane speed. Is there a Firefox or Chrome equivalent to Opera Turbo that doesn't cost money? I'm using Opera 11.00, build 1156.

    Read the article
