Search Results

Search found 20116 results on 805 pages for 'network flow'.

  • Do support sites like stackoverflow upset the paid-support open source model?

    - by ajax81
    In order to stay relevant in the marketplace, I'm researching new business models for my software company. The open source model with paid support seems like a good fit for our product, but I have concerns about whether a paid-support model is viable in an era where top-notch help is readily available for free on sites like those in the StackExchange network. Case in point -- I moved my employees to Ubuntu last year because I didn't want to pay for Win 7 licenses and new hardware (plus, the Mono platform was highly attractive). My staff had no Linux experience, but were able to achieve relative competency in about 120 days with the help of AskUbuntu, StackOverflow, and a few "For Dummies" books. We did employ an Ubuntu consultant for 7 days to provide training and support, but beyond that spent $0.00 on any kind of paid expertise. As part of my due diligence, I ran a 3-month beta of the freemium-paid-support model with one of our smaller customers, and achieved mediocre results. I'd like to think it's because our software is so stable and easy to use that the customer didn't need much paid support, but I suspect that they circumvented the terms of our SLA in the same manner that we did with the move to Ubuntu. Does anyone out there have any thoughts, advice, or experience relevant to the move I'm considering? What worked, what didn't, etc.? Thanks in advance!

  • Huawei e303c data-card not working on Ubuntu 11.04?

    - by Umashankar
    Cheers to you. I have a problem making a mobile-broadband connection in Ubuntu 11.04 using a 'Huawei e303c' USB data-card. I'm using a Tata Docomo 3G SIM card (India, circle: Maharashtra). My observations:
    1. I installed the device's driver, 'Mobile Partner for Linux' (which came with the device), but it does not detect my device.
    2. In Network Manager, adding a mobile-broadband connection does not detect the device (with or without the device's driver installed).
    3. I tried tools like usb_modeswitch, gnome-ppp, wvdial, and sakis3g and followed their guidelines. These didn't work either.
    4. Without the driver, the system identifies the device (a Mobile Partner icon comes up that leads to the driver setup files). But after installing the driver, nothing comes up there.
    5. In all the above cases, when the 'lsusb' command is run, the prompt shows the connected data-card (as 'VENDOR_ID:PRODUCT_ID Huawei Technologies Ltd.'). This is my problem. Please suggest a solution to get my device connected. -Umash
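
    Devices like the E303 typically enumerate first as a USB storage device (the virtual CD holding the Windows driver) and must be switched into modem mode before Network Manager can see them. A minimal usb_modeswitch sketch follows; the product IDs are illustrative and should be taken from your own lsusb output, and the -J (Huawei) flag may not apply to every firmware revision:

        # find the current vendor:product ID of the data-card
        lsusb | grep -i huawei

        # switch from storage mode to modem mode (IDs are examples only)
        sudo usb_modeswitch -v 12d1 -p 1f01 -J

        # a ttyUSB* device should now appear for wvdial / Network Manager
        dmesg | tail
        ls /dev/ttyUSB*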

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I need guidance on setting up my cluster configuration, which consists of a set of Ubuntu 12.04 servers. We currently have a primary node, which resides in a US datacenter; we are going to use it for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster using Virtualmin's cluster modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:
    1. Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04 (currently IN USE, serving virtual hosts off a sub-domain)
    2. AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
    3. P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 x 25 GB disks in a RAID configuration (one in use for the host operating system)
    My question is this: how can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for sharing resources, redundancy (HA?), and NFS with the two RAID disks, without having to FORMAT the secondary node, start fresh moving all my services onto a DRBD network drive or something similar, and then restore all of Virtualmin's active virtual hosts? The idea is that I want minimal downtime for the people currently being served from server2.mywebsite.com. From what I understand, all services need to be on an NFS share so that they can be mounted on demand and accessed by the machine taking over (i.e. Heartbeat + DRBD), but my issue is that I already have all these services installed in their default directory structure. How can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime and without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands to get started would be great! Thanks!
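
    To keep downtime low, one common pattern is to export the existing service directories over NFS first and only later migrate them onto DRBD. A minimal sketch, assuming the in-use server is 192.168.1.10 and the data lives under /srv (the paths and addresses are placeholders):

        # on the current server: export the directory (line in /etc/exports)
        /srv  192.168.1.0/24(rw,sync,no_subtree_check)

        # apply the exports without restarting the NFS server
        sudo exportfs -ra

        # on the other nodes: mount it
        sudo mount -t nfs 192.168.1.10:/srv /srv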

  • ZFS: RAIDZ versus stripe with ditto blocks

    - by RandomInsano
    I'm going to build a ZFS file server on FreeBSD. I learned recently that I can't expand a RAIDZ vdev once it's part of the pool. That's a problem, since I'm a home user and will probably add one disk a year, tops. But what if I set copies=3 on my entire pool and just throw individual drives into the pool separately? I've read somewhere that the copies will be distributed across drives if possible. Is there a guarantee there? I really just want protection from bit rot and drive failure on the cheap. Speed's not an issue, since it'll go over a 1Gb network and at MOST stream 720p podcasts. Would my data be guaranteed safe from a single drive failure? Are there things I'm not considering? Any and all input is appreciated.
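
    For reference, setting ditto blocks on a pool of independent disks is a one-liner. A sketch, assuming a pool named "tank" built from single-disk vdevs (device names are placeholders):

        # pool of individual drives, no RAIDZ
        zpool create tank /dev/ada1 /dev/ada2 /dev/ada3

        # keep three copies of every data block; ZFS spreads the copies
        # across vdevs when it can, but this is best-effort, not a
        # guarantee that a whole-drive failure is survivable
        zfs set copies=3 tank

        # later: grow the pool one disk at a time
        zpool add tank /dev/ada4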

  • Send notification from HTTP bot (RESTful service or whatever)

    - by Kuroki Kaze
    I have a very simple bot that gathers and parses web pages. It's on a machine in our network, behind NAT (so I cannot set up a web server, for example), and I don't have an MTA set up. The bot should notify me about changes in the parsed pages (once an hour or two, to one recipient). How can this be done? Are there any RESTful email gateways, like the SMS ones? I could set up a Twitter account for it and use curl to post statuses/DMs, but it's a very temporary bot.
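
    Transactional-mail services expose exactly this kind of HTTP-to-email gateway. A minimal curl sketch using Mailgun's documented messages endpoint (the domain, API key, and addresses are placeholders you would replace with your own account details):

        curl -s --user 'api:YOUR_API_KEY' \
            https://api.mailgun.net/v3/example.mailgun.org/messages \
            -F from='Bot <bot@example.mailgun.org>' \
            -F to='you@example.com' \
            -F subject='Page changed' \
            -F text='The parsed page changed at ...'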

  • Displaying Nagios on a 52" 1080p screen

    - by gdm
    I'm using a 52" 1080p LCD screen to monitor Nagios, positioned where most of the users can see it. Using the default Nagios web view sort of sucks, since you need to increase the text size a decent amount so it's legible from a distance, and then the "Current Network Status", "Host Status Totals", and other boxes along the top take up the majority of the screen realestate; you can't really see the list of host details. Is there a custom view for Nagios, or a plugin, or something available which is meant to display Nagios details on a large screen with large text?

  • Ubuntu connects to wireless but doesn't work!

    - by UbuntuUser990
    I just installed Ubuntu 14.04 LTS on my laptop. It connects to my wireless network and shows that it is connected, but when I try to load a webpage, or do anything like download Steam for Dota 2, it doesn't work. When I try to connect my Facebook account to Ubuntu, it doesn't work either. I want to be able to use Ubuntu with working internet access. What do I do? When I run ifconfig, this shows up:

        eth0  Link encap:Ethernet  HWaddr 88:ae:1d:60:56:eb
              inet addr:192.168.1.13  Bcast:192.168.1.255  Mask:255.255.255.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:16

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:24 errors:0 dropped:0 overruns:0 frame:0
              TX packets:24 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1856 (1.8 KB)  TX bytes:1856 (1.8 KB)

    When I type 'ping askubuntu.com', this shows up: ping: unknown host askubuntu.com
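
    Note that the output above lists no wireless interface at all (only eth0 and lo), so this points at name resolution and/or the wireless driver rather than the browser. A few standard diagnostics (a sketch; 8.8.8.8 is just a well-known public resolver used as a reachability test):

        # does raw IP connectivity work? if this succeeds, DNS is the problem
        ping -c 3 8.8.8.8

        # which resolver is configured?
        cat /etc/resolv.conf

        # is the wireless card's driver loaded at all?
        lspci -k | grep -A3 -i network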

  • Webserver on a roaming machine with a NAT IP or changing IPs

    - by hpsoftware
    I'll have to elaborate my question, so please have patience. Explaining the logic: if you are familiar with LogMeIn, it installs a client on your computer and then keeps track of where that computer is, as long as it's connected to the internet. So you can always access your computer, no matter where it is or what its IP is; you just go to logmein.com and access it. Now, what I am asking: let's assume I have a website hosted on my laptop; call it a webserver. As I move around, I get a new IP, sometimes even on a hotel network. Is it possible to do something like what LogMeIn does, so I can keep moving my webserver to new IPs, with some local client that keeps updating my IP or something? I am sure I would need a gateway server somewhere, connected to my domain name via DNS, so that somebody accessing www.mywebsite.com goes to my main server and then gets routed to my laptop, which could be anywhere, but which my gateway server is able to communicate with. I will keep updating the case description based on comments to make more sense. Please have patience with me. Regards
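
    The gateway-plus-roaming-client pattern can be approximated with a reverse SSH tunnel: the laptop dials out to a fixed server and publishes its local web port there, so NAT and changing IPs don't matter. A minimal sketch, assuming a gateway at gateway.example.com you control (names and ports are placeholders):

        # on the roaming laptop: forward gateway port 8080 back to local port 80
        ssh -N -R 8080:localhost:80 user@gateway.example.com

        # autossh re-establishes the tunnel whenever the network changes
        autossh -M 0 -o ServerAliveInterval=30 -N -R 8080:localhost:80 user@gateway.example.com

    Visitors hit the gateway (directly, or via a reverse proxy on port 80), and the gateway hands requests down the tunnel to wherever the laptop currently is.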

  • WSUS setup on ADS Domain Controller VM

    - by NickC
    How should I set up sharing/directory permissions to allow the following to work:
    VMHost - Hyper-V partition (not part of the domain)
    ADServer - Active Directory domain controller running as a guest VM on VMHost
    \\VMHost\Updates - disk share for WSUS to put the updates and other local files
    The problem is: what permissions do I need to give to \\VMHost\Updates to allow WSUS to work with this directory, to avoid the "The WSUS content directory is not accessible" error which I am currently seeing? As far as I can tell, WSUS runs as the "domain\Network Service" account. The question is, without adding VMHost to the domain, how do I give that user appropriate permissions to this directory? Is there a way that VMHost can be told to trust ADServer and then be able to use user accounts from there? This sort of relates to my other post here: Win 2012 Domain controller VM, should the Hyper-V host be part of the domain? Thanks, Nick
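
    Since the host is not domain-joined, it cannot resolve the domain's Network Service identity; the usual workarounds are a local account on VMHost with credentials matching an account the WSUS box uses, or (less safely) opening the share broadly. A hedged sketch of the commands on VMHost, with placeholder paths, showing the broad-permissions variant:

        rem share the folder and grant change access
        net share Updates=D:\Updates /GRANT:Everyone,CHANGE

        rem NTFS permissions on the folder itself
        icacls D:\Updates /grant "Everyone:(OI)(CI)M"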

  • What is the aim of this email? Is this a ping/sping? [closed]

    - by mplungjan
    Hi, I received this spam in my catch-all. As the webmaster of the domain it was sent to, I am really curious what the reason for this mail is. It was sent to a non-existent user "tania" on my domain (here I used mydomain.zzz). What does the sender want to achieve? Since many mail servers have stopped backscattering, not getting a bounce would not mean anything, would it? And if this is off topic, where in the StackExchange network WOULD it be on topic?

        Delivered-To: [email protected]
        Received: (qmail 8015 invoked from network); 27 Jan 2011 02:32:47 -0000
        Received: from unknown (HELO p3pismtp01-021.prod.phx3.secureserver.net) ([10.6.12.26])
          (envelope-sender <[email protected]>)
          by smtp35.prod.mesa1.secureserver.net (qmail-1.03) with SMTP
          for <[email protected]>; 27 Jan 2011 02:32:47 -0000
        X-IronPort-Anti-Spam-Result: At4FAAlnQE1GVjtCVGdsb2JhbACWXo4gCwEWCA0YJLwyhU8EhRc
        Received: from mx.dt3ls.com ([70.86.59.66])
          by p3pismtp01-021.prod.phx3.secureserver.net with ESMTP; 26 Jan 2011 19:32:47 -0700
        Received: from 70.86.59.66 by mx.dt3ls.com (Merak 8.9.1) with ASMTP
          id JXF39710 for <[email protected]>; Wed, 26 Jan 2011 17:31:10 -0500
        Return-Path: [email protected]
        Status:
        Message-ID: <20110126173109.4d9d6c3f2b@1c3c>
        From: "Tech Support" <[email protected]>
        To: <[email protected]>
        Subject: Information, as instructed.
        Date: Wed, 26 Jan 2011 17:31:09 -0500
        X-Priority: 3
        X-Mailer: General-Mailer v.3
        MIME-Version: 1.0
        Content-Type: text/plain; charset="us-ascii"
        Content-Transfer-Encoding: 7bit

        Quote: I give it to you not that you may remember time, but that you might forget it now and then for a moment and not spend all your breath trying to conquer it. Because no battle is ever won he said. They are not even fought. The field reveals to a man his own folly and despair, and victory is an illusion of philosophers and fools. -- William Faulkner, The Sound and the Fury

  • Browsing not working in Windows 8

    - by Jonathan Perry
    I'm using Windows 8 Professional, installed over Windows 7 using the "Save my preferences and apps" installation option. Windows works great: apps are downloading and I can listen to online radio stations using the TuneIn Radio app, meaning the internet connection is alive. However, when I open a browser (either Chrome or IE10) and try to browse the internet, I get an "Unable to resolve DNS" error message. Prior to installing, internet browsing worked flawlessly, I must say. I'm using ESET NOD32 Antivirus, so I suspect that it might be interfering with the web connection now, but I'm not so sure. Internet options show that the PC is set to resolve DNS automatically. I don't know what to do. My other Win7 PCs on my wifi home network connect to the internet without any issues. If anyone can help me resolve this I'll be grateful :) Thanks
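
    When apps work but browsers cannot resolve names, the name-service stack is the usual suspect. A few standard commands to try from an elevated prompt (a sketch; these reset caches and the Winsock catalog, which security suites sometimes hook):

        ipconfig /flushdns
        nslookup google.com

        rem if nslookup succeeds but browsers still fail, reset Winsock
        netsh winsock reset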

  • Hyper-V Server 2003 instance using Internet Connection Sharing disconnects Remote Desktop to Hyper-V host

    - by Joseph Southwell
    I have a Windows Server 2003 R2 instance running in a Hyper-V instance on Windows 8. I have set up an internal switch that uses Internet Connection Sharing to get out to the internet. It works fine, except that when I try to run Windows Update on the Server 2003 instance, it disconnects my Remote Desktop session to the Windows 8 Hyper-V host. When I reconnect, it says Windows Update failed. I know that sounds crazy, but I have tested it over and over again. If I change the instance to use my external switch (I have an external switch defined on another network adapter), Windows Update works fine.

  • Firewalling a Cisco ASA Split tunnel

    - by dunxd
    I have a Cisco ASA 5510 at head office, and Cisco ASA 5505s in remote offices. The remote offices are connected over a split-tunnelled VPN; the ASA 5505s use "Easy VPN" client-type VPN in Network Extension Mode (NEM). I'd like to set firewall rules for the non-tunnelled traffic only. Traffic over the VPN to head office should not have any firewall rules applied. I might want to apply different firewall rules to different remote offices. All the documentation I have been able to find assumes the client VPN is a software endpoint, and all the configuration is done on the 5510. When using a Cisco 5505 as the VPN client, is it possible to configure any firewalling at the client end, or does it all have to come from the 5510? Are there any other issues to look out for when split-tunnelling a VPN by this method?
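
    For context, interface ACLs on an ASA apply to all traffic crossing that interface, so exempting tunnelled traffic has to be done explicitly, typically by permitting the head-office subnets before any deny rules. A hedged ASA-syntax sketch on the 5505 (subnets are placeholders, and Easy VPN NEM may restrict what a client-mode 5505 will accept):

        ! let head-office-bound traffic through untouched
        access-list INSIDE_IN extended permit ip any 10.0.0.0 255.0.0.0
        ! example rules for everything else (internet-bound)
        access-list INSIDE_IN extended deny tcp any any eq 25
        access-list INSIDE_IN extended permit ip any any
        access-group INSIDE_IN in interface inside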

  • Why does iChat Server keep connecting to proxy.eu.jabber.org?

    - by Tom Hamming
    I have OS X Server 10.6.5 running on a new Mac mini (server model), serving several functions, among which is iChat Server (iChat, and Pidgin on Windows, as clients). In the iChat log in Server Admin, I kept seeing entries about connecting to proxy.eu.jabber.org. The server is for our office network and I wasn't excited about external access to it, so I disabled server-to-server XMPP federation, and now the connections just time out. But why is it doing that in the first place? Sample log entry:

        (datetime) (servername) jabberd/resolver[portnum]: [_xmpp-server._tcp.proxy.eu.jabber.org] resolved to 208.68.163.220:5269 (300 seconds to live)

    then:

        sending dialback auth request for route '(full server hostname)/proxy.eu.jabber.org'

    A couple of minutes later, it comes back with:

        dialback for outgoing route '(full server hostname)/proxy.eu.jabber.org' timed out

  • Which is the preferred internet security + antivirus solution for Windows, with a good detection rate? [closed]

    - by metal gear solid
    Possible Duplicate: Free antivirus solutions for Windows
    Which is the best internet security + antivirus solution for Windows? Free/open-source or commercial, it doesn't matter; I need the best solution. Is Kaspersky the best, or is there another? http://www.kaspersky.com/kaspersky_internet_security
    Award-winning technologies in Kaspersky Internet Security 2010 protect you from cybercrime and a wide range of IT threats:
    * Viruses, Trojans, worms and other malware, spyware and adware
    * Rootkits, bootkits and other complex threats
    * Identity theft by keyloggers, screen capture malware or phishing scams
    * Botnets and various illegal methods of taking control of your PC or Netbook
    * Zero-day attacks, new fast-emerging and unknown threats
    * Drive-by download infections, network attacks and intrusions
    * Unwanted, offensive web content and spam

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:
    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world; Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen in a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question, "how do you enforce the non-negative balance rule", then boils down to:
    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo-entry stating the balance as of midnight or so. This lets you avoid doing math on the fly over too many transactions: you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions, though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized; they will not scribble on the same document at the same time.

    In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry that was written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all members of the replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance; the others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From then on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes: eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand; they have been trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding logic such that a query against the ledger never returns transactions newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1 ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted / reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted, but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
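
    To make the ledger approach concrete, here is a minimal mongo shell sketch of the three steps above (using the 2.6+ shell's cursor API; the collection and field names are illustrative, not from the original post):

        // step 1: one transfer = one document = one atomic write
        db.ledger.insert({ ts: new Date(), from: "A", to: "B", amount: 100 });

        // step 2: validate A's balance by summing all ledger movements
        var res = db.ledger.aggregate([
            { $match: { $or: [ { from: "A" }, { to: "A" } ] } },
            { $project: { delta: { $cond: [ { $eq: [ "$from", "A" ] },
                                            { $multiply: [ "$amount", -1 ] },
                                            "$amount" ] } } },
            { $group: { _id: null, balance: { $sum: "$delta" } } }
        ]).toArray();

        // step 3: if res[0].balance < 0, insert a compensating entry with
        // the same (or 1 ms later) timestamp to negate the transfer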

  • Migrating an Active Directory domain controller to AWS

    - by Xavier Hutchinson
    I am required to migrate an Active Directory server into AWS, along with a couple of other servers (SQL and IIS), to create a dev and test environment for our network/development. My plan at this time is to simply rebuild the Active Directory server in AWS from scratch, which is quite time-consuming indeed! I was wondering if anyone had a recommendation as to a better and more efficient approach to migrating a copy of a physical Active Directory server to the cloud? The server is Windows Server 2012. Thank you!
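
    One commonly suggested alternative to a from-scratch rebuild is to stand up a plain Windows instance in AWS, join it to the existing domain over a VPN, promote it to a domain controller, and let replication copy the directory; Install From Media (IFM) can cut the replication traffic. A hedged sketch of the IFM step, run on an existing DC (the path is a placeholder):

        ntdsutil "activate instance ntds" ifm "create sysvol full C:\IFM" quit quit

    The C:\IFM folder is then copied to the AWS instance and referenced during DC promotion.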

  • How to netboot Ubuntu running inside VirtualBox on a Mac Air

    - by murungu
    Having configured a virtual machine for Ubuntu in VirtualBox on my MacBook Air, I need to install the Ubuntu OS itself. I have selected the hard drive as the primary boot device and the network as the secondary boot device, so I am not prompted for an Ubuntu disk at boot time. It attempts to netboot but is unable to locate Ubuntu, and I cannot find anywhere in the configuration where I can explicitly specify where to find an Ubuntu image, so I assume it reverts to some default location and fails. Has anybody out there ever successfully installed Ubuntu on VirtualBox on their Mac Air? What steps did you take to get it right?
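
    Netbooting a VirtualBox guest requires a PXE/TFTP setup on the host; the far simpler route is to attach the Ubuntu installer ISO to the VM's virtual optical drive. A sketch with VBoxManage (the VM name, controller name, and ISO path are placeholders for your own setup):

        VBoxManage storageattach "Ubuntu" --storagectl "IDE" \
            --port 1 --device 0 --type dvddrive \
            --medium ~/Downloads/ubuntu-desktop-amd64.iso

        # make the DVD the first boot device for the install
        VBoxManage modifyvm "Ubuntu" --boot1 dvd --boot2 disk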

  • HTTP traffic through PIX VPN from outside site

    - by fwrawx
    I have a remote site with a website that only allows access from the outside IP assigned to our local PIX. I have users connecting to the local network using a VPN who need to be able to view this remote site. I don't think this works, because the packets want to come in and go out over the same (external) interface. So I'm looking for a way to make this work using the PIX, or by setting up a service on a server on the local network to act as a middleman for the HTTP requests. The remote site doesn't support setting up a VPN to our PIX, and the remote website is dishing out pages over a non-standard port. Can I use Squid or something similar to proxy just one site?
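
    Squid can be pinned down to a single destination. A minimal squid.conf sketch for proxying one remote site (the hostname and port are placeholders):

        # only allow requests for the one remote site
        acl remotesite dstdomain remote.example.com
        http_access allow remotesite
        http_access deny all

        # listen for the VPN users
        http_port 3128

    VPN clients then point their browser's proxy setting at the local server on port 3128, and the request leaves from the PIX's outside IP, which the remote site accepts.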

  • Home Server restore fails: cannot find boot device

    - by Tim Heuer
    I am using Windows Home Server to back up my PCs. I recently had a hard drive failure on one of my WHS-connected PCs and obtained an identical-sized/speed drive for my laptop. I used the latest Home Server restore CD and did the restore, and it said it completed successfully. Upon reboot, it says 'cannot find boot device' and lists all my devices (hard drive, CD, network boot), indicating no valid operating system was found. I booted using the Win7 repair disk, and while it doesn't see the operating system, it sees the drive, and if I go into a command prompt, I can see all my data on the drive. My laptop runs Windows 7 64-bit Ultimate. I've tried most everything I can think of. I'm a technical user (software developer), so I'm pretty aware of how things work (or should). I don't feel like I'm missing a simple step here.
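
    When the data is intact but the machine won't boot, the restore has usually missed the MBR/boot sector, or the partition isn't marked active. From the Win7 repair disk's command prompt, the standard repair sequence with the stock tools is (a sketch, not a guaranteed fix for this restore):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

        rem if needed, mark the system partition active with diskpart
        diskpart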

  • Mercurial mirror: abort: No such file or directory: http://[...]/00manifest.i

    - by Sridhar Ratnakumar
    I am trying to set up a daily mirror of a Mercurial repository (code.python.org in particular) within our local network, and serve it via Apache httpd. On the remote host that runs Apache, I did this:

        $ cd /var/www
        $ hg clone http://code.python.org/hg/trunk/

    On my MacBook, I ran:

        $ hg -v clone http://remote/trunk/
        (falling back to static-http)
        abort: No such file or directory: http://remote/trunk/.hg/store/00manifest.i

    Google does not show any relevant result for this particular error. I remember being able to set up Bazaar mirrors back in the day with a simple clone. Doesn't Mercurial work like that? How do I set up a mirror that can further act as a clone URL?
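
    Two hedged notes on what often trips this up: plain Apache only works with Mercurial's static-http fallback when the URL maps exactly onto the on-disk .hg layout, and the mirror has to be refreshed by pulling, not re-cloned. A sketch of one workable setup (hostnames and paths are placeholders):

        # on the Apache host: keep the mirror fresh once a day (cron)
        cd /var/www/trunk && hg pull

        # from a client: be explicit about the static-http scheme
        hg clone static-http://remote/trunk/

    For anything beyond light use, serving through hgweb (CGI/WSGI) or 'hg serve' is the supported route rather than raw static files.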

  • What's more cost-effective: hosting your web startup on FOSS or Windows?

    - by user37899
    Hi, not coming from the Windows world, I'm confused about licensing; I think my knowledge may be out of date. Before we gave up on Windows web servers (IIS 2), we used to have to pay Client Access Licences, which worked out quite expensive. Is it cheaper to host thousands of users on Windows than using free, open-source software tools? This post suggests I can pay $15 a month for unlimited users: http://serverfault.com/questions/124329/network-load-balancing-efficience-and-limits. I certainly hope to get an unbiased view; I am a professional and use the right technology for the right job. I hope I am not feeling the wrath of a Windows (or Linux, for that matter) fanboy. Perhaps a Microsoft-certified licensing person can clear this up. Should I be recommending Windows servers and products over LAMP to startups? Cheers!

  • How to Set Up a MongoDB NoSQL Cluster Using Oracle Solaris Zones

    - by Orgad Kimchi
    This article starts with a brief overview of MongoDB and follows with an example of setting up a MongoDB three-node cluster using Oracle Solaris Zones. The following are benefits of using Oracle Solaris for a MongoDB cluster:
    • You can add new MongoDB hosts to the cluster in minutes instead of hours using the zone cloning feature. Using Oracle Solaris Zones, you can easily scale out your MongoDB cluster.
    • In case there is a user error or software error, the Service Management Facility ensures the high availability of each cluster member and ensures that MongoDB replication failover will occur only as a last resort.
    • You can discover performance issues in minutes rather than days by using DTrace, which provides increased operating system observability. DTrace provides a holistic performance overview of the operating system and allows deep performance analysis through cooperation with the built-in MongoDB tools.
    • ZFS built-in compression provides optimized disk I/O utilization for better I/O performance.
    In the example presented in this article, all the MongoDB cluster building blocks will be installed using the Oracle Solaris Zones, Service Management Facility, ZFS, and network virtualization technologies. Figure 1 shows the architecture.
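
    For a flavor of the cloning workflow the first bullet relies on, here is a hedged sketch of cloning a configured "gold" zone into a new MongoDB host (zone names and paths are placeholders, and the template zone must be halted before cloning):

        # export the template zone's configuration and adapt it
        zonecfg -z mongo-gold export -f /tmp/zone.cfg
        # (edit the zonepath and network settings in /tmp/zone.cfg)
        zonecfg -z mongo2 -f /tmp/zone.cfg

        # clone the ZFS-backed zone root (fast, copy-on-write) and boot
        zoneadm -z mongo2 clone mongo-gold
        zoneadm -z mongo2 boot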

  • Internet is far slower in Ubuntu than Windows 7 on dual-booted machine

    - by Tim
    Edit: I'll leave the original post as-is, but after further investigation, it appears that the problem is something to do with my wi-fi card. Speeds are normal when I connect via cable.
    Edit 2: The problem was solved. It was something to do with the wireless card drivers.
    I normally use Windows 7 on my laptop and have internet speeds that are normally about 15-20 Mb/s. I have recently dual-booted with Ubuntu 12.10, and have noticed that internet speeds are drastically slower in Ubuntu. When tested, speeds range from 0.2-2 Mb/s, although occasionally they are significantly faster than that, or even stop completely for short periods of time. I've also noticed that when first booting into Ubuntu, speeds start fairly fast and drop to incredibly slow within a few seconds to a few minutes. There's still some possibility that the issue may be with my ISP, as things seem slower than usual even in Windows, but I suspect that it is related to Ubuntu, as things are far slower in Ubuntu than in Windows. I'm wondering, what could be the cause of this? Potentially relevant information:
    - I've dual-booted before on this machine with earlier versions of Ubuntu (different ISP at the time) with no problem.
    - ISP: Rogers (major Canadian ISP)
    System info (Gateway NV53a laptop):
    - Operating system: MS Windows 7 Home Premium 64-bit
    - CPU: AMD Phenom II N970, Caspian 45nm technology
    - RAM: 6.00 GB dual-channel DDR3 @ 664MHz (9-9-9-24)
    - Motherboard: Gateway SJV51_DN (Socket S1G4)
    - Graphics: Generic PnP monitor (1366x768@60Hz), ATI Mobility Radeon HD 4250 (Acer Incorporated [ALI])
    - Hard drive: 733GB TOSHIBA MK7559GSXP ATA Device (SATA)
    - Networking: connected through Wi-Fi, Atheros AR5B97 Wireless Network A
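
    Since the asker traced this to the wireless driver: the AR5B97 is normally driven by ath9k, and a frequently reported workaround for this fast-then-crawling pattern is disabling hardware crypto and/or power saving. A hedged sketch (the module options are real ath9k parameters, but whether they help varies by card and kernel; wlan0 is a placeholder interface name):

        # confirm which driver is bound to the card
        lspci -k | grep -A3 -i network

        # try disabling hardware encryption for ath9k, then reload the module
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k

        # disable wireless power management for the session
        sudo iwconfig wlan0 power off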

  • Windows Virtual PC File Copy from host very slow

    - by Shiv Kumar
    I have a Windows 7 desktop on which I've installed Windows Virtual PC and a Windows 7 instance. I also have a virtual XP instance on the same host. The problem I am having is that copying files from the host to the virtual machine is dog slow; I'm talking 17 KB/sec. The host machine has a gigabit NIC. While using the XP virtual instance to do the same, I didn't notice a huge difference, but on the Windows 7 virtual instance the time is really slowing me down. Is there something I need to do (settings) to fix this? I've attached an image of the Resource Monitor (of the virtual Windows 7 instance) that shows my network traffic going in bursts rather than staying relatively steady. The files are in a "public" folder on my host machine.
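
    A workaround often suggested for bursty, crawling SMB copies into Virtual PC guests is turning off TCP receive-window autotuning inside the guest. A hedged sketch; run in an elevated prompt in the Windows 7 guest, and note this is a generic tuning knob, not a documented Virtual PC fix:

        netsh interface tcp set global autotuninglevel=disabled

        rem restore the default later if it makes no difference
        netsh interface tcp set global autotuninglevel=normal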
