Search Results

Search found 19018 results on 761 pages for 'indicator network'.


  • Unable to open google.com inside Ubuntu Linux

    - by Anuradha
    I have installed an Ubuntu Linux VM on my Win XP box. I can open http://google.com on Win XP, but when I log in to Ubuntu and browse to the same site, I get the error "Server not found". The network settings on the Ubuntu VM are: Adapter 1 attached to Bridged Adapter. I tried NAT as well, but nothing seems to work. (I am not in China; google.com is merely an example. We have a test website that also cannot be reached from inside Ubuntu.)
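
    With a bridged adapter, "Server not found" usually points at DNS rather than routing. A minimal diagnostic sketch to run inside the guest (8.8.8.8 is just an example of a public IP to ping):

        $ ping -c 3 8.8.8.8        # does raw IP connectivity work?
        $ nslookup google.com      # does name resolution work?
        $ cat /etc/resolv.conf     # which resolver is the guest using?

    If the ping succeeds but the lookup fails, point /etc/resolv.conf at a nameserver that is actually reachable from the bridged network (your router or your company's DNS).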

    Read the article

  • How to get Atheros ar242x wireless adapter working under Debian Linux?

    - by Mark
    Does anybody know how to get the Atheros ar242x wireless adapter working under Debian Linux (5.0.2 and/or 5.0.3)? My Debian live CDs and install CDs both don't like this card at all. Curiously, it seems to work on other, Debian-based, Linuxes. Is this a free/non-free driver issue? I know Debian gets mardy about that. Although for what it's worth, the live CD doesn't seem to detect my wired LAN connection either... Specifically this is on a Samsung R610 laptop (some versions of which seem to have an Intel wireless adapter - this one definitely doesn't!). I've tried all sorts of things, but obviously on a live CD installing software is limited. I've also tried tinkering with network config files, kernel modules, etc., but to no avail.
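
    Before fighting driver packages, it is worth confirming whether the kernel sees the card at all; the AR242x is normally handled by the in-kernel ath5k module (older guides use the out-of-tree madwifi driver instead). A diagnostic sketch:

        $ lspci -nn | grep -i atheros       # confirm the card and its PCI ID
        $ dmesg | grep -i -e ath -e wlan    # has any driver claimed it?
        $ sudo modprobe ath5k               # try loading the mainline driver

    If modprobe fails because the module doesn't exist, the live CD's kernel may simply be too old for this chip.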

    Read the article

  • What is recommended minimum object size for gzip benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the web server. Google recommends: "Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger." We serve our content through Akamai, using their network as a proxy and CDN. What they've told me: "Following up on your question regarding the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? Google recommends gzipping more aggressively, and that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are under 860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) the overhead of compressing an object under 860 bytes outweighs the performance gain; (2) objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is the 860-byte limit due to packet size the end of this reasoning? Why would high-traffic sites push this lower, closer to the 150-byte limit - just to save on bandwidth costs, or is there a performance gain in doing so?
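
    The overhead claim is easy to verify locally: the gzip format alone costs about 18 bytes of header and trailer, plus deflate block overhead, so tiny payloads really can come out larger. A quick sketch:

        $ printf 'hello' | gzip | wc -c              # 5-byte input grows to ~25 bytes
        $ head -c 120 /dev/urandom | gzip | wc -c    # 120 incompressible bytes also grow
        $ yes | head -c 1000 | gzip | wc -c          # 1000 repetitive bytes shrink to ~30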

    Read the article

  • Fedora vs Ubuntu vs Debian to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs Debian vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora or Debian. I have been able to get Subversion set up and hosted via Apache over https, and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla - I have run into some issues getting all the Perl scripts to run successfully. So my questions are: 1) Will Bugzilla install more easily on Fedora or Debian? Can I just install a package instead of having to download the tar.gz file, untar it, run Perl scripts, etc.? 2) Is Fedora or Debian considered the better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2, and to act as a file and print server for my home network.
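
    On the packaging question: Debian has historically shipped Bugzilla as a package (bugzilla3 in the Lenny era), which sidesteps the tarball-and-Perl-scripts dance entirely. A sketch, assuming your release still carries the package:

        $ sudo apt-get install bugzilla3    # pulls in the Apache and Perl dependencies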

    Read the article

  • PXE boot Ubuntu server - corrupt packages

    - by Stu2000
    I have set up a cobbler PXE boot server and managed to get CentOS 5.8 to install fully automatically. Unfortunately, with Ubuntu 12.04-server-i386 it stops midway through with a message stating that packages are corrupt. I tried following this tip to unzip the Packages.gz file, which results in an empty Packages file with nothing in it. Other people suggested a touch command, which produces essentially the same thing: an empty Packages file. That gets me a different message: "Couldn't retrieve dists/precise/restricted/binary-i386/Packages. This may be due to a network....." Does anyone know how to work around this issue? Hitting continue before applying the tip/workaround resulted in Ubuntu installing fine, but I need to be able to provide no manual input. Any advice appreciated, Stu
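
    One hedged avenue before editing Packages files by hand: check whether the mirror tree cobbler serves is actually intact, and let cobbler rebuild its state (commands assume a stock cobbler install):

        $ zcat dists/precise/restricted/binary-i386/Packages.gz | head   # is the index really empty?
        $ sudo cobbler reposync    # re-pull any repositories cobbler manages
        $ sudo cobbler sync        # regenerate the PXE/TFTP configuration

    An empty Packages file for 'restricted' can also simply mean the mirror you imported never carried that component, in which case dropping 'restricted' from the preseed's apt setup may be the simpler fix.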

    Read the article

  • Who are the SOA experts? Specialization recognized by customers

    - by Jürgen Kress
    You are looking for SOA experts to deliver a successful project? Contact our Oracle SOA Specialized partners - you can recognize them by the logo, the plaques, and in the solutions catalog.

    Plaques: We would like to offer you a nice SOA Specialization plaque with your logo to prove your success. If you are a SOA Specialized partner and would like to request the plaque, please send Brigitte an e-mail with the following information: partner name, partner logo (preferably an eps file), and partner status (gold or platinum). We recommend mounting the plaque at your office reception; in addition, you can use the SOA Specialization logos on your website. Download logo: Gold & Platinum.

    Solutions Catalog: Please make sure that your Oracle Partner Network administrator adds your achieved Specializations to the Oracle Solutions Catalog. On our website www.oracle.com/soa we have started to promote "find a Specialized Partner" for partners who have added their Service Oriented Architecture Specialization to the solutions catalog. For administration, please visit "manage solutions catalog" within OPN. For a detailed tutorial and an FAQ, please visit http://tinyurl.com/Catalogorcl.

    For more information on SOA Specialization, please make sure that you read the SOA & Application Grid Specialization Guide and the SOA & Application Grid Specialization Checklist.

    Technorati Tags: SOA Specialization, OPN, Oracle, SOA, Jürgen Kress, plaques, solutions catalog

    Read the article

  • P2V using Acronis True Image Home 10 and Windows 7

    - by Anthony
    I have a full system image made with Acronis True Image Home 10 and want to run it as a virtual machine on Windows 7 Professional. I have created a virtual machine, but Windows Virtual PC doesn't allow access to a USB external hard disk when booting from the Acronis Recovery CD. I've copied the backup onto the host machine and I can access it via the network using the Acronis boot CD, but I'm wondering if there is an easier way. Does any other free virtual machine software support USB devices during boot (i.e. so that I could restore the backup image directly from the USB hard disk)?

    Read the article

  • Can't find nfsbooted for Kerrighed PXE boot with Ubuntu Lucid Server

    - by Pengin
    I'm following installation guides for PXE booting and Kerrighed, and I can't find the package nfsbooted for Ubuntu 10.04. Where did it go? Context: at work I have access to 8 mini-ITX PCs and am trying to build a cluster. My plans include trying Condor, GridGain, and Hadoop, and recently Kerrighed has caught my eye. (I realise these are all for different kinds of things; I'm just evaluating.) Ideally, I'd like to have all the nodes network-boot from a single server, since that seems so much easier to manage, plus I can 'borrow' additional PCs for a while without touching their HD. I've been getting on great with Ubuntu Lucid Server (10.04), trying to follow the only guides I can find to get PXE booting (and ultimately Kerrighed) to work. This guide is for Ubuntu 8.04 and this one is for Debian. They both refer to a package I can't seem to find: nfsbooted. Has this package been replaced? Am I doing something daft?

    Read the article

  • I Blame SNMP!

    - by brendonpage
    Anyone who has been reading my blog will have noticed that I have deviated slightly from my original post plan! This post was meant to be about uploading files in Silverlight, so what happened, you may ask? Well, last weekend I had some friends over for a LAN party and one of them brought a managed switch which had just been purchased for work. He proceeded to show me how cool it was, how he planned on improving his work network, and how it could be monitored remotely via SNMP. After this explanation he started to google for a free SNMP graphing tool. After a few hours of hearing disgruntled mutterings from him, I asked what was wrong, and he proceeded to rant about how he couldn't find any tools that suited his needs. It was at this point I thought the most dangerous thing a programmer can ever think: "I wonder how hard it would be to make one". Of course the answer at the time is always "It can't be that hard", and so started my journey into SNMP. I am still in the early stages of this journey so I don't have too much to report yet, but once I have finished the first version of my SNMP graphing tool I will definitely be posting about it! For now, if any of you are interested in doing SNMP development in C#, I would recommend looking at the #SNMP project on CodePlex (http://sharpsnmplib.codeplex.com/); it is the SNMP library I have decided to use, and thus far it works beautifully.

    Read the article

  • I just ordered a 70/10 line, and I think I need a new router?

    - by data_jepp
    Before, I had a 25/5 line and the standard 802.11n router did just fine. Now it doesn't do the job. An online speed test reads 82, so the line is delivering. But my laptop is getting less than 30 in my room. My laptop has the following WiFi card: http://www.intel.com/content/www/us/en/wireless-products/centrino-advanced-n-6205.html What is this talk about 2.4 and 5 GHz? Can my laptop connect over both bands at once? And would that let me use the full 70 Mb over WiFi? Hope it's OK to ask network questions here.

    Read the article

  • Mercurial mirror: abort: No such file or directory: http://[...]/00manifest.i

    - by Sridhar Ratnakumar
    I am trying to set up a daily mirror of a Mercurial repository - code.python.org in particular - within our local network, and to serve it via Apache httpd. On the remote host that runs Apache, I did this:

        $ cd /var/www
        $ hg clone http://code.python.org/hg/trunk/

    On my MacBook, I ran:

        $ hg -v clone http://remote/trunk/
        (falling back to static-http)
        abort: No such file or directory: http://remote/trunk/.hg/store/00manifest.i

    Google does not show any relevant results for this particular error. I remember back in the day being able to set up Bazaar mirrors with a simple clone. Doesn't Mercurial work like that? How do I set up a mirror that can further act as a clone URL?
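
    For a mirror, the static-http fallback requires Apache to expose the raw .hg store at exactly the clone URL; an often simpler setup is to refresh the clone on a schedule and serve it with Mercurial's own server so clients can clone normally. A sketch (paths and port are examples):

        # nightly refresh, e.g. from cron
        $ cd /var/www/trunk && hg pull
        # serve over HTTP with Mercurial's built-in server
        $ hg serve -d -p 8000 --prefix trunk

    If it must go through Apache proper, the hgweb CGI/WSGI script is the usual route, rather than pointing Apache at the clone directory.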

    Read the article

  • The NEW Oracle Enterprise Manager Extensibility Exchange

    - by Joe Diemer
    Oracle Enterprise Manager continues to expand its ecosystem with the NEW Extensibility Exchange! The Exchange offers a searchable listing of Enterprise Manager entities. Today it's stocked with plug-ins and connectors for Enterprise Manager 12c and 11g. Anyone - partners, customers, ACE community members, anyone - can post an entity, subject to approval of course. So in addition to plug-ins and connectors, the Exchange will have best practices, deployment procedures, templates, and essentially any Enterprise Manager entity that's relevant. The Exchange provides Development Resources to guide contributors in the creation of plug-ins and connectors. A Community Resources page features plug-ins validated through the Oracle Validated Integration program, as well as some other contributions important to customers. You can also discover ways to get more involved with Enterprise Manager through the user and partner communities. The Exchange was announced in the October 2nd Enterprise Manager Partner Press Release and is being presented at Oracle OpenWorld 2012 during the following sessions:

        • "Using Oracle Enterprise Manager to Manage Your Own Private Cloud" - General Session, Tuesday Oct 2nd
        • "Managing Heterogeneous Environments with Oracle Enterprise Manager" - Conference Session, Tuesday Oct 2nd
        • "Using Management Already Built into Oracle Products: Oracle Enterprise Manager" - Oracle Partner Network Exchange Session, Wednesday Oct 3rd

    Check it out at http://www.oracle.com/goto/emextensibility, and let us know what you think by posting a comment below or clicking the "Forum" button at the Exchange itself.

    Read the article

  • High availability for Windows Service under Windows Server 2003

    - by empi
    Hi. I have the following situation: I need to deploy a Windows service that listens for incoming requests on a TCP port (basically a WCF service). I have a high-availability requirement: the service must be deployed on two servers, and if the service stops (only the service, not the whole server) on one server, all requests must be redirected to the second one. To me it looks like a basic failover scenario. How can I achieve this on Windows Server 2003? Should I use Microsoft Cluster Service or Network Load Balancing? The important part is that the process of swapping the servers must be invisible to the clients (the client must see only a single address / single host or domain name). Thanks in advance for your help.

    Read the article

  • RAID Read/Write Speed Gradually Slows

    - by Nalandial
    This is actually a server at home, but I felt it was sufficiently complicated not to put it on Super User, and it could easily apply to a professional situation. I have a file server running Debian (Lenny 5.0.4) with an XFS LVM on top of a RAID 5, with the OS drive separate from the RAID. It's also running Apache, Samba, and PostgreSQL. Side note: before anyone asks, I'm using RAID 5 because I get more bang for the buck on raw drive space and still have some fault tolerance. When the box is freshly started (after a shutdown or reboot), reading/writing to its Samba share maxes out the gigabit network connection. Over time this slowly degrades, eventually becoming < 10 MB/s; however, when rebooted, the speed returns to maxing out the connection. Why is this happening, and is there a way to 'clear out' whatever's causing it without taking the server down? Thanks in advance!
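
    Since a reboot restores full speed, memory pressure or cache behaviour is a prime suspect. A few non-disruptive things to look at while the server is slow (a diagnostic sketch; iostat comes from the sysstat package):

        $ iostat -x 5          # are the member disks saturated (%util)?
        $ free -m              # how much RAM is cache vs. applications?
        $ cat /proc/mdstat     # is the array degraded or mid-resync?
        # blunt experiment: drop the page/dentry/inode caches
        $ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'

    If dropping the caches restores the speed without a reboot, that narrows the hunt considerably.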

    Read the article

  • What's the fastest automatic way to transfer 2GB of data between 2 PCs every night?

    - by phan
    While it's fast (less than 2 minutes), I hate having to copy files from PC #1 onto a USB stick and then manually pop it into PC #2 to copy the files over. Dropbox is too slow at uploading and then downloading (syncing) 2 GB; it could take hours. Copying 2 GB over the network is also slow, because we're dealing with 10,000 little files totalling 2 GB, not one giant 2 GB file. Not sure why, but dealing with 10,000 little files makes the copy process much longer (presumably the per-file overhead of the copy protocol dominates). Is there any other method that I'm missing? Any ideas? I'm using Win7 on both PCs. Edit: these files change every single night.
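
    Since both machines run Windows 7, one hedged option is robocopy, which ships with the OS, copies only files that differ from the destination, and can be driven by a nightly Scheduled Task. A sketch (share and path names are examples):

        robocopy C:\data \\PC2\share\data /MIR /R:1 /W:1 /LOG:C:\nightly-copy.log

    /MIR mirrors the tree, including deletions. If genuinely all 10,000 files change every night, packing them into a single archive before the copy is the other way to dodge the per-file overhead.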

    Read the article

  • Netgear WNR1000 WiFi speed

    - by Kamil Klimek
    I have a Netgear WNR1000 150N, a MacBook Pro 13" with a Broadcom BCM43xx 1.0, and a 60 Mbps Internet connection. When I connect through a cable I easily get around 60 Mbps. Over WiFi it manages only about 32 Mbps at best. Any ideas why that is? Is it a limitation of my router, or maybe of my WiFi card? If it's the router's fault, what router would you suggest? The best router would be one with a USB port for an external hard drive. Forgot to add: in my connection-details screenshot, "Szybkosc transmisji" means "transmission speed".

    Read the article

  • SMTP server problem

    - by ram
    Hi, our requirement is to send weekly newsletters to our website customers, for which we want a locally hosted SMTP server in our office. We are not using the SMTP server provided by our website hosting provider, as we want to reduce network traffic and avoid IP blocking due to bulk mail. We send the newsletters weekly from our local SMTP server, but for some reason some emails go to spam, some never reach the customers, and sometimes there are bounce messages telling us to follow bulk-email guidelines (mainly from Gmail). Can you please suggest how to solve this? I would also like to know what kind of technology LinkedIn or banks generally use to send notification emails to all their customers; when they send bulk email, it always reaches the inbox without any problem. I want to implement the same kind of solution for my website. Please advise. Thank you very much in advance.
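
    Independent of which SMTP server you run, inbox placement for bulk mail mostly hinges on DNS-level authentication: a reverse-DNS (PTR) record matching the sending hostname, an SPF record, and ideally DKIM signing, plus honouring unsubscribes and throttling the send rate. A sketch of an SPF TXT record (domain and IP are examples):

        example.com.   IN TXT   "v=spf1 ip4:203.0.113.25 -all"

    Large senders rely on the same mechanisms, typically from dedicated IPs with established sending reputation or through a commercial bulk-mail provider.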

    Read the article

  • Why is my WiFi connection slower than Ethernet even though the bandwidth should be saturated?

    - by supercheetah
    I'm wondering why my wireless connection is slower than my wired connection for traffic going to the outside world (so, not files being transferred within the network). The local link, wired or wireless, should be faster than the outside connection, which, I would think, means that downloading something like an ISO or other large file from the Internet should be the same either way, since the Internet link should saturate first. Does it have something to do with the encryption (WPA)? Could it have something to do with MTU, since the MTU for Ethernet can be in the range of 1500 to 9000 bytes, versus 2304 bytes for 802.11? Do wireless packets have to be buffered, whereas this wouldn't be an issue with Ethernet? What's the math behind the difference?

    Read the article

  • Disabling networkmanager for a specific interface

    - by bdonlan
    I'd like to do some experimentation with hostap without disabling my primary wireless interface. How do I tell NetworkManager to keep its hands off a specific interface or interfaces while allowing it to continue managing all other interfaces normally? I'm using Ubuntu 9.04. (Wasn't sure if this should go on superuser or serverfault, as NetworkManager isn't much of a 'server' tool - if it belongs on serverfault please feel free to move it.) Edit: I've tried adding this to /etc/network/interfaces:

        allow-hotplug wlan2
        iface wlan2 inet static
            address 192.168.49.1
            netmask 255.255.255.0

    But this has no apparent effect, even after restarting NetworkManager. Here's my /etc/NetworkManager/nm-system-settings.conf:

        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        managed=false

    Edit[2]: Looks like I needed to restart nm-system-settings, then NetworkManager.
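
    One more hedged option, in case the ifupdown plugin keeps ignoring the stanza: the keyfile plugin can blacklist a device outright by MAC address. A sketch for nm-system-settings.conf (replace the MAC below with wlan2's real one, then restart NetworkManager):

        [keyfile]
        unmanaged-devices=mac:00:11:22:33:44:55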

    Read the article

  • DNS on Red Hat - rndc: no server specified and no default

    - by Syahmul Aziz
    Hi all. The error is "rndc: no server specified and no default"; the original post included screenshots of the error and of my named.conf and zone files, which have not survived here. After applying alveso's suggestion below, I think the error is gone, but I still can't ping my own domain www.p0864868.com (10.0.0.1), nor can I resolve it with host or nslookup. I made the corresponding changes to my named.conf and resolv.conf as well. Please assist; thank you in advance. Progress 2: turned on query logging by typing "rndc querylog" and watched the output while pinging p0864868.com. Progress 3: changed the permissions of 10-0-0.zone and p0864868.zone to 644, owner named:named. Still can't ping www.p0864868.com or execute the host command; it fails with something like "network unreachable", and I don't understand what address it is referring to.
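
    Before chasing rndc further, it may help to let BIND's own checkers validate the config and zone files, and then to query the server directly (a diagnostic sketch; the zone file path is an assumption based on a stock Red Hat layout):

        $ named-checkconf /etc/named.conf
        $ named-checkzone p0864868.com /var/named/p0864868.zone
        $ dig @10.0.0.1 www.p0864868.com     # ask this server explicitly
        $ tail -f /var/log/messages          # watch the query-log output

    A "network unreachable" from ping usually means resolution already failed, so dig against 10.0.0.1 separates "BIND isn't answering" from "the box isn't using BIND as its resolver".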

    Read the article

  • Huawei e303c data-card not working on Ubuntu 11.04?

    - by Umashankar
    Cheers to you. I have a problem making a Mobile Broadband connection in Ubuntu 11.04 using a 'Huawei e303c' USB data-card. I'm using a Tata Docomo 3G SIM card (India, circle: Maharashtra). My observations: 1) I installed the device's driver, 'Mobile Partner for Linux' (which came with the device), but it does not detect my device. 2) In Network Manager, adding a Mobile Broadband connection does not detect the device either (with or without the driver installed). 3) I tried tools like usb_modeswitch, gnome-ppp, wvdial, and sakis3g and followed their guidelines; these didn't work either. 4) Without the driver, the system identifies the device (a Mobile Partner icon comes up, which leads to the driver setup files), but after installing the driver, nothing comes up there. 5) In all the above cases, running 'lsusb' shows the connected data-card (as 'VENDOR_ID:PRODUCT_ID Huawei Technologies Ltd.'). This is my problem; please suggest a solution to get my device connected. -Umash
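
    Sticks like this usually enumerate first as a virtual CD-ROM and have to be switched into modem mode before any driver can bind. A heavily hedged usb_modeswitch sketch - the product ID below is a placeholder; read the real one from lsusb (Huawei's vendor ID is 12d1):

        $ lsusb | grep -i huawei                      # note the ID, e.g. 12d1:xxxx
        $ sudo usb_modeswitch -v 0x12d1 -p 0xXXXX -H  # -H is the Huawei switch in older releases

    Newer usb_modeswitch versions drive Huawei devices via -J or a config file under /etc/usb_modeswitch.d instead, so check your version's man page.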

    Read the article

  • pip install very slow through VirtualBox

    - by AJP
    pip install --exists-action=w -r requirements.txt is very, very slow through VirtualBox. Any suggestions on how to diagnose and fix this? Would seeing the Vagrantfile be useful? VirtualBox 4.2.12 (can't upgrade to .14 as it doesn't work). Vagrant 1.0.7. Host machine: Mac OS X 10.7.5 (build 11G63b). The Vagrantfile contains:

        Vagrant::Config.run do |config|
          config.vm.box = "precise64"
          config.vm.customize ["modifyvm", :id, "--memory", 2048]
          config.vm.box_url = "http://files.vagrantup.com/precise64.box"
          config.vm.network :hostonly, "33.33.33.21"
          config.vm.forward_port 5000, 5000
          config.vm.forward_port 5555, 5555
          config.vm.share_folder "v-root", "/vagrant", "./"

          Vagrant::Config.run do |config|
            config.vm.provision :shell, :inline => "VENV=/usr/local/venv bash /vagrant/setup_env.sh"
          end
        end

    Normal download speed is only about 5 times slower, at 0.8 MB per second versus 4 MB per second (as judged by curling a 50 MB file from S3), but pip install takes about 20 times longer in the VM (about 40 minutes) versus 2 on the Mac.
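
    A common culprit for painfully slow package downloads under VirtualBox NAT is DNS resolution inside the guest. VirtualBox can be told to use the host's resolver instead, which often fixes exactly this symptom (a hedged sketch; run it with the VM powered off, and substitute your VM's name):

        $ VBoxManage modifyvm precise64 --natdnshostresolver1 on
        $ VBoxManage modifyvm precise64 --natdnsproxy1 on

    The equivalent can go into the Vagrantfile as another config.vm.customize ["modifyvm", :id, ...] line so the setting survives VM rebuilds.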

    Read the article

  • Can't send using Postfix from an external IP address

    - by daniel
    I have Postfix set up as a satellite, listening on port 587. I can send email to the outside world through the Postfix (Ubuntu) box from the local network with no problems, but when I try to connect to the box from an external IP and send mail, it spits back a "554 5.7.1 Relay access denied" error. I can telnet to it fine; I just can't send mail. This is my main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_use_tls = no
        myhostname = cotiso-desktop
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mydomainname.com, cotiso-desktop, localhost.localdomain, localhost
        relayhost = smtp.mydomainname.com
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    There is no security set up yet; I'm just trying to get it working first. Any ideas? Thanks in advance.
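
    "Relay access denied" from outside is expected until authenticated clients are allowed to relay: the config above enables SASL only on the smtp_ (outbound client) side, not the smtpd_ (inbound server) side. A hedged sketch of the usual main.cf additions (these also require a SASL backend such as Cyrus or Dovecot to be configured):

        smtpd_sasl_auth_enable = yes
        smtpd_sasl_security_options = noanonymous
        smtpd_recipient_restrictions = permit_mynetworks,
            permit_sasl_authenticated, reject_unauth_destination

    With that in place, external clients authenticate on 587 and are then permitted to relay; anonymous outsiders still get the 554.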

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Do neither.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen in one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more accurate to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry written. No funky states. Again, writing to the ledger *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business.

    There are several other write concerns, varying from flushing the write to the disk of the writable instance, through flushing to disk on several members of the replica set, to a majority or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand; they have been trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never nets a transaction newer than, say, 15 minutes old whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1 ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
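
    A minimal mongo-shell sketch of the ledger idea described above (collection and field names are illustrative, not from the original post): each transfer is an append, and validation is a sum over A's entries.

        // append the transfer - one document, one atomic write
        db.ledger.insert({ ts: new Date(), from: "A", to: "B",
                           amount: 100, validated: false })

        // validation: A's net balance = credits minus debits
        db.ledger.aggregate([
          { $match: { $or: [ { from: "A" }, { to: "A" } ] } },
          { $group: { _id: null, balance: { $sum: {
              $cond: [ { $eq: [ "$from", "A" ] },
                       { $multiply: [ "$amount", -1 ] },  // outgoing: subtract
                       "$amount" ] } } } }                // incoming: add
        ])
        // if balance < 0, append a compensating entry rather than deleting anything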

    Read the article

  • Windows Firewall allows connections from any IP regardless of a rule that only allows a specific IP

    - by Pierre-Alain Vigeant
    I have configured Windows Firewall to block (the default) incoming connections on the public profile. I have created a rule for a port (in this case, SQL Server) that explicitly states that only my office's static IP is allowed. If I test from my office, I am able to connect to the port. I was expecting that anybody outside the office would not be able to connect, but this is not the case: I asked a friend to telnet to the port to see if it would reply, and it does, even though he's not on my network. I am a bit confused here. Shouldn't it block everybody but the given IP? Is my server completely unsecured?
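
    Two things worth checking with the built-in netsh (a diagnostic sketch; the rule name is an example): which profile the incoming connection actually lands on, and what the rule's effective remote-IP scope is. Windows Firewall allows a connection if any enabled allow rule matches, so a second, broader rule for the same port (for instance one created by the SQL Server installer with scope "Any") would explain what you are seeing.

        netsh advfirewall show currentprofile
        netsh advfirewall firewall show rule name="SQL Server" verbose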

    Read the article
