Search Results

Search found 19471 results on 779 pages for 'network troubleshooting'.

  • Recommended design pattern to handle multiple compression algorithms for a class hierarchy

    - by sgorozco
    For all you OOD experts: what would be the recommended way to model the following scenario? I have a class hierarchy similar to this one:

        class Base { ... }
        class Derived1 : Base { ... }
        class Derived2 : Base { ... }
        ...

    Next, I would like to implement different compression/decompression engines for this hierarchy. (I already have code for several strategies that best handle different cases, like file compression, network stream compression, legacy system compression, etc.) I would like the compression strategy to be pluggable and chosen at runtime, however I'm not sure how to handle the class hierarchy. Currently I have a tightly-coupled design that looks like this:

        interface ICompressor {
            byte[] Compress(Base instance);
        }

        class Strategy1Compressor : ICompressor {
            byte[] Compress(Base instance) {
                // Common compression guts for Base class
                if( instance is Derived1 ) {
                    // Compression guts for Derived1 class
                }
                if( instance is Derived2 ) {
                    // Compression guts for Derived2 class
                }
                // Additional compression logic to handle other class derivations
            }
        }

    As it is, whenever I add a new derived class inheriting from Base, I have to modify all compression strategies to take this new class into account. Is there a design pattern that allows me to decouple this, so I can easily introduce more classes into the Base hierarchy and/or additional compression strategies?
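
    One direction worth sketching (an illustration of the Visitor pattern, not a definitive answer): double dispatch moves the per-type branching out of each strategy and into the classes themselves. The trade-off is real rather than free - adding a new Derived class now means adding one Visit overload to the visitor interface, so that change is centralized, not eliminated:

        interface ICompressorVisitor {
            byte[] Visit(Derived1 d);
            byte[] Visit(Derived2 d);
        }

        abstract class Base {
            public abstract byte[] Accept(ICompressorVisitor compressor);
        }

        class Derived1 : Base {
            public override byte[] Accept(ICompressorVisitor compressor) { return compressor.Visit(this); }
        }

        class Derived2 : Base {
            public override byte[] Accept(ICompressorVisitor compressor) { return compressor.Visit(this); }
        }

        // Each strategy implements the visitor once; no "is" checks remain.
        class Strategy1Compressor : ICompressorVisitor {
            public byte[] Visit(Derived1 d) { /* Derived1-specific guts */ return new byte[0]; }
            public byte[] Visit(Derived2 d) { /* Derived2-specific guts */ return new byte[0]; }
        }

    Picking a strategy at runtime is then just passing a different visitor: byte[] data = instance.Accept(new Strategy1Compressor());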

  • Can't send using Postfix from an external IP address

    - by daniel
    I have Postfix set up as a satellite to listen on port 587. I can send email outside fine through the Postfix (Ubuntu) box from the local network with no problems. When I try to connect to the box from an external IP and send mail, it spits back a "554 5.7.1 Relay access denied" error. I can telnet to it fine, I just can't send mail. This is my main.cf:

        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        append_dot_mydomain = no
        readme_directory = no
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options =
        smtp_use_tls = no
        myhostname = cotiso-desktop
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = mydomainname.com, cotiso-desktop, localhost.localdomain, localhost
        relayhost = smtp.mydomainname.com
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        inet_protocols = all

    There is no security set up yet; I'm just trying to get it working first. Any ideas? Thanks in advance.
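
    A hedged pointer rather than a confirmed fix: the smtp_sasl_* lines above configure Postfix as a SASL client toward the relayhost, while relaying for external senders is governed by the server-side smtpd_* settings, which by default only trust $mynetworks - hence the 554 for anyone outside. A minimal sketch of server-side authenticated relaying (this example assumes Dovecot provides the auth socket):

        # main.cf additions (sketch)
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    External clients would then authenticate on port 587 before sending, instead of being treated as open-relay attempts.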

  • PXE boot Ubuntu server - corrupt packages

    - by Stu2000
    I have set up a Cobbler PXE boot server and managed to get CentOS 5.8 to install fully automatically. Unfortunately, with Ubuntu 12.04-server-i386 it stops midway through with a message stating that packages are corrupt. I tried following this tip to unzip the Packages.gz file, which results in an empty Packages file with nothing in it. Other people suggested a touch command, which does essentially the same thing: an empty Packages file. That gets me a different message instead: "Couldn't retrieve dists/precise/restricted/binary-i386/Packages. This may be due to a network....." Does anyone know how to work around this issue? Hitting continue before applying the tip/workaround resulted in Ubuntu installing fine, but I need to be able to provide no manual input. Any advice appreciated, Stu
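
    A hedged first check (the path below is an assumption based on Cobbler's default ks_mirror layout): an empty Packages file after gunzip suggests the mirrored tree itself is truncated, so verifying the archive and re-syncing is worth trying before patching the installer:

        # test the gzip stream without extracting anything
        gunzip -t /var/www/cobbler/ks_mirror/ubuntu-12.04-server-i386/dists/precise/restricted/binary-i386/Packages.gz
        # if it reports corruption, re-import the ISO for that distro, then
        cobbler sync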

  • Saudi Arabian Retail Distribution Business Ajlan & Bros Selects Oracle Commerce

    - by Marie-Christin Hansen
    Ajlan & Bros has selected Oracle Commerce in a bid to improve its customer engagement capabilities and drive its expansion plans. The large Middle Eastern retail distribution business, which specializes in the design, manufacture and supply of clothing across the Middle East, is seeking to expand its operations, which consist of a distribution network of more than 7,000 points of sale and represent more than 15 international brands. The business is aiming to build brand awareness globally, with an interest in the European and American markets. Choosing Oracle Commerce will provide Ajlan & Bros with the capability to optimize each customer engagement, which will help to increase cross-channel promotion and deliver a unified online, mobile and social experience for customers. The company will be able to leverage Oracle Commerce's advanced marketing and personalization capabilities, with enhanced integrated search and content management functionality across its channels. The selection of Oracle Commerce followed an extensive evaluation of competitor solutions, with Oracle selected due to the solution's strong capabilities in cross-channel ecommerce and customer experience management, as well as a solid track record of maintaining best practice. Press release: Ajlan & Bros Selects Oracle Commerce to Support Expansion Strategy

  • OSX Parallels 5 - can't share internet connection when using host-only networking...

    - by Steve Kirtley
    I've just upgraded from Parallels 3 to Parallels 5, but am having a problem matching my previous configuration. I am a web developer, so I run a local web server on my Mac. I used to allow access to it from the virtual machines in Parallels by using 'Host-Only Networking' and then, in OS X, enabling internet sharing from my wifi/ethernet to the virtual ethernet ports that Parallels created. The setup was based on: http://www.craigfrancis.co.uk/features/setup/parallels/ The new version of Parallels doesn't create any network adaptors that are available for internet sharing in OS X - just VNICs, which only show up under ifconfig... Can anyone suggest how to make this all play nice? Thanks in advance! Steve
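
    One avenue to experiment with, under the assumption that the vnic interfaces behave like ordinary BSD network interfaces (a sketch, not verified against Parallels 5): if Internet Sharing won't list them, manual forwarding from the Mac sometimes substitutes for it:

        # confirm the Parallels host-only interface exists and note its name
        ifconfig | grep -A 2 vnic
        # let the Mac forward packets between its interfaces and the guests
        sudo sysctl -w net.inet.ip.forwarding=1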

  • Wired and wireless connections: force Windows to connect to laptop through Ethernet?

    - by danielkza
    I have a desktop connected to the internet and to my home network through Wi-Fi, and a laptop connected to said desktop through an Ethernet cable. But Windows seems to reach the laptop only through Wi-Fi, and I want to transfer files through the wired connection instead. Setting up Internet Connection Sharing and disconnecting the laptop from Wi-Fi altogether doesn't seem like the most elegant solution to me. I also thought about going to the hosts file and setting up the IP address manually, but that would make the laptop completely unavailable when it's not wired, which unfortunately happens quite often. Is there any way for me to tell Windows to use the wired connection for a particular host if possible, and fall back to any other route it finds otherwise?
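
    A sketch of one approach (all addresses and the interface number below are hypothetical): Windows picks a path from the routing table, so a host route pinned to the Ethernet adapter steers traffic for just that one machine, and when the wired interface is down its routes drop out of the active table, letting Wi-Fi take over:

        :: find the wired adapter's interface number and its IP
        route print
        :: persistent host route for the laptop via the wired NIC (IF 12 here)
        route -p add 192.168.2.10 mask 255.255.255.255 192.168.2.1 metric 1 if 12

    Here 192.168.2.10 stands in for the laptop's wired address and 192.168.2.1 for the desktop's own wired IP.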

  • How to get Atheros ar242x wireless adapter working under Debian Linux?

    - by Mark
    Does anybody know how to get the Atheros AR242x wireless adapter working under Debian Linux (5.0.2 and/or 5.0.3)? My Debian live CDs and install CDs both don't like this card at all. Curiously, it seems to work on other, Debian-based Linuxes. Is this a free/non-free driver issue? I know Debian gets mardy about that. Although, for what it's worth, the live CD doesn't seem to detect my wired LAN connection either... Specifically this is on a Samsung R610 laptop (some versions of which seem to have an Intel wireless adapter - this one definitely doesn't!). I've tried all sorts of things, but obviously installing software on a live CD is limited. I've also tinkered with network config files and kernel modules etc., but to no avail.
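
    A few low-risk checks that narrow this down (a sketch; the AR242x is generally handled by the ath5k driver in kernels of that era, though whether Lenny's stock kernel covers this exact PCI ID is an assumption to verify):

        lspci -nn | grep -i atheros            # confirm the exact chip / PCI ID
        sudo modprobe ath5k                    # try loading the driver by hand
        dmesg | grep -i -e ath -e firmware     # look for bind or firmware errors

    If ath5k is missing or too old, a newer kernel from lenny-backports is the usual next step.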

  • SQL Server Agent cannot start

    - by Keith Phạm
    http://imageshack.us/photo/my-images/819/sqlserveragent3.png/ http://imageshack.us/photo/my-images/341/sqlserveragent.png/ I still cannot start SQL Server Agent. The agent starts and then stops immediately; I don't know what's happening in between. I already set it to log in with the Network Service account in Configuration Tools and assigned that account a member role in the msdb database for SQL Server Agent. Please give me some advice. I use Windows 7 Ultimate, SQL Server 2008:

        Microsoft SQL Server Management Studio 10.0.1600.22 ((SQL_PreRelease).080709-1414)
        Microsoft Data Access Components (MDAC) 6.1.7600.16385 (win7_rtm.090713-1255)
        Microsoft MSXML 3.0 4.0 5.0 6.0
        Microsoft Internet Explorer 9.0.8112.16421
        Microsoft .NET Framework 2.0.50727.4971
        Operating System 6.1.7600

    Thanks
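
    A hedged diagnostic rather than a fix: when the Agent dies right after starting, it usually writes the real reason to its own log before exiting. Two ways to read it (the file path is the default-instance location; adjust for a named instance):

        -- from a query window: the second argument, 2, selects the Agent log
        EXEC master.dbo.xp_readerrorlog 0, 2;

        -- or open the file directly:
        -- C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\SQLAGENT.OUT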

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred.

    Let's look at the problem without any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. To save space, both sides of the transfer can be recorded in a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent, and the writes will be serialized. They will not scribble on the same document at the same time.
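
    To make the ledger concrete, here is a minimal sketch in the mongo shell (collection and field names are illustrative, not from a real system):

        // One document records both sides of the transfer, so the
        // write is atomic at the document level.
        db.ledger.insert({
            ts: new Date(),
            from: "A",
            to: "B",
            amount: 100,
            validated: false
        })

        // Validation: A's balance = sum of credits minus sum of debits.
        var credits = db.ledger.aggregate([
            { $match: { to: "A" } },
            { $group: { _id: null, total: { $sum: "$amount" } } }
        ]).toArray()
        var debits = db.ledger.aggregate([
            { $match: { from: "A" } },
            { $group: { _id: null, total: { $sum: "$amount" } } }
        ]).toArray()
        // If debits exceed credits, insert the compensating (reverse) entry.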
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, writing the ledger *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write-concerns. A write-concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks", with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write-concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read-lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From then on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this. Similar to write-concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then, if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from the read-only instances. They have no basis to expect that the ledger has just been written to.
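
    As a rough sketch of how those knobs look in the (2.6-era) mongo shell - again with illustrative names:

        // Wait until a majority of the replica set has persisted the entry.
        db.ledger.insert(
            { ts: new Date(), from: "A", to: "B", amount: 100, validated: false },
            { writeConcern: { w: "majority" } }
        )

        // Person A's app reads its own writes from the primary...
        db.ledger.find({ from: "A" }).readPref("primary")
        // ...while everyone else reads from a secondary and tolerates the lag.
        db.ledger.find({ to: "B" }).readPref("secondaryPreferred")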
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that a query against the ledger never returns a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.

  • NFS share access - Permission denied

    - by rgngl
    I'm trying to share a directory on my NAS device (WD MyBook WE) over NFS with another machine on my local network. The directory on the NAS device looks like this:

        drwxr-x--- 15 git git 4096 Nov 17 01:05 git/

    And the IDs of the user git on the NAS device are:

        [root@myhost DataVolume]# id git
        uid=505(git) gid=505(git)

    I played with many different parameters in the /etc/exports file, and this is what I have there currently:

        /DataVolume/git 192.168.0.20(async,rw,no_root_squash,no_subtree_check)

    On the client side I have the user git and group git with the same IDs, to match the ones on the server:

        user@myclient:~$ id git
        uid=505(git) gid=505(git) groups=505(git)

    I mount the directory with:

        sudo mount myhost:/DataVolume/git -t nfs git/

    and the mounted directory looks like:

        drwxr-x--- 15 git git 4096 Nov 17 01:05 git

    After these steps I can't cd into that directory as any user, including git and root. I am getting a Permission denied error. Thanks in advance for any help.
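
    One hypothesis worth testing (a sketch, since WD's firmware may layer its own ID mapping on top of standard exports semantics): squash every client to the NAS-side git account, so permissions on the directory are evaluated as that user regardless of how the client's uids map:

        /DataVolume/git 192.168.0.20(rw,async,all_squash,anonuid=505,anongid=505,no_subtree_check)

    Then re-export on the NAS (exportfs -ra, if the firmware provides it) and re-mount on the client.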

  • DPKG errors after upgrade to 12.10

    - by James Wulfe
    So I was doing fine, then I upgraded my system to 12.10, and now I can't get my system to update all of its packages properly. No matter what I do - cleaning the apt cache, manual install using dpkg, etc. - I just can't get them to install. What is happening here, and how do I fix it? If I had thought 12.10 would be this much of a hassle I would never have upgraded..... Here is a sampling of the output from "apt-get -f install":

        Preparing to replace usb-modeswitch-data 20120120-0ubuntu1 (using .../usb-modeswitch-data_20120815-1_all.deb) ...
        /var/lib/dpkg/info/usb-modeswitch-data.prerm: 4: /var/lib/dpkg/info/usb-modeswitch-data.prerm: dpkg-maintscript-helper: Input/output error
        dpkg: warning: subprocess old pre-removal script returned error exit status 2
        dpkg: trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/prerm: 4: /var/lib/dpkg/tmp.ci/prerm: dpkg-maintscript-helper: Input/output error
        dpkg: error processing /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 2
        /var/lib/dpkg/info/usb-modeswitch-data.postinst: 7: /var/lib/dpkg/info/usb-modeswitch-data.postinst: dpkg-maintscript-helper: Input/output error
        dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         /var/cache/apt/archives/network-manager_0.9.6.0-0ubuntu7_i386.deb
         /var/cache/apt/archives/pcmciautils_018-8_i386.deb
         /var/cache/apt/archives/unity-common_6.10.0-0ubuntu2_all.deb
         /var/cache/apt/archives/whoopsie_0.2.7_i386.deb
         /var/cache/apt/archives/usb-modeswitch_1.2.3+repack0-1ubuntu3_i386.deb
         /var/cache/apt/archives/usb-modeswitch-data_20120815-1_all.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    It is also just these 6 packages; no other packages have given me this kind of trouble - well, as of now. It was just 5, but then I got an update for Unity, and now unity-common has joined the troublemakers, which prevents me from upgrading the actual unity package, as unity-common is a dependency of it.
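
    The "dpkg-maintscript-helper: Input/output error" lines are the interesting part: I/O errors from the shell running a maintainer script usually point below dpkg, at the disk or filesystem, rather than at apt's metadata. A hedged pair of first checks:

        # look for kernel-level read errors around the time of the failures
        dmesg | grep -i -e "i/o error" -e ext4
        # schedule a filesystem check on the next boot
        sudo touch /forcefsck && sudo reboot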

  • redirect all youtube video requests to a specific one

    - by iTayb
    I'm on an IT team in my company, and I would like to block YouTube for users. I don't want to just deny access to the whole youtube domain, but rather to replace the .flv/.mp4 request with one that I choose. That way, if someone tries to watch YouTube videos on the network, he'll get a video explaining why using our expensive bandwidth for pleasure is a no-no. I thought about using a packet manipulation program and just replacing the video ID with something I want, but I didn't manage to do it right.
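
    If an HTTP proxy already sits at the egress, this is a job squid's rewriter hook was built for: a helper program sees every requested URL and may substitute its own. A sketch under that assumption (squid 3.x directives; the URL patterns and target clip are illustrative, and HTTPS video would bypass this entirely):

        # squid.conf
        url_rewrite_program /etc/squid/rewrite_youtube.py
        url_rewrite_children 5

        #!/usr/bin/env python
        # rewrite_youtube.py - reads "URL client ident method" lines from squid;
        # prints a replacement URL, or a blank line to leave the URL unchanged.
        import sys

        TARGET = "http://intranet.example.com/bandwidth-notice.mp4"

        for line in sys.stdin:
            parts = line.split()
            url = parts[0] if parts else ""
            if "videoplayback" in url or "googlevideo.com" in url:
                sys.stdout.write(TARGET + "\n")
            else:
                sys.stdout.write("\n")
            sys.stdout.flush()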

  • Creating a backup - Rsync - Connection refused (111)

    - by pablofiumara
    I am trying to create a backup of my website for free. I just want a backup of my website, including not only all the files and the configuration but also the databases - a full backup. If it can be done automatically, even better. I feel there are better ways than using cpanel to achieve this (actually, I believe some web hosts don't have any cpanel at all). I read the following on how to do it: "Automatically mirror the entire contents and configuration of your main server to a secondary backup server on a completely separate network in a different data centre. Use RSync, FXP, cPanel voodoo, or whatever method you wish to automate syncing." That is why I installed the rsync daemon, which is an alternative to SSH for remote backups. I configured it, but the test went wrong. The terminal shows me this:

        pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
        [sudo] password for pablofiumara:
        rsync: failed to connect to pablofiumara.com (50.87.147.75): Connection refused (111)
        rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
        pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
        failed to connect to 50.87.147.7 (50.87.147.7): Connection refused (111)
        rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]

    What should I do? Is there a better or easier way to achieve what I want (see the first paragraph)?
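
    "Connection refused" on the daemon port means nothing is listening on TCP 873 at the far end - shared web hosts rarely run rsyncd. If the account allows SSH (many do), rsync can tunnel over SSH instead, with no daemon needed on the server. A sketch with placeholder names and paths:

        # pull the site files into a local backup directory over SSH
        rsync -avz -e ssh user@yourhost.example.com:public_html/ ./site-backup/
        # databases aren't safe to copy as live files; dump them first,
        # then fetch the dump the same way (MySQL shown as an example)
        ssh user@yourhost.example.com 'mysqldump -u dbuser -p dbname > backup.sql'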

  • I just ordered a 70/10 line, and I think I need a new router?

    - by data_jepp
    Before, I had the 25/5 line and the 802.11n router did just fine. Now it doesn't do the job. An online speedtest reads 82, so I have the line; but my laptop is getting less than 30 in my room. My laptop has the following WiFi card: http://www.intel.com/content/www/us/en/wireless-products/centrino-advanced-n-6205.html What is this talk about 2.4 and 5 GHz? Can my laptop be connected over both bands at once? Would that let me use the full 70Mb over wifi? Hope it's ok to ask network questions here.

  • Internet is far slower in Ubuntu than Windows 7 on dual-booted machine

    - by Tim
    Edit: I'll leave the original post as-is, but after further investigation it appears that the problem is something to do with my wi-fi card. Speeds are normal when I connect via cable. Edit 2: Problem was solved. It was something to do with the wireless card drivers. I normally use Windows 7 on my laptop and have internet speeds of about 15-20 Mb/s. I have recently dual-booted with Ubuntu 12.10, and have noticed that internet speeds are drastically slower in Ubuntu. When tested, speeds range from 0.2-2 Mb/s, though they are occasionally significantly faster than that, or stop completely for short periods of time. I've also noticed that when first booting into Ubuntu, speeds start fairly fast and drop to incredibly slow within a few seconds to a few minutes. There's still some possibility that the issue is with my ISP, as things seem slower than usual even in Windows, but I suspect it is related to Ubuntu, as things are far slower in Ubuntu than in Windows. I'm wondering, what could be the cause of this? Potentially relevant information: I've dual-booted before on this machine with earlier versions of Ubuntu (different ISP at the time) with no problem. ISP: Rogers (major Canadian ISP). System info (Gateway NV53a laptop):

        Operating System: MS Windows 7 Home Premium 64-bit
        CPU: AMD Phenom II N970 Caspian 45nm Technology
        RAM: 6.00 GB Dual-Channel DDR3 @ 664MHz (9-9-9-24)
        Motherboard: Gateway SJV51_DN (Socket S1G4)
        Graphics: Generic PnP Monitor (1366x768@60Hz), ATI Mobility Radeon HD 4250 (Acer Incorporated [ALI])
        Hard Drives: 733GB TOSHIBA MK7559GSXP ATA Device (SATA)
        Networking: connected through Wi-Fi, Atheros AR5B97 Wireless Network Adapter
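
    Since the poster's own fix was "wireless card drivers", a hedged note on what that often meant for this chip: the AR5B97 uses the ath9k driver, and two of its settings were frequent throughput killers on 12.10-era kernels. Worth testing one at a time:

        # 1. rule out wireless power management throttling the link
        sudo iwconfig wlan0 power off
        # 2. if that changes nothing, try disabling hardware crypto offload
        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k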

  • How to estimate server specifications for this particular system? [on hold]

    - by Alvaro Fallas
    I'm working on a college project, and I'm supposed to specify the server hardware to host a system. The system is a kind of social network. It is supposed to hold around 100,000 users in the first year, and it must be able to handle 1,000 users working at the same time. This is the first time I've been asked to do something like this, so I hope you can give me a hand, because I feel a little lost. The system's database is MySQL. I found some server configurations offered by Amazon Web Services, but due to my lack of experience I don't know which of them is best for my system. I hope you can help me.

  • Cat 6 only 100Mbit speed

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one, which I ordered online (http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product), only achieves 100Mbit speeds, though it does support a direct PC-to-PC (crossover) connection; the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor, with a direct PC-to-PC connection. Did the product from Amazon lie to me, or do I need to set something manually in Ubuntu for some cables? I had thought 10 quid for 20m of gigabit ethernet cable was a bit cheap; you get what you pay for... Regards, Stu

    Update: It seems that after rebooting, the device is set to 1000Mbit/s when I look it up with sudo ethtool eth0. However, after a while this drops down to just 100, after which the only way to get back to 1000 is to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the networking config file as suggested here:

        auto eth0
        iface eth0 inet static
        pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

    but that resulted in my networking failing to start. Is there a problem with my 'auto-negotiation' or something? Can I manually override a setting to 1000Mbit?
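
    A hedged note on why forcing the speed backfires: 1000BASE-T links are only established through autonegotiation, so forcing "speed 1000 duplex full" cannot produce a gigabit link (and the stanza above also lacks an address/netmask, which by itself stops networking from starting). A link that negotiates 1000 and later falls back to 100 is classic marginal-cable behaviour, since gigabit needs all four pairs while 100Mbit only needs two. Non-destructive things to try:

        # see what was negotiated and what the link partner advertises
        sudo ethtool eth0
        # restart autonegotiation without rebooting
        sudo ethtool -r eth0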

  • SMTP server problem

    - by ram
    Hi, our requirement is to send weekly newsletters to our website customers, for which we wanted a locally hosted SMTP server in our office. We are not using the SMTP server provided by our website hosting provider, as we wanted to reduce network traffic and avoid IP blocking due to bulk mail. We send the newsletters weekly from our local SMTP server. But for some reason, some emails go to spam, some never reach customers, and sometimes we get bounce messages telling us to follow bulk email guidelines (mainly from Gmail). Can you please suggest how to solve this? I would also like to know what kind of technology LinkedIn or banks generally use to send notification emails to all their customers. When they send bulk email, it always reaches the inbox without any problem. I want to implement the same kind of solution for my website. Thank you very much in advance.
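
    Deliverability from an office IP usually hinges on sender authentication more than on the MTA itself. As a starting sketch (the domain and address below are placeholders): publish an SPF record naming the office server, and make sure the IP's reverse DNS resolves to the mail server's hostname:

        ; DNS zone entry (sketch) - authorizes this IP to send for the domain
        example.com.   IN TXT "v=spf1 ip4:203.0.113.25 -all"

        # reverse DNS should point back at the mail server's name
        dig -x 203.0.113.25 +short

    DKIM signing and a gradual ramp-up of sending volume are the usual next steps for Gmail's bulk-sender guidelines.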

  • Why is the time elapsed connecting to a server different every run?

    - by user1634619
    I have a small program which connects to a server of my choice and measures the time taken to do so. Each time I run it, it returns a different result. My question is: what does this time depend on? Network congestion, for one. If I choose a server that has multiple addresses, e.g. google.com, the length of the physical link may differ from time to time. Is it safe to assume that this also affects connection time? Are there any other factors in play?
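
    Two of the biggest variables are easy to separate with a quick sketch: DNS resolution (which may return a different address on each run) and the TCP handshake (roughly one round trip, so it tracks congestion and path length):

        # minimal timing split using only the Python standard library
        import socket, time

        host = "google.com"
        t0 = time.time()
        addr = socket.gethostbyname(host)          # DNS lookup
        t1 = time.time()
        s = socket.create_connection((addr, 80))   # TCP three-way handshake
        t2 = time.time()
        s.close()
        print("dns %.1f ms, connect %.1f ms" % ((t1 - t0) * 1e3, (t2 - t1) * 1e3))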

  • High availability for Windows Service under Windows Server 2003

    - by empi
    Hi. I have the following situation: I need to deploy a Windows service that listens for incoming requests on a TCP port (basically a WCF service). I have a high-availability requirement: the service must be deployed on two servers, and if the service stops on one server (only the service, not the whole server), all requests must be redirected to the second one. To me it looks like a basic failover scenario. How can I achieve this on Windows Server 2003? Should I use Microsoft Cluster Service or Network Load Balancing? The important part is that swapping the servers must not concern the clients (the client must see only a single address / single host or domain name). Thanks in advance for the help.

  • data replication from a production web server back to the staging web server

    - by Dennis Smith
    We have two web servers: development/staging and production. Code and some documentation are moved from the staging area to production either through on-demand jobs or nightly via a global replication job. The production server, of course, sits isolated in a DMZ. Some content gets uploaded to the live server and needs to be replicated back to staging. Our security team is locking the network down (and they should) and restricting access to the live server. I'm looking for suggestions for replicating "live" data back to "stage", and for backing up the live server as well.
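
    One pattern that tends to survive security review (a sketch; hosts, key and paths are placeholders): the DMZ box never initiates connections - instead, staging pulls from it over SSH with a key restricted to rsync:

        # run from the staging server, e.g. nightly from cron
        rsync -avz --delete -e "ssh -i /home/deploy/.ssh/pull_key" \
            deploy@live.example.com:/var/www/uploads/ /var/www/staging/uploads/

    The same pull can feed the backup target, so a single firewall rule (stage to live, TCP 22) covers both needs.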

  • replacing 3 Cisco Catalyst 4500

    - by hoberion
    Our network supplier recommends replacing our 3 Cisco Catalyst 4500s because they are EOL and don't speak OSPF (which we really want). It's not my area of expertise, so I can't say for sure whether we really need to replace these units, but for my company the estimated cost of 250K euro is a huge problem. Is there any way to cut down on costs (without moving away from Cisco devices)? I heard the 4500s can speak OSPF but would need an upgrade of some sort? edit:

        version: IOS (tm) Catalyst 4000 L3 Switch Software (cat4000-I9K91S-M), Version 12.2(20)EW, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)
        supervisor: WS-X4013+ Cisco Catalyst 4500 Series Supervisor Engine II-Plus
        density: WS-X4306-GB Cisco Catalyst 4500 Gigabit Ethernet Module, 6 Ports (GBIC)
        WS-X4306-GB Cisco Catalyst 4500 Gigabit Ethernet Module, 6 Ports (GBIC)
        WS-X4548-GB-RJ45 Cisco Catalyst 4500 Enhanced 48-Port 10/100/1000 Module (RJ-45)
        WS-X4548-GB-RJ45 Cisco Catalyst 4500 Enhanced 48-Port 10/100/1000 Module (RJ-45)
        WS-X4548-GB-RJ45 Cisco Catalyst 4500 Enhanced 48-Port 10/100/1000 Module (RJ-45)

  • ZFS: RAIDZ versus stripe with ditto blocks

    - by RandomInsano
    I'm going to build a ZFS file server on FreeBSD. I learned recently that I can't expand a RAIDZ vdev once it's part of the pool. That's a problem, since I'm a home user and will probably add one disk a year, tops. But what if I set copies=3 on my entire pool and just throw individual drives into the pool separately? I've read somewhere that the copies will be distributed across drives if possible. Is there a guarantee of that? I really just want protection from bit rot and drive failure on the cheap. Speed's not an issue, since it'll go over a 1Gb network and at most stream 720p podcasts. Would my data be guaranteed safe from a single drive failure? Are there things I'm not considering? Any and all input is appreciated.
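
    For reference, the proposal in command form (pool and device names are made up), together with the usual caveat attached to it: ZFS tries to place ditto copies on different vdevs, but copies=N does not guarantee it, and - to the best of my understanding - a pool that loses a whole single-disk vdev will generally fail to import even if many blocks have surviving copies, so this guards against bit rot and bad sectors more than against outright disk loss:

        zpool create tank ada1 ada2      # two single-disk vdevs, striped
        zfs set copies=3 tank            # ditto blocks for user data
        zpool add tank ada3              # next year's disk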

  • Netgear WNR1000 WiFi speed

    - by Kamil Klimek
    I have a Netgear WNR1000 150N, a MacBook Pro 13" with Broadcom BCM43xx 1.0, and a 60Mbps network connection. When I connect through the cable I easily get around 60Mbps. Over WiFi it only manages about 32Mbps at most. Any ideas why that is? Is it a limitation of my router, or maybe of my WiFi card? If it's the router's fault, what router would you suggest? Ideally the router would have a USB port for an external hard drive. Forgot to add a screenshot with the connection details: "Szybkosc transmisji" == transmission speed.

  • Fedora vs Ubuntu vs Debian to host Subversion and Bugzilla over Apache

    - by Tone
    I'm not interested in a flame war of Ubuntu vs Fedora vs Debian vs whatever. What I am interested in is whether or not I should move my current Ubuntu server to Fedora or Debian. I have been able to get Subversion set up and hosted via Apache over https, and it works quite well (I'm a .NET guy, so this was all new to me). I'm having trouble, though, with installing Bugzilla - I have run into some issues getting all the Perl scripts to run successfully - so my questions are: 1) Will Bugzilla install more easily on Fedora or Debian? Can I just install a package instead of having to download the tar.gz file, untar it, run Perl scripts, etc.? 2) Is Fedora or Debian considered a better production server system? I have no desire for a GUI; I just need it to host Subversion and Bugzilla over Apache2, and act as a file and print server for my home network.
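
    Before switching distros, it may be worth checking whether the current one already packages Bugzilla (a sketch; the package name varies by release and Bugzilla has been dropped from some archives, so treat "bugzilla3" as an assumption to verify):

        apt-cache search bugzilla
        sudo apt-get install bugzilla3   # pulls in the Apache/Perl deps if packaged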
