Search Results

Search found 30252 results on 1211 pages for 'network programming'.

  • Netgear WNR1000 WiFi speed

    - by Kamil Klimek
    I have a Netgear WNR1000 150N, a MacBook Pro 13" with a Broadcom BCM43xx 1.0 card, and a 60 Mbps internet connection. When I connect through the cable I easily get around 60 Mbps, but over WiFi I get only about 32 Mbps at most. Any idea why that is? Is it a limitation of my router, or maybe of my WiFi card? If it is the router's fault, what router would you suggest? Ideally it would have a USB port for an external hard drive. Forgot to add a screenshot with connection details: Szybkosc transmisji == transmission speed

  • Firmware/driver for Broadcom wifi card, PowerBook G4 running Ubuntu 12.04 [duplicate]

    - by user107831
    This question already has an answer here: How to Install Broadcom Wireless Drivers. This is my first time working with Ubuntu (or Linux), so please be patient. I am running Ubuntu 12.04-powerpc "Precise Pangolin" on a Mac PowerBook G4 with a 1.67GHz processor. The firmware/driver for the wifi card is missing. For reasons not worth explaining, I cannot physically plug the computer into the network. I have another computer, a MacBook Pro running OS X, from which I can download files and port them over by USB thumb drive. The wifi card in the PowerBook G4 is by Broadcom. The chip is BCM4306, rev. 3. The PCI number is 14e4:4302. I have downloaded b43-fwcutter_015-14_powerpc.deb and dropped it into the Home folder on the Ubuntu machine. However, it will not install. When I double-click, it opens with Ubuntu Software Center, but the "Install" button is inactive: I can't click it. There's a message beside the inactive button saying, "An older version of 'b43-fwcutter' is available in your normal software channels. Only install this file if you trust the origin." If I right-click the .deb file and open it with Archive Manager, it shows me the "DEBIAN" and "usr" folders, but I'm unsure what to do from there...and fairly certain this is not the right way to do things. Maybe I have the wrong version of b43-fwcutter for my machine/version of Ubuntu? The documentation for this problem is a mess. It refers to all sorts of out-of-date Ubuntu versions and to an array of different "cutter" and firmware files. Maybe I'd be able to figure this out if I were a more seasoned Ubuntu user, but I have no idea why Software Center won't let me do the install. I would be VERY grateful for an explanation of how to get the wifi card working on this machine again. Thank you!
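
    A hedged sketch of the command-line route, in case Software Center keeps refusing; the package name matches the one in the question, while the firmware blob path is only a placeholder, since b43-fwcutter extracts firmware from a separately downloaded Broadcom driver file:

      # Install the downloaded .deb directly with dpkg (run in the folder holding the file).
      sudo dpkg -i b43-fwcutter_015-14_powerpc.deb

      # Then extract the firmware from a Broadcom driver blob fetched on the OS X machine
      # and carried over on the thumb drive; "path/to/wl_apsta.o" is a placeholder.
      sudo b43-fwcutter -w /lib/firmware path/to/wl_apsta.o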

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: prevent any change to A's account while the transfer is taking place, which boils down to locking; apply the change and allow A's balance to go below zero, charging person A some interest on the negative balance (not friendly, but certainly a choice); or do neither. Options 1 and 2 are difficult to attain in the NoSQL world, and Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, both sides can be captured in one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to: 1) insert an entry in the ledger, 2) run validation of recent entries, 3) insert a reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on transactions with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions: you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees an "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the new ledger entry. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll about how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never shows customers a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but we don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
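
    To make the ledger idea concrete, here is a minimal sketch driven from bash with the mongo shell; the database name ("bank"), collection name ("ledger"), and account ids are hypothetical, not taken from the post.

      # 1. Record the transfer as a single immutable ledger entry.
      mongo bank --quiet --eval '
        db.ledger.insert({ ts: new Date(), from: "A", to: "B", amount: 100 });
      '

      # 2. Validate: sum every entry touching account A and check the resulting balance.
      mongo bank --quiet --eval '
        var balance = 0;
        db.ledger.find({ $or: [ { from: "A" }, { to: "A" } ] }).forEach(function (e) {
            if (e.to === "A")   balance += e.amount;   // money paid into A
            if (e.from === "A") balance -= e.amount;   // money paid out of A
        });
        print("balance for A: " + balance);
        // If balance < 0, insert a compensating (reverse) entry to roll the transfer back.
      '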

  • Suggest Wireless AP

    - by sunny
    I'm doing a data and voice install for a client in the hotel industry. I'm done with voice and am looking for my options to provide a Wireless AP. The building's dimensions are 100ft X 50ft. There are a ton of options out there which have left me confused now. Please help me decide. I am not clear as to how I should ensure that the Wireless Network is visible throughout the premises. Personally I would love to setup a WDS on 3-4 linksys wrt54gl routers using OpenWRT. Is this advisable? If not please recommend some other AP's. If a more expensive appliance is absolutely necessary, then please suggest something that can be powered using IEEE 802.3af PoE. Thanks

  • YOUR FREE, EXCLUSIVE, ONLINE UPDATE ON FANTASTIC NEW ORACLE PARTNER OPPORTUNITIES - REGISTER TODAY!

    - by Claudia Costa
    New products. New specializations. New opportunities. There really has never been a better time to be an Oracle partner! Find out exactly what Oracle's "Software. Hardware. Complete." strategy, and the latest developments in the OPN Specialized program, mean for your business. Register now for the Oracle PartnerNetwork Days Virtual Event on the 29th of June at 11:00h to learn:
      - How to use Oracle's uniquely comprehensive technology stack to grow your business
      - How specialization with Oracle can significantly improve your competitive position
      - How the Oracle PartnerNetwork is evolving to help you succeed
    Highlights include important updates from Oracle EMEA strategy, partner and product leaders, a live link to the Oracle FY11 Global Partner Kickoff, and interviews with local Oracle partners that are already enjoying the benefits of specialization. The event will also feature:
      - Live Q&A sessions with our speakers
      - Virtual information booths packed with useful information
      - Opportunities to network with Oracle experts and your peers
      - A special guest speaker: a former Microsoft executive who has used the principles of specialization with spectacular results to become one of the world's most successful social entrepreneurs
    Plus, at the end of the event, you can submit your feedback form for your chance to win two passes to Oracle OpenWorld in San Francisco this September! CLICK HERE TO REGISTER NOW!

  • mounts aren't case-sensitive

    - by Asi
    I mounted a few drives from Linux boxes on my network, but those mounts aren't case-sensitive. The mount command I used (according to man mount.cifs, case-sensitive should be the default): mount //10.0.1.10/remote_folder /local_folder -t cifs -o username=xxxx,password=xxxx but those mounts aren't case-sensitive. For example, ls -l /local_folder/testfile.txt and ls -l /local_folder/TESTFILE.TXT give the same result instead of 'file not found'. A couple of important points: All drives are running on Linux machines. My local machine is running Fedora 18 and it is case-sensitive for ANY folder/file except the mounted drives. All drives/mounts are case-sensitive when doing SSH. So if I SSH from my local machine to a remote machine, doing ls -l /local_folder/TESTFILE.TXT will say 'file not found' as it should. So I believe the issue is on my local machine and not in the way I did the mount, but I'm not sure where to look next (I'm new to Linux).
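
    A small diagnostic sketch rather than a fix: it only shows which options the kernel actually applied to the CIFS mounts, since an unexpected case-related option (for example nocase coming from an /etc/fstab entry) would explain the behaviour.

      # List the live mount options for every CIFS mount and compare them with the
      # options you passed on the command line.
      grep cifs /proc/mounts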

  • Allow WRITE access to local machine folders in 2003 SBS AD

    - by Dan M.
    I have an SBS 2003 client with a mess of a domain that is in the process of being cleaned up. But for the life of me I cannot find a setting that will allow write access to the local hard disk for domain users with profiles redirected to the server. This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive. So the question is: how can I allow "Domain users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried. Every time I would un-check the read-only property it would reset sometime right after I hit OK. Thanks in advance! Dan

  • Migrating an Active Directory domain controller to AWS

    - by Xavier Hutchinson
    I am required to migrate an Active Directory server into AWS with a couple of other servers (SQL and IIS) to create a dev and test environment for our network / development. My plan at this time is to simply rebuild the Active Directory server in AWS from scratch - which is quite time consuming indeed! I was wondering if anyone had a recommendation as to a better and more efficient approach to migrating a copy of a physical Active Directory server to the cloud? The server is Windows Server 2012. Thank you!

  • Why can't I access a particular website even though the server appears to be available

    - by 50ndr33
    I can't access http://www.lynda.com/ with any of my browsers on my home network. By checking http://www.downforeveryoneorjustme.com/, I can see that the server is up, and I can access it via a proxy like Tor. The error screen appears immediately after I type the address in; it doesn't even seem to try to load the page. When I ping the server, though, I get this: I tried ipconfig /flushdns, but it didn't help either. Anyone know how to fix this?

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration. Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after the Helsinki University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provided no security and transmitted data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks. One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:
      scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote
    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.
      [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
      user1@test2's password:
      myfile                                    100%    0     0.0KB/s   00:00
    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes and enabled in the /etc/ssh/sshd_config file:
      AllowTcpForwarding yes
    Once you have this configured, you can connect to the server and set up a local port to which you can direct traffic that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies local dynamic application-level port forwarding and the -N specifies not to execute a remote command.
      ssh -D 8989 [email protected] -N
    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.
      ssh -L 8080:farwebserver.com:80 [email protected]
    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.
      [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
      rchase@test2's password:
      total 24
      drwx------  2 rchase rchase 4096 Sep  6 15:17 .
      drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
      -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
      -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
      -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
      -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc
    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts. To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.
      [rchase@test1 .ssh]$ ssh-keygen -t rsa
      Generating public/private rsa key pair.
      Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
      Enter passphrase (empty for no passphrase):
      Enter same passphrase again:
      Your identification has been saved in /home/rchase/.ssh/id_rsa.
      Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
      The key fingerprint is:
      7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
      The key's randomart image is:
      +--[ RSA 2048]----+
      |                 |
      |  . .            |
      |   o .           |
      |    . o o        |
      |   o o oS+       |
      |  +   o.= =      |
      |   o ..o.+ =     |
      |    . .+. =      |
      |     ...Eo       |
      +-----------------+
    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.
      [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
      rchase@test2's password:
      id_rsa.pub
    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.
      [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys
    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.
      [rchase@test1 ~]$ ssh test2
      Last login: Fri Sep  6 17:42:02 2013 from test1
    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.
      [rchase@test1 ~]$ ssh test2 'ls -la'
      total 32
      drwx------  3 rchase rchase 4096 Sep  6 17:40 .
      drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
      -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
      -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
      -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
      -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc
    As a security consideration it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system. An even easier way to move keys back and forth is to use ssh-copy-id.
    Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.
      [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2
    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.
      Host dev1
          Hostname somereallylonghostname.somereallylongdomain.com
          Port 28372
          User somereallylongusername12345678
    Let's compare the login process between the two. Which would you want to type and remember?
      ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
      ssh dev1
    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.
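
    The post's permissions note doesn't show the commands, so here is a tiny companion sketch (run on the remote host, using the default paths mentioned above):

      # Tighten the permissions ssh expects before relying on key-based logins.
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys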

  • XenServer migrate machines between hosts

    - by Hubert Kario
    I have a XenServer 5.6 Free setup with 5 VMs (Windows and Linux) using about 1.5TB of directly attached storage. Because our virtualisation needs have grown a bit, we are currently preparing a faster XenServer 6.0 Free machine with more RAM and more storage. Again, directly attached disks. How can I migrate the VMs between XenServer machines? I don't need to keep the machines up and running during migration, but using VM export and import would definitely take too long. Would making a VM with the same configuration on the new host and dd'ing the LVM volume over the network be the only quick and least painful solution? Are there any "gotchas" I should look out for when doing something like this? The old machine has an AMD Phenom II, the new one has Intel Xeon E5 CPUs.
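
    A rough sketch of the dd-over-the-network idea mentioned in the question; the volume group and logical volume names are placeholders for your own setup, and the source VM must be shut down before copying:

      # Stream the shut-down VM's logical volume into a matching, same-sized volume
      # already created on the new host; bs=4M just keeps the pipe efficient.
      dd if=/dev/VG_XenStorage-old/LV-vm-disk bs=4M | \
          ssh root@new-xenserver 'dd of=/dev/VG_XenStorage-new/LV-vm-disk bs=4M'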

  • Remembering sharepoint password in Internet Explorer 8

    - by enableDeepak
    I am using IE8 to open a SharePoint portal on the local network. Initially, I clicked "remember password" after entering my domain credentials. However, now I want SharePoint to ask for credentials again. I've tried many options: deleted all cookies, deleted everything under the IE Security tab and Form Autocomplete, restarted my machine, and everything else I could think of. Still, when I open the portal, SharePoint logs me in automatically. What should I do to make IE ask for credentials again?

  • Cisco T1 Routing Help

    - by Joseph
    Thanks to someone on this site I was able to get the Serial0/0 interface up. I now have: DCD=up, DSR=up, DTR=up, RTS=up, CTS=up. My next challenge seems to be in the routing and/or PC IP setup. These are the pertinent details from L3: WAN Network: 6.59.186.60/30, Level3 Side: 6.59.186.61, Customer Side: 6.59.186.62, Cust. LAN IPs: 6.59.192.224/27. What would be the IOS commands to set up this route correctly? Am I correct that I would then choose an IP like 6.59.192.224, subnet 255.0.0.0, gateway 6.59.186.62? Thanks

  • Firewalling a Cisco ASA Split tunnel

    - by dunxd
    I have a Cisco ASA 5510 at head office, and Cisco ASA 5505 in remote offices. The remote offices are connected over a split tunnelled VPN - the ASA 5505s use "Easy VPN" Client type VPN in Network Extension Mode (NEM). I'd like to set firewall rules for the non-tunnelled traffic only. Traffic over the VPN to head office should not have any firewall rules applied. I might want to apply different firewall rules to different remote offices. All the documentation I have been able to find assumes the Client VPN is a software endpoint, and all the configuration is done at the 5510. When using a Cisco 5505 as the VPN client, is it possible to configure any firewalling at the Client end, or does it all have to come from the 5510? Are there any other issues to look out for when split-tunnelling a VPN by this method?

  • Windows Firewall failing after 9-12 hours?

    - by routeNpingme
    I have 2 VM servers in the exact same NIC configuration: Server 2003 R2, one NIC connected to private (hardware firewall) network in a 10.x private address space, and one NIC connected straight to public internet. Windows Firewall is enabled for the Public Internet NIC only. Now, what doesn't make sense - this fails generally after 9-12 hours. It's not exact, but once or twice a day, traffic will just stop on the Internet NIC. No event log entries when it happens, and restarting the Windows Firewall service as well as stopping or restarting IPSec Services (just for fun) has no effect. Once the server is rebooted, everything is fine again for another 1/2 day. Any suggestions?

  • hung up troubleshooting packet discards

    - by Chris Satola
    I realize my question is generic, but hopefully someone may have some guidance for me. My network consists of Cisco switches. I am seeing a significant number of transmit drops (upwards of millions of packets per day) between two switches, one being a 3750 and the other a 3560. The peak throughput of this link is only in the upper 400 Mbps range, so it shouldn't be a bandwidth issue. At this point, I am sort of clueless where to look or what tools I can use to determine what packets are dropping and why. I can set up a SPAN port on that link and wireshark it, but I don't know if that would tell me anything. Does anyone have any suggestions? Thanks in advance.

  • IoT Wearables

    - by Tom Caldecott-Oracle
    A reprint from The Java Source Blog, by Tori Wieldt on Aug 20, 2014. Wearables are a subset of the Internet of Things that has gained a lot of attention. Wearables can monitor your infant's heart rate, open your front door, or warn you when someone's trying to hack your enterprise network. From Devoxx UK to Oracle OpenWorld to Devoxx4kids, everyone seems to be doing something with wearables. In this video, John McLear introduces the NFC Ring. It can be used to unlock doors, mobile phones, transfer information and link people. The software for developers is open source, so get coding! If you are coming to JavaOne or Oracle OpenWorld, join us for Dress Code 2.0, a wearables meetup. Put on your best wearables gear and come hang out with the Oracle Applications User Experience team and friends at the OTN Lounge. We'll discuss the finer points of use cases, APIs, integrations, UX design, and fashion and style considerations for wearable tech development. There will be gifts for attendees sporting wearable tech, while supplies last.
      What: Dress Code 2.0: A Wearables Meetup
      When: Tuesday, 30-September-2014, 4-6 PM
      Where: OTN Lounge at Oracle OpenWorld
    IoT - Wearable Resources:
      - The IoT Community on Java.net
      - Wearables in the World of Enterprise Applications? Yep.
      - The Paradox of Wearable Technologies
      - Conference: Wearable Sensors and Electronics (Santa Clara, USA)
      - Devoxx4Kids Workshop for Youth: Wearable tech! (Mountain View, USA)

  • HP Cue-Scanning Flow component freezes

    - by Nathan Fellman
    I am trying to scan with an HP network scanner (actually an E6500 all-in-one). Whenever I try to scan, it starts up a splash screen with "HP Scanning" written all over it, which proceeds to do nothing. Digging in, I found that the process that gets stuck is hpqkygrp.exe, aka "HP CUE-Scanning Flow Component". This happens when I try scanning from OneNote or from the HP Solution Center. However, it seems that scanning from Windows' Fax and Scan utility works fine. As a (probably related) side note, scanning directly from the scanner (using the buttons on its panel) doesn't work either. How can I keep this process from getting stuck?

  • Help me to set samba and apache on my Ubuntu VM from Vista, starting from ping

    - by avastreg
    OK, the title is not so clear after all, so let's start with the problem description by posting some points: I'm on Windows Vista. I have a VirtualBox Ubuntu 9.04 server (virtual machine) installed in Windows. I'm under Active Directory (maybe that helps), with network 192.168.2.x. After the Ubuntu installation (LAMP), I have: the Ubuntu IP set to 10.0.2.15 (DHCP); Vista pings Ubuntu and Ubuntu pings Vista (only by IP, not by name); I can't connect to Apache (default Ubuntu server install) at the URL h**p://10.0.2.15/; on Ubuntu, testing Apache by doing 'wget http://10.0.2.15/' works; I tried to set up Samba, writing a share definition, but nothing; I can't access Ubuntu from Vista. My goal is: setting up Samba to work on files from Windows, and reaching Apache to test web pages. OK, I'm not a complete noob (but I'm on the noob path anyway) and I've tried many solutions, so please try to help me; let's look together at what went wrong :)
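
    A hedged aside: 10.0.2.15 is the address VirtualBox hands out in NAT mode, and NAT guests are normally not reachable from the host without extra steps. One common approach (a sketch, with the VM name and ports as placeholders, for reasonably recent VirtualBox releases) is to add a NAT port-forwarding rule, or simply to switch the VM's adapter to bridged mode so it gets its own 192.168.2.x address:

      # Forward a host port to the guest's Apache while the VM is powered off;
      # "Ubuntu904" is a placeholder for the VM's name in VirtualBox.
      VBoxManage modifyvm "Ubuntu904" --natpf1 "apache,tcp,,8080,,80"
      # Then browse to http://localhost:8080/ from Vista. For Samba, a bridged
      # adapter is usually the simpler option.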

  • CentOS Vs Windows Server 2008

    - by Steve
    Hi, apologies if the question appears ambiguous; I have little experience in this area and am after some informed opinions. I am deploying a test scenario of a server/client network and need to make some choices for the server. The client will be a Windows system, as it meets the requirements for the client; the server choice has more room for selection. From my experience with Linux in general, the appealing nature of open source (low cost, security, etc.), and the availability and performance of database and web server programs, I have been considering CentOS as the server choice. I have the ability to make most of the choices of what software / server packages I wish to install. This includes Active Directory (something I have no experience with). How well does this operate with Windows clients? Am I being too selective and creating unnecessary complication by setting out not to use a Windows Server OS?

  • Windows 7 can ping but can't see device on other subnet

    - by user192702
    I have two Windows 7 machines on two different subnets, but one of them is unable to reach a NAS. The topology is as follows. Any idea why this is the case? Are there some Windows settings I need to apply?
      Subnet 1 - PC 1, NAS
      Subnet 2 - PC 2
    PC 1 is able to do the following: load the admin page in the browser; show the NAS under Windows Explorer > Network; access the NAS when \\ is typed into Windows Explorer. PC 2 is unable to do any of the above three things. It can, however, ping the NAS and get a response.

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue and need help with the decision-making process when it comes to setting up my cluster configuration, consisting of a line of Ubuntu Servers (12.04). We currently have a primary node, which resides in the US within a datacenter; we are going to be using it for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin, it will be set up as a sort of pseudo-cluster using Virtualmin's Cluster Modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:
      - Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04
      - AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
      - P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 x 25 GB RAID configuration (one disk in use for the host operating system)
    The first machine is currently IN USE and is serving virtual hosts off a sub-domain. My question is this: how can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for sharing resources, redundancy (HA?), and NFS with the two RAID disks, without having to FORMAT the secondary node and start fresh by moving all my services onto a DRBD network drive or something similar and then restoring all active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand, all services need to be on an NFS share so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD config). But my issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, and without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks! (A minimal NFS sketch follows below.)
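
    For the NFS piece specifically, a minimal sketch (the package name is the Ubuntu 12.04 one; the export path, network range, and hostname are placeholders, and this says nothing about the Heartbeat/DRBD side):

      # On the node that will hold the shared data: install the server and export a directory.
      sudo apt-get install nfs-kernel-server
      echo '/srv/cluster 192.168.0.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
      sudo exportfs -ra

      # On the node that takes over: mount the export where the services expect to find it.
      sudo mount -t nfs server2.mywebsite.com:/srv/cluster /srv/cluster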

  • What You Said: How You Sync and Organize Your Bookmarks

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your favorite techniques for synchronizing and organizing your browser bookmarks. Now we're back to highlight the most popular techniques, tricks, and services. Far and away, Xmarks was the most frequently mentioned service. For the unfamiliar, Xmarks is a bookmark syncing service that is packed with features. Not only does Xmarks sync bookmarks between browsers and/or computers, it also supports iOS, Android, and BlackBerry (mobile integration requires an upgrade to the premium account). In addition to syncing the bookmarks, it also integrates with your search results so you can see how other Xmarks users have ranked sites within your search results. Steve-O-Rama highlights one of the many benefits of Xmarks: Xmarks seems to do the job for me. I've got a handful of machines, each with three or four browsers; over the years, I've accumulated thousands of bookmarks, stretching across many areas of interest. Trying to keep them all straight had been quite a struggle until Xmarks came along. I freaked out when the company was acquired by LastPass, but was subsequently relieved when they continued the free service. Xmarks has a very nice web interface to access, export, search, organize, and do many other things with your bookmarks. In this way, even if I'm on the go, I can access every bookmark I've made. Even so, I still make occasional local backups, directly from the browsers to a network folder. Delicious bookmarks, another veteran of the bookmark syncing services, had a fair number of supporters among the HTG readership.

  • How do I use an SSH public key from a remote machine?

    - by kubi
    Setup: The public keys are set up on a Macbook. I can do a passwordless push to GitHub and to a server (iMac) on the local network. The problem: I know the keys are at least partially set up correctly, because everything works if I'm sitting at the Macbook. What doesn't work is when I SSH into the Macbook remotely and attempt to push to GitHub or to the iMac server: I'm prompted to enter my SSH key passphrase. What am I missing to enable pushing to GitHub from the Macbook while logged in remotely from the iMac?
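
    A hedged sketch of one common workaround (not necessarily the missing piece here): start an agent inside the remote session on the Macbook and add the key once, so later pushes in that session stop prompting for the passphrase. ~/.ssh/id_rsa is an assumed key path.

      # Start an ssh-agent for this remote shell session and load the key into it;
      # the passphrase is entered once and cached for subsequent git pushes.
      eval "$(ssh-agent -s)"
      ssh-add ~/.ssh/id_rsa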

  • How to netboot Ubuntu running inside VirtualBox on a Mac Air

    - by murungu
    Having configured a virtual machine for Ubuntu in VirtualBox on my Mac Air, I need to install the Ubuntu OS itself. I have selected the hard drive as the primary boot device and the network as the secondary boot device, so I am not prompted for an Ubuntu installation disk at boot time. It attempts to netboot but is unable to locate Ubuntu, and I cannot find anywhere in the configuration where I can explicitly specify where to find an Ubuntu image, so I assume it reverts to some default location and fails. Has anybody out there ever successfully installed Ubuntu on VirtualBox on their Mac Air? What steps did you take to get it right?
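
    A hedged sketch of the more usual route (attaching the installer ISO instead of netbooting); the VM name, storage controller name, and ISO path are placeholders and must match what VirtualBox shows for your VM:

      # Attach the downloaded Ubuntu ISO to the VM's optical drive, then boot the VM.
      VBoxManage storageattach "UbuntuVM" --storagectl "IDE Controller" \
          --port 1 --device 0 --type dvddrive --medium ~/Downloads/ubuntu.iso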
