Search Results

Search found 30511 results on 1221 pages for 'linux networking'.


  • What firewall Linux distro appliance could track internet usage per device in my home?

    - by GregH
    Hello, Anyone know of a community edition/open source/free firewall/gateway software product that I could install onto an old PC to act as my firewall/gateway/proxy etc., BUT which has the power to track internet usage per device in my home? So:
    a) Mandatory - Track internet usage for devices on my home network on a per-device basis (e.g. various PCs/Xbox etc.)
    b) Mandatory - Report/graph that would give a breakdown of internet usage, per device (e.g. IP address), per day
    c) Desirable - as in b) above, but per hour
    d) Desirable - realtime graph (e.g. 5-minute measurement intervals or something) that shows current internet usage per device
    e) Mandatory - Handles all internal-to-internet requests for all protocols (e.g. HTTP, HTTPS, Xbox etc.)
    f) Mandatory - No explicit settings required in clients - i.e. transparent monitoring (for both HTTP and non-HTTP traffic like Xbox, Skype etc.)
    g) Mandatory - Easy "appliance"-like installation onto a dedicated low-spec PC
    Thanks in advance
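    Whatever appliance distro ends up being used, per-device accounting on a Linux gateway usually boils down to per-host iptables counters that a graphing tool reads periodically. A minimal sketch of the counter side, assuming a 192.168.1.x LAN behind the box (the addresses and chain name are placeholders, not from the original question):

      #!/bin/sh
      # Dedicated accounting chain; all forwarded traffic passes through it.
      iptables -N ACCT
      iptables -I FORWARD -j ACCT

      # One pair of rules per LAN device: bytes uploaded and bytes downloaded.
      for ip in 192.168.1.10 192.168.1.11 192.168.1.20; do
          iptables -A ACCT -s "$ip"    # upload counter for this device
          iptables -A ACCT -d "$ip"    # download counter for this device
      done

      # Dump exact packet/byte counters; a cron job can log these per hour or per day.
      iptables -L ACCT -v -n -x

    Rules with no target simply count and fall through, so this adds no filtering policy of its own.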

    Read the article

  • How to get an ARM CPU clock speed in Linux?

    - by MiKy
    I have an ARM-based embedded machine built around an S3C2416 board. According to the specifications available to me there should be a 533 MHz ARM9 (ARM926EJ-S according to /proc/cpuinfo), however the software running on it "feels" slow compared to the same software on my Android phone with a 528 MHz ARM CPU. /proc/cpuinfo tells me that BogoMIPS is 266.24. I know that I should not trust BogoMIPS regarding performance ("Bogo" = bogus), but I would like a measurement of the actual CPU speed. On x86, I could use the rdtsc instruction to read the time stamp counter, wait a second (sleep(1)), and read the counter again to get an approximation of the CPU speed; in my experience, this value was close enough to the real CPU speed. How can I find the actual CPU speed of a given ARM processor?
    Update: I found a simple Pi calculator, which I compiled for both my Android phone and the ARM board. The results are as follows:
    S3C2416
    # cat /proc/cpuinfo
    Processor : ARM926EJ-S rev 5 (v5l)
    BogoMIPS : 266.24
    Features : swp half fastmult edsp java ...
    # ./pi_arm 10000
    Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave ... 8.50 sec. (real time)
    Android
    # cat /proc/cpuinfo
    Processor : ARMv6-compatible processor rev 2 (v6l)
    BogoMIPS : 527.56
    Features : swp half thumb fastmult edsp java
    # ./pi_android 10000
    Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave ... 5.95 sec. (real time)
    So it seems the ARM926EJ-S is slower than my Android phone, but not half as fast as the BogoMIPS figures would suggest. I am still unsure about the clock speed of the ARM9 CPU.
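    If the kernel on the board has a cpufreq driver for the SoC, it exposes the clock directly through sysfs; a quick check, assuming that interface is present on that kernel (it may not be on every embedded build):

      # Current clock as reported by the cpufreq driver (values are in kHz).
      cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq

      # Limits and governor: a conservative governor or a low scaling_max_freq
      # would explain a 533 MHz part behaving like a slower one.
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
      cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor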

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing a new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc.). The HW is an Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM. One of the most important features will be a Samba server, and we have decided to run it as a virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache), then raided with md, and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel. One important thing to find out is whether the virtualization layer introduces significant overhead into disk I/O. So far I've been able to get up to 180MB/s in the VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big, but it is more than the network can handle, so I do not care so much. The interesting thing is that disk reads are not buffered in the VM unless I create and mount an FS on the disk or use the disk somehow. Simply put, let's do a dd to read the disk for the first time (/dev/vdd is an old Raptor disk; 70MB/s is its real speed):
    [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
    2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s
    Buffers: 14444 kB
    Rereading the data shows that it is cached somewhere, but not in the buffers of the VM. Also, the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more than the test file):
    [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
    2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s
    Buffers: 14444 kB
    [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
    2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s
    Buffers: 14444 kB
    Now let's mount the FS on /dev/vdd and try the dd again:
    [root@localhost ~]# mount /dev/vdd /mnt/tmp
    [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
    2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s
    Buffers: 2574592 kB
    [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
    2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s
    Buffers: 2574592 kB
    While the first read was about the same, all 2.6GB got buffered and the next read ran at 1.7GB/s. And when I unmount the device:
    [root@localhost ~]# umount /mnt/tmp
    [root@localhost ~]# cat /proc/meminfo | grep Buffers
    Buffers: 14452 kB
    [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
    2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s
    Buffers: 14468 kB
    bcache was disabled while testing, and the results are the same on faster (newer) HDDs and on an SSD (except for the initial read speed, of course). To sum up: when I read from the device via dd for the first time, it gets read from the disk. The next time I reread it, it is cached in the host but not in the guest (that's actually the same issue, more on that later). When I mount the filesystem but read the device directly, it gets cached in the VM (via buffers). As soon as I stop "using" it, the buffers are discarded and the device is no longer cached in the VM. When I looked at the Buffers value on the host I realized that the situation is the same there: the block I/O gets buffered only while the disk is in use, which in this case means "exported to a VM".
    On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers. I know this is not important, as the disks will be mounted all the time, but I am curious and would like to know why it works like this.
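    This matches the usual explanation that the kernel invalidates a block device's page cache when the last process holding the device open closes it; a mounted filesystem keeps the device open, so the cache survives between dd runs. A quick way to test that theory without mounting anything is to hold a file descriptor on the device yourself (a sketch; the descriptor number is arbitrary):

      # Keep the block device open so its page cache is not invalidated between runs.
      exec 3< /dev/vdd

      dd if=/dev/vdd of=/dev/null bs=256k count=10000   # first read, from disk
      dd if=/dev/vdd of=/dev/null bs=256k count=10000   # should now be served from Buffers
      grep Buffers /proc/meminfo

      # Closing the descriptor releases the device; the cache is dropped again.
      exec 3<&-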

    Read the article

  • Ideas for SVN/SQL/PHP/Linux Dev Environment Supporting Multiple Isolated Environments?

    - by jpganz18
    I am trying to create a "dev" environment for my users. In that environment they would each have access to their own account for phpMyAdmin, SQL, Subversion and FTP, which is not a big problem, but I would like to emulate each one being on their own server. I mean that they could change the PHP configuration (for example) and the change would apply only in their own environment. Any idea how to do this? Do I have to do something "special" in my server installation, or something like that?
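    One common way to get "own server" semantics without full virtual machines is a separate PHP-FPM pool per user, each running under that user's account with its own PHP settings, with the web server routing each user's vhost to the matching pool. A minimal sketch of one such pool (the user name, socket path and values are placeholders, not taken from the question):

      ; /etc/php-fpm.d/alice.conf - hypothetical pool for user "alice"
      [alice]
      user = alice
      group = alice
      listen = /var/run/php-fpm-alice.sock
      pm = dynamic
      pm.max_children = 5
      pm.start_servers = 2
      pm.min_spare_servers = 1
      pm.max_spare_servers = 3
      ; per-user PHP configuration, isolated from every other pool
      php_admin_value[memory_limit] = 128M
      php_admin_value[open_basedir] = /home/alice/sites

    Heavier isolation (per-user chroots, or containers such as OpenVZ/LXC) is the next step up if users also need to install their own packages.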

    Read the article

  • Networking DOS within Windows 7 XP Mode, with a Windows XP/7 Networked Share

    - by theonlylos
    For a while now, one of my clients has been stuck with Corel Paradox 4.0 (it used to be the biggest database system in the DOS days, until Microsoft released Access in the early '90s), so for a while I managed to keep it on life support on Windows XP for a few years; however, since switching to Windows 7 x64, I've had to resort to using XP Mode as the sandbox to keep it up and running. While I am able to run Paradox as usual in XP Mode, I'm having a serious issue: if I try connecting the install to the network share (which is located on the Windows 7 portion of the system), Paradox keeps exiting because it says the serial number is invalid. Now, I know for a fact that this is an issue with the virtual loopback adapter and also with having the VM linked to the physical ethernet adapter -- and while I have solved this issue before, most of my fixes have been bandages, since after a few weeks the issue pops up again. Long story short, I wanted to ask if there is a permanent way to link a DOS program to a network share address. For example, when I try using \\tsclient\paradox (the Windows 7 address) I keep getting an error saying I need a valid network address. I've tried mapping that folder to various drive letters such as P:\Paradox -- but for some reason that keeps failing over time. For what it's worth, Paradox uses a .SOM file to store the network settings; however, it isn't editable in Notepad but rather is controlled by a wizard in Paradox. But if that extension rings any bells, I'd welcome any insights.

    Read the article

  • What is the easiest/simplest way to change the HD on a Linux server?

    - by ArmlessJohn
    Hello. I have a machine running Ubuntu Server that has been presenting some HD-related problems. Instead of reinstalling and reconfiguring everything (and to save time) we'd like to copy everything from the current hard drive to a new one and start using it. We only have a single hard drive with a main partition and a swap partition. What tools or methods would you recommend for replacing a hard drive with minimum difficulty and chance of problems? Thank you.
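    For a single-disk box, a raw clone from a live CD (so nothing on the source is being written during the copy) is usually the lowest-effort route. A sketch, assuming the old disk shows up as /dev/sda and the new one as /dev/sdb — check with fdisk -l first, the device names here are placeholders:

      # Boot a live CD/USB so neither disk is mounted, then identify the drives.
      fdisk -l

      # Raw copy of the whole disk: MBR, partition table, main partition and swap.
      dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

      # If the new disk is larger, grow the main partition afterwards
      # (e.g. with gparted) and recreate/resize swap if needed.

    Because a byte-for-byte clone preserves filesystem UUIDs and the boot sector, the copy should normally boot as-is; if it doesn't, reinstalling GRUB onto the new disk from the live environment is the usual fix.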

    Read the article

  • How do you add a certificate for WLAN in Linux, at the command-line?

    - by Neil
    I'm using Maemo on a Nokia n810 Internet tablet, and when given a list of installed certificates to choose from when connecting to a PEAP wireless network, it's always blank. I've already installed a couple of certificates through the gui on the device, and only the certificate authorities show up. I've confirmed that Maemo's connection software that handles certificates is buggy, in such a way that certificates are never added, or properly added certificates cannot be found. Is there a way to add WLAN certificates at the command-line, and connect to a wireless network at the command-line as well? I used to use iwconfig to connect, but I never used it with PEAP. Note: I have nothing in /etc/ssl/certs
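    If Maemo's connection manager won't cooperate, the underlying pieces may still be usable: on a typical Linux system a PEAP network with an explicit CA certificate can be brought up from the shell with wpa_supplicant. A sketch, assuming wpa_supplicant and a DHCP client are available on the device and the CA file has been copied somewhere readable (the SSID, identity and paths are placeholders):

      # /etc/wpa_supplicant.conf - minimal PEAP/MSCHAPv2 entry
      network={
          ssid="ExampleWLAN"
          key_mgmt=WPA-EAP
          eap=PEAP
          phase2="auth=MSCHAPV2"
          identity="username"
          password="secret"
          ca_cert="/home/user/certs/corp-ca.pem"
      }

      # then, from the shell:
      #   wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
      #   udhcpc -i wlan0     (or dhclient wlan0, whichever DHCP client the device ships)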

    Read the article

  • How can I permanently fix my date synchronization problem in Linux?

    - by gr33d
    Ubuntu 7.10 server i386; the clock/date/time won't stay in sync. Are there log files I can view to tell when the clock changes? For a temporary fix, I created a file in /etc/cron.hourly:
    #!/bin/sh
    ntpdate time.nist.gov
    However, this still leaves a potential hour of unchecked time. Is there a cron.minutely? That would still leave a potential minute of unchecked time. I have read about CMOS battery problems, but what if that does not fix it? I'd like to be able to troubleshoot this as a purely software problem. My squid logs show dates back in 2005 when the clock changes, and my time-sensitive access controls are skewed and end up allowing users to surf prohibited websites during business hours.
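    Rather than ever more frequent ntpdate runs, the usual answer is the ntp daemon, which disciplines the clock continuously and logs every adjustment. A sketch for that setup (time.nist.gov kept from the cron job above; any pool server would do, and the init script name may differ by package version):

      # Install the NTP daemon; one-off ntpdate runs then become unnecessary.
      sudo apt-get install ntp

      # /etc/ntp.conf - point it at one or more servers; "iburst" speeds up the first sync:
      #   server time.nist.gov iburst
      #   server pool.ntp.org iburst

      # Restart it (the script is called ntp or ntp-server depending on the package version).
      sudo /etc/init.d/ntp restart

      # Verify that it is syncing and watch the offsets; repeated large steps in the log
      # point back at a hardware problem (CMOS battery, broken RTC).
      ntpq -p
      grep ntpd /var/log/syslog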

    Read the article

  • Proper way to change the MAC address in a Linux VM?

    - by HappyDeveloper
    I tried to change the MAC address in an Ubuntu VM (VirtualBox), but after that it threw lots of errors during boot and then I had no internet connection. Then I saw that the interface had been renamed to eth1, so I edited /etc/network/interfaces to change eth0 to eth1, rebooted (I didn't know how to restart the network), and boot was now faster and the internet worked fine. But now, every time I log in, I get one or two error messages that say nothing; they only ask me if I want to report them. So I was wondering: is there a proper way to change the MAC address that avoids these issues?
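    The rename to eth1 is the classic symptom of udev's persistent-net rules: the old MAC stays bound to eth0, so the "new" card gets the next name. Assuming a stock Ubuntu guest, a cleaner sequence is to set the MAC on the VirtualBox side (or via ifupdown) and then reset that rules file; a sketch:

      # Inside the guest: see which MAC address is pinned to which interface name.
      cat /etc/udev/rules.d/70-persistent-net.rules

      # Either edit the eth0 line to carry the new MAC, or delete the file entirely;
      # udev regenerates it with the current MAC on the next boot.
      sudo rm /etc/udev/rules.d/70-persistent-net.rules
      sudo reboot

      # Alternative: keep VirtualBox's MAC and override it only at the Linux level,
      # by adding this under the eth0 stanza in /etc/network/interfaces:
      #   hwaddress ether 08:00:27:12:34:56    # placeholder address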

    Read the article

  • Where is the best location to keep shared-developer website files in the Linux hierarchy?

    - by Tchalvak
    I just started hosting files for a website on my server, and I'm not sure where is an appropriate place to keep them. At the moment, I have them in /var/www/name.of.virtualhost.site/www/. That's obviously not secure, because anything alongside the final public /www/ folder is also reachable, since the whole of /var/www/ is already being served up. For example, /var/www/name.of.virtualhost.site/docs/site_policies.txt is accessible via something like defaultsite.com/name.of.virtualhost.site/docs/site_policies.txt. So where is a good place to store the files that make up a website? (When it's a site that only I'm developing, I can obviously just stick them in /home/my_username/sites/name.of.virtualhost.site/, but that doesn't work well when I want other developers to be working on the site's files as well.) I'm running a LAMP stack, not that I expect it to matter.
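    A common arrangement is a tree under /srv/www (or a restructured /var/www) where each vhost's DocumentRoot points only at a public/ subdirectory, so docs/ and friends stay unreachable, owned by a shared developer group with the setgid bit so new files inherit that group. A sketch, with the group name as a placeholder:

      # Shared, group-writable site tree; only public/ is ever served.
      sudo groupadd webdev
      sudo usermod -a -G webdev my_username          # repeat for the other developers
      sudo mkdir -p /srv/www/name.of.virtualhost.site/public
      sudo chgrp -R webdev /srv/www/name.of.virtualhost.site
      sudo chmod -R 2775 /srv/www/name.of.virtualhost.site  # setgid: new files keep the group

      # The Apache vhost then points only at the public directory:
      #   DocumentRoot /srv/www/name.of.virtualhost.site/public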

    Read the article

  • Setting up a static IP address (public) in Ubuntu

    - by ycseattle
    I have a business-class internet connection and need to set up a static IP address for a machine. I searched online and only found how to set up static local IP addresses (like 192.168.x.x). I tried the same technique and only set the IP address and netmask, but after restarting networking the computer could not connect to the outside world. This is what I did:
    1) edit /etc/network/interfaces
    iface eth0 inet static
    address 173.10.xxx.xx
    netmask 255.255.255.252
    2) edit /etc/resolv.conf
    search wp.comcast.net
    nameserver xx.xx.xx.xxx
    nameserver xx.xx.xx.xxx
    3) restart networking
    sudo /etc/init.d/networking restart
    The last step didn't report an error and ifconfig shows the IP address was set, but the server cannot connect to the outside world; ping google.com reports "unknown host google.com". Any ideas?
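    The interfaces stanza above has no default route, which matches the symptom exactly: without a gateway nothing outside the /30 is reachable, so even the DNS lookup fails. A sketch of the missing piece; the gateway is a placeholder, since it has to come from the ISP's paperwork (on a /30 it is normally the other usable address in the block):

      # /etc/network/interfaces
      auto eth0
      iface eth0 inet static
          address 173.10.xxx.xx
          netmask 255.255.255.252
          gateway 173.10.yyy.yy    # placeholder: the ISP-supplied router address

      # apply and verify
      sudo /etc/init.d/networking restart
      route -n           # should now show a 0.0.0.0 default route via the gateway
      ping -c3 8.8.8.8   # tests routing without DNS; if this works but names fail, fix resolv.conf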

    Read the article

  • Removing grub and getting a dual boot of Linux Mint and Win 8.1 working after failed attempt

    - by ThroatOfWinter57
    I gave the details of my problem at reddit: http://www.reddit.com/r/linuxquestions/comments/27qrun/more_specific_questions_about_failed_win_81mint/
    tl;dr: I deleted the /, /home, and swap partitions I made for Mint after realizing my installation couldn't be booted into, and gave the space back to my Windows partition. Running boot-repair from my Mint live session messed things up. Now I can't even boot the live-session USB because GRUB is left over. Windows 8.1 does work, though.

    Read the article

  • Smartcards for storing gpg/ssh keys (Linux) - what do I need?

    - by Ninefingers
    Hi All, I'm interested in storing my SSH keys and GPG keys on a smartcard for added security. However, I'm a bit uncertain on a few points, which are as follows: How many keys can I get on a card? I assume both SSH and GPG can store keys on the card. Is there a limit to key size? I see a lot of cards saying they support 2048-bit keys; what about larger sizes? Hardware: can anyone recommend a card/reader combination that works well? I've done a fair amount of research and it seems PC/SC readers can be a bit iffy - is this your experience? Have I missed anything I should be asking? Are there any other hurdles? I'm aware FSF Europe gives away cards with membership - I'm not sure I want to join, but... are these cards any good?
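    For reference, OpenPGP smartcards typically hold three keys (signing, encryption, authentication), and on the GnuPG side the workflow, once a supported reader/card is attached, looks roughly like this (a sketch; KEYID is a placeholder, and using the card for SSH assumes gpg-agent is run with its ssh support enabled):

      # Check that GnuPG can see the card at all (also shows the key slots and sizes).
      gpg --card-status

      # Move existing subkeys onto the card; the private part then lives only on the card.
      gpg --edit-key KEYID
      #   gpg> keytocard        (repeat for each subkey)

      # For SSH: run gpg-agent with ssh-agent emulation so the card's auth key is offered.
      #   eval $(gpg-agent --daemon --enable-ssh-support)
      #   ssh-add -l            # should list the card-backed key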

    Read the article

  • Is there a maximum of open files per process in Linux?

    - by Malax
    My question is pretty simple and is actually stated in the title. One of my applications throws "too many open files" errors at me, even though the limit for the user the application runs as is higher than the default of 1024 (lsof -u $USER reports 3000 open fds). Because I cannot imagine why this happens, I guess there might be a per-process maximum. Any idea is much appreciated!
    Edit: some values that might help...
    root@Debian-60-squeeze-64-minimal ~ # ulimit -n
    100000
    root@Debian-60-squeeze-64-minimal ~ # tail -n 4 /etc/security/limits.conf
    myapp soft nofile 100000
    myapp hard nofile 1000000
    root soft nofile 100000
    root hard nofile 1000000
    root@Debian-60-squeeze-64-minimal ~ # lsof -n -u myapp | wc -l
    2708
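    The nofile limit is indeed enforced per process, and a running process keeps the limits it started with, so entries added to limits.conf later (or a root shell's generous ulimit) do not necessarily apply to it. The kernel-wide ceilings live in /proc. A few checks worth running against the actual process (PID is a placeholder):

      # Limits of the running process itself - this is what actually matters.
      cat /proc/PID/limits | grep "open files"

      # How many descriptors that process really has open right now.
      ls /proc/PID/fd | wc -l

      # System-wide ceilings: total open files, and the max any single process may request.
      cat /proc/sys/fs/file-max
      cat /proc/sys/fs/nr_open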

    Read the article

  • Can SATA be used to connect computers?

    - by André
    Can SATA be used to connect two computers together, just like a crossover Ethernet cable would? I know SATA has no "networking" features, and even though a controller may have multiple ports, the drives don't "see" each other; in SATA, one device acts as the host (the computer) and the other is some kind of "client" (the storage drive). But still, has anyone attempted to write a kernel module that would make one computer appear as a "client" (so that the host's SATA controller detects it as a standard hard drive) and then set up something like a pseudo-Ethernet link, or a very high-speed serial link (and then run pppd on it and do networking)? Note: I know this is an unprofessional and totally stupid idea, I'm just asking out of curiosity.

    Read the article

  • How to get gigabit network speeds on Windows XP?

    - by JB
    We've just installed gigabit switches at work, and things on the Linux side are going well. Our Linux boxes, which use an Intel Corporation 82566DM-2 Gigabit NIC (according to lspci), consistently get over 900 Mbits/sec:
    iperf -c ipserver
    ------------------------------------------------------------
    Client connecting to ipserver, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [ 3] local 192.168.40.9 port 39823 connected with 192.168.1.115 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-10.0 sec  1.08 GBytes  929 Mbits/sec
    We have a bunch of Windows XP 64-bit machines that use Broadcom NetXtreme 57xx cards. I spent around a day trying to get equivalent speeds on them, but couldn't get above 200 Mbits/sec. I noticed the Windows iperf tests said that the TCP window size was 8 KB by default (as opposed to 16 KB on Linux), so I modified my test to reflect that. Still no love. I went to Broadcom's site, downloaded the latest drivers for the card and installed them. Still no love. However, finally, I tried a 64 KB window size with the new drivers, and finally an improvement!
    $ iperf -c ipserver -w64k
    ------------------------------------------------------------
    Client connecting to ipserver, TCP port 5001
    TCP window size: 64.0 KByte
    ------------------------------------------------------------
    [ 3] local 192.168.40.214 port 1848 connected with 192.168.1.115 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0-10.0 sec  933 MBytes  782 Mbits/sec
    Much better, but still not really taking advantage of the full capabilities of the network. If the Linux box can reach 950 Mbits/sec consistently, this box should be able to as well. Also, if you're wondering about the medium, this is over the same cable... I'm switching back and forth. Any suggestions or ideas would be really welcome. Thanks!

    Read the article

  • Hyper-V Cluster with failover - NETWORKING

    - by Adam
    Hi, we are looking to set up a 3-node Hyper-V cluster with live migration and failover using:
    3 x Dell R710s with dual quad-core Xeons & 128 GB RAM & 6 NICs in each
    1 x Dell MD 3220i SAN
    We will be running this setup from a data center and so co-locating our kit. Can anybody explain how we should set up the network connections to make the system redundant? We have looked into this great article but are not sure how to get a 3-server setup correct and reliable: http://faultbucket.ca/2011/01/hyper-v-failover-cluster-setup/. I believe we need network connections for: live migration, heartbeat, management, Hyper-V, etc. I assume that, as we are running it from a DC, all IPs will have to be public IPs? Our AD servers will be VMs, one on each Hyper-V server, and set up not to be HA.

    Read the article

  • Should root ever own files in my (linux) home directory?

    - by Darren Cook
    This question started off asking why my history file wasn't working properly. Then I noticed it was -rw------- 1 root root and hadn't been updated since 2012-09-11. I changed the ownership, problem fixed. But now I see some other files are owned by root: .gitconfig .pearrc .viminfo Can I safely change them to be owned by my normal user, not root? I'm scratching my head trying to work out if there is a downside, or a security consequence. Losing seven weeks history is actually quite painful, because I lean on it a lot (e.g. to remind how I last did an archive). Would it be reasonable to set up a cron job to email me if it finds any files in my home directory owned by anyone else but me? Rephrased: is there ever a good reason for root to own a file in my home directory?
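    Files like .viminfo, .gitconfig or a history file usually end up root-owned because root ran vim/git (or a shell) in that home directory at some point, e.g. after an su without a login shell; it is safe to chown them back, and there is no routine reason for root to own anything there. For the watchdog idea, a one-line find in the user's crontab would do it (a sketch; the schedule is a placeholder, and cron mails any non-empty output to the local user or MAILTO):

      # crontab entry (hypothetical schedule - daily at 07:00):
      #   0 7 * * *  find "$HOME" ! -user "$LOGNAME" -ls

      # Run interactively to check the current state:
      find "$HOME" ! -user "$LOGNAME" -ls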

    Read the article

  • Where are the TweetDeck settings files located in (Ubuntu) Linux?

    - by Philipp Andre
    Hi everybody, I'm running Windows as well as Ubuntu and would like to sync both TweetDeck installations via Dropbox. Therefore I need to locate two files:
    td_26_[username].db
    preferences_[username].xml
    On Windows I found them under the folder C:\Users\[account]\AppData\Roaming\TweetDeckFast.[random string]\Local Store\, but I can't find them in my Ubuntu installation. Does anyone know where these files are located? Best regards, Philipp
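    TweetDeck at that time was an Adobe AIR application, and on Linux AIR apps keep their per-application "Local Store" under a hidden directory in the home folder rather than anything resembling AppData. Rather than guess the exact path, a search like the following should locate both files (a sketch):

      # Look for TweetDeck's database and preferences anywhere under the home directory.
      find ~ \( -name 'td_26_*.db' -o -name 'preferences_*.xml' \) 2>/dev/null

      # Or search for the application's own storage directory by name:
      find ~ -maxdepth 4 -type d -iname '*TweetDeckFast*' 2>/dev/null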

    Read the article
