Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • How can I make IPv6 on OpenVPN work using a tap device?

    - by Lekensteyn
    I've managed to set up OpenVPN for full IPv4 connectivity using tap0. Now I want to do the same for IPv6. Addresses and network setup (note that my real prefix is replaced by 2001:db8):

        2001:db8::100:0:0/96     my assigned IPv6 range
        2001:db8::100:abc:0/112  OpenVPN IPv6 range
        2001:db8::100:abc:1      tap0, server side (set as gateway on the client)
        2001:db8::100:abc:2      tap0, client side
        2001:db8::1:2:3:4        gateway for the server

    The home laptop (tap0: 2001:db8::100:abc:2/112, gateway 2001:db8::100:abc:1/112; running Kubuntu 10.10, OpenVPN 2.1.0-3ubuntu1) reaches the internet over wifi through a router; the VPS (eth0: 2001:db8::1:2:3:4/64, gateway 2001:db8::1; tap0: 2001:db8::100:abc:1/112; running Debian 6, OpenVPN 2.1.3-2) terminates the other end of the tunnel. The server has both native IPv4 and IPv6 connectivity; the client has only IPv4. I can ping6 to and from my server over OpenVPN, but not to other machines (for example, ipv6.google.com). Using tcpdump on both the server and client, I can see that packets are actually transferred over tap0 to eth0. The router (2001:db8::1) sends a neighbor solicitation for the client (2001:db8::100:abc:2) to eth0 after it receives the ICMP6 echo request. The server does not respond to that solicitation, which causes the ICMP6 echo request not to be routed to its destination. How can I make this IPv6 connection work?
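
    A plausible culprit, given that the router's neighbor solicitations for 2001:db8::100:abc:2 go unanswered, is that the hosting network only delivers the /96 to eth0 via neighbor discovery, so the server must answer solicitations on behalf of tunnel clients. A minimal NDP-proxy sketch, assuming the addresses above and a standard iproute2 install:

        # on the VPS: forward IPv6 and proxy neighbor discovery on eth0
        sysctl -w net.ipv6.conf.all.forwarding=1
        sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
        # answer solicitations for the client's tunnel address
        ip -6 neigh add proxy 2001:db8::100:abc:2 dev eth0

    Each client address needs its own proxy entry; OpenVPN's learn-address hook is a common place to add and remove them automatically.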


  • Migrating WebLogic 10.3.0 to new host. Slow managed server startup times

    - by wadevondoom
    We are migrating our Blue Martini Commerce application (only supported on WebLogic 10.3.0) to a new host (Red Hat 6.3 on a VMware ESX VM). We are seeing extremely slow startup times for our managed servers, roughly 20x slower than current production: the Publish managed server takes ~30-45 seconds in current production, but ~10 minutes in the new environment. The setup uses the same domain structure, the same setup files, and the same JVM as current production: jdk1.6.0_33 on 64-bit architecture. We used the generic 64-bit WebLogic installer and the pack/unpack utilities to migrate the domain. The JAVA_OPTS to start this server are:

        -d64 -Xms256m -Xmx512m -XX:PermSize=48m -XX:MaxPermSize=256m

    The sysadmins have checked /etc/sysctl.conf and /etc/limits.conf to ensure we are not hitting some kind of process limit. As I am not sure what this managed server does from a Blue Martini perspective during startup, I also had the DBA check that Oracle RAC (11.2.0.3) wasn't hitting a process limit and that there was no TNS listener issue. The new host is locked down quite a bit more strictly, so there are a few differences:

        - Red Hat 6.3 in the new environment, RHEL 5.7 in current
        - SELinux is targeted in the new environment and disabled in current
        - VM in the new environment, dedicated hardware in current
        - iptables disabled in current; it was enabled in new prod, but I had them disable it just in case

    I apologize for not being more specific; I am mostly hoping for some tips, since I do not have the typical root access I would normally have in this environment. I did a few 'kill -3's to look for blocked threads and got nothing. The service works for all intents and purposes; it is just painfully slow. Thank you all in advance for reading, and best regards. Wade
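
    One frequent cause of order-of-magnitude slower startups on a freshly virtualized RHEL 6 box is the JVM's SecureRandom blocking on /dev/random while the VM starves for entropy. This is a guess rather than a confirmed diagnosis, but it is cheap to rule out and needs no root access:

        # values below a few hundred often mean SecureRandom stalls at startup
        cat /proc/sys/kernel/random/entropy_avail
        # common workaround: use the non-blocking pool (note the /./ trick)
        JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"

    If startup time drops back to production levels with that flag, the VM needs a proper entropy source (a hardware RNG passthrough or an entropy daemon).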


  • Web Development - How to access custom host, defined in my hosts file, from another device in the same network

    - by Neara
    Ok, I hope I'll be able to explain the issue I'm experiencing. I'm working on a project that has two parts: one takes all requests from the usual localhost, the other handles requests from myhost.local. Accessing both addresses from my computer works fine. But now I need to test myhost.local on mobile devices connected to the same network. Usually I would just run the server on my computer's IP on the network:

        python manage.py runserver 10.0.0.8:8000

    Then, from any device, going to 10.0.0.8:8000 would show the project I'm working on. However, accessing that IP address now routes me straight to localhost. So, my question is: how do I access myhost.local from another device on the same network? I don't want to change router settings if that can be avoided, because sometimes I work from places where I can't access the router admin. Is there any network setting on my computer that I can change to fix the routing to myhost.local without losing access to localhost as well?
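
    Since a phone never consults the computer's hosts file, the usual workaround is to run a small DNS server on the dev machine and point the device at it. A sketch with dnsmasq, reusing the addresses from the question:

        # /etc/dnsmasq.conf (sketch): answer myhost.local with the dev box,
        # forward everything else to a public resolver
        address=/myhost.local/10.0.0.8
        server=8.8.8.8

        # bind the dev server to all interfaces so both names reach it
        python manage.py runserver 0.0.0.0:8000

    On the mobile device, set the Wi-Fi connection's DNS server to 10.0.0.8. Note that the .local suffix can collide with mDNS on some devices, so a different suffix may behave more predictably.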


  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Hi everyone. I am learning to set up a Dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove them from my ISP's servers, while making them accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with Dovecot version 1 and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself. I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style 'new' folders on the virtual machine, and Dovecot sees these messages. I have two questions:

        1. I had to create the new, cur, and tmp folders for both test accounts manually to get this setup to work. Is there a way to get Dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my Dovecot password file), or am I expected to write a bash script to automate that task?

        2. I would welcome any comments on how this approach could be improved as I learn to set it up. My motivations are, first, to archive and store emails from several hosting providers that impose a cap on server storage, and second, to gain somewhat greater control over email storage without having to set up and administer a mail server from scratch, which I'm not yet prepared to do (this follows the recommendations at https://ssd.eff.org/tech/email).

    Thank you!
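
    On question 1, one approach that avoids the bash script is to deliver through Dovecot's own LDA, which creates missing maildirs on first delivery. A sketch, assuming Dovecot 2.x and the usual Ubuntu binary path:

        # getmailrc destination (sketch): hand messages to dovecot-lda
        [destination]
        type = MDA_external
        path = /usr/lib/dovecot/deliver

        # dovecot configuration (sketch): auto-create folders on delivery
        lda_mailbox_autocreate = yes
        lda_mailbox_autosubscribe = yes

    Dovecot also creates a user's Maildir on first IMAP login when mail_location points somewhere writable, so either path can replace the manual mkdir step.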


  • External Storage for 2TB of backups and 4TB of data RAID level? HW vs Software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, I have some new laptops on the way with much larger drives, and I need a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media, and I would like to build this to handle around 6 TB total so I have some room to grow. Since I'm using a Mac Mini as the server, I need external enclosures that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance isn't a huge concern, since the majority of access from other computers is over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll use my existing two 1 TB drives plus some new 2 TB drives, swapping the 1 TB ones out as I fill up. As to the actual questions:

        - Should I use hardware RAID in some enclosure? If the enclosure dies, I have to find an identical one to get at my data, right? Wouldn't software RAID be better, since I could use any method of connecting the drives to the system?
        - Remember that OS X Server is my OS. If I had to reinstall OS X, could I restore the software RAID easily?
        - What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID, just a single 2 TB drive, since it is already the backup; but the remaining 4 TB would be the only copy of that data, so I should build in some redundancy.
        - Years ago I had a 2 TB RAID 5 array on a cheap RAID PCI card, and when a drive died it took 48 hours to rebuild. Is that crazily slow for a setup of this size, or is it to be expected?
        - Any suggestions for drive enclosures?


  • Wifi Works with Android and Windows 8 but not Linux and Win 7

    - by eramm
    Support has told me that our company-wide wifi network is set up to support mobile phones only. However, it doesn't make sense to me that they can identify a mobile device as such; rather, they have set up the access point to use a protocol that happens to be supported on Android and Windows phones. Because the access point supports Windows mobile devices, laptops running Windows 8 can also connect to it (proven). So it stands to reason that, since Android is based on Linux, there must be a way to connect using Linux as well. iwlist shows:

        IEEE 802.11i/WPA2 Version 1
        Group Cipher : TKIP
        Pairwise Ciphers (2) : TKIP CCMP
        Authentication Suites (1) : 802.1x

    Wireshark seems to show that a connection is made to a website to get a certificate, and that a domain controller is used for authentication. Questions:

        1. What protocol could they be using that is supported on Windows Mobile and Android but not on Windows 7 and Linux (Debian)?
        2. What tools can I use to help me discover which protocol I need to support? I have used iwlist and Wireshark, but I was not able to glean too much useful information from them. I can post the results if needed.
        3. Is there an app I can use on my Android phone to help me understand what kind of network it is connecting to?

    I can provide more information if you tell me how to get it. I just don't know what I am looking for.
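
    The ciphers and the 802.1x authentication suite in that iwlist output are the signature of WPA2-Enterprise, most commonly PEAP with MSCHAPv2, which Debian supports through wpa_supplicant; the certificate fetch seen in Wireshark points the same way. A hedged test configuration, where the SSID, domain, and credentials are placeholders:

        # /etc/wpa_supplicant/corp.conf (sketch, placeholder values)
        network={
            ssid="CorpWifi"
            key_mgmt=WPA-EAP
            eap=PEAP
            identity="DOMAIN\\username"
            password="secret"
            phase2="auth=MSCHAPV2"
        }

        # run in debug mode against the wireless interface, e.g. wlan0
        wpa_supplicant -i wlan0 -c /etc/wpa_supplicant/corp.conf -d

    If this associates, the barrier was client configuration rather than any phone-only protocol; Windows 7 would then need the matching "Protected EAP" settings rather than a different network.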


  • How to run Django on localhost with nginx and uwsgi?

    - by user2426362
    How do I run Django on localhost with nginx and uwsgi? This is my config, but it does not work. nginx:

        server {
            listen 80;
            server_name localhost;
            access_log /var/log/nginx/localhost_access.log;
            error_log /var/log/nginx/localhost_error.log;
            location / {
                uwsgi_pass unix:///tmp/localhost.sock;
                include uwsgi_params;
            }
            location /media/ {
                alias /home/user/projects/zt/myproject/myproject/media/;
            }
            location /static/ {
                alias /home/user/projects/zt/myproject/myproject/static/;
            }
        }

    uwsgi:

        [uwsgi]
        vhost = true
        plugins = python
        socket = /tmp/localhost.sock
        master = true
        enable-threads = true
        processes = 2
        wsgi-file = /home/user/projects/zt/myproject/myproject/wsgi.py
        virtualenv = /home/user/projects/zt
        chdir = /home/user/projects/zt/myproject
        touch-reload = /home/user/projects/zt/myproject/reload

    This config works on my Ubuntu server with a normal domain (not localhost), but on localhost it does not. If I open localhost in a web browser, I get the "Welcome to nginx!" page.
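
    The "Welcome to nginx!" page usually means the request was answered by the distribution's stock default server block rather than the one above, since both claim port 80 and the default often wins for bare "localhost". A quick check and fix, assuming the Debian/Ubuntu sites-enabled layout:

        # is the stock default site still enabled?
        ls /etc/nginx/sites-enabled/
        # if so, disable it, test the config, and reload
        sudo rm /etc/nginx/sites-enabled/default
        sudo nginx -t && sudo service nginx reload

    It is also worth confirming that /tmp/localhost.sock exists after uwsgi starts and is readable by the nginx worker user.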


  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this:

        - Local dev machine running NetBeans to make changes
        - Remote server hosting Git projects (the same server runs Apache); two subsites exist, test.FQDN.com and live.FQDN.com

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits to the feature branch would deploy to test.FQDN.com. Once the features have been tested and merged into the master branch, the merge would deploy to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the 'git checkout -f' command to deploy to the test.FQDN.com site; however, that only pulls the master branch, not the new feature branch. I do not have any funding to use a third party to make this work and would prefer to stay within Git, but I have full root access to the web server if there is a package to install that would help control this. Any suggestions would be great!
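
    A post-receive hook can branch on the ref being pushed and check each branch out into its own working tree, which matches the test/live split described. A sketch, with hypothetical deployment paths:

        #!/bin/bash
        # hooks/post-receive (sketch); the /var/www paths are placeholders
        while read oldrev newrev ref; do
            branch=${ref#refs/heads/}
            case "$branch" in
                master)
                    GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master ;;
                *)
                    GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f "$branch" ;;
            esac
        done

    Because the hook names the pushed branch explicitly, 'git checkout -f' no longer defaults to master, which addresses the feature-branch problem described above.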


  • What are the practical limits on file extension name lengths?

    - by GorillaSandwich
    I started using DOS back before Windows, and ever since have taken it for granted that:

        - every file has a file extension, like .txt, .jpg, etc.
        - that extension is always short (usually 3 letters)

    I learned early that the extension is basically just a hint to the OS as to what the content type is. Eventually I got exposed to Mac and Linux, files with no extensions, etc. And of course I've seen shorter extensions, like .rb and .py. I just noticed that markdown-formatted files can have the extension .markdown, and it made me wonder: how long can that extension be? If I make it .mycrazylongextensiontypewoohoo, will certain operating systems or programs choke on the file? Are extension names generally short just for convenience, or is this based on some limitation, legacy or current?


  • Windows 7 - media sharing extremely flakey (XBox 360)

    - by Nathan Ridley
    I have constant headaches with Windows Media Library sharing, which I'm using to share my video library (a whole lot of AVI files on my computer's hard drive) with the XBox 360. I have the whole setup working, so setup and basic configuration is not an issue. The issue is that Windows Media Library frequently fails to notice additions and modifications to the "watched" folders. Not only that, but often after I've added something new to the library, it fails to transmit this new library information to the 360, as is demonstrated by the fact that, using that 360, I navigate to my video library and the new folders don't show up in the list. Often the only way to get the 360 to see the changes is to exit out of "Videos" on the 360, then remove all videos from Windows Media Library, go back into videos on the 360 to confirm that nothing is shared, then re-add a few videos (but not too many) in Windows Media Library, then go back into the videos in the 360. The whole thing is a real pain in the bum and I really wish I could just add all my videos to Windows Media Library and have it keep watch without me having to kick it, and have the 360 pick up changes without me having to constantly rebuild the library. Any ideas?


  • How can I install .NET framework 3.5 on XP machines without internet connection?

    - by EricSchaefer
    I want to install .NET Framework 3.5 on a couple of machines that do not have internet access. If I install the "no internet access" package, it still wants to download something. How can I figure out what is missing? Are there other installation packages? Edit: I would post screenshots, but I cannot upload anything from here and the shots would be in German, so I present only the text translated back to English. Installing the "full redistributable package": at the bottom of the license agreement page it displays this text:

        Size of download file: 67 MB
        Approximate download time: 2h 44min (56 kbit/s), 18min (512 kbit/s)

    It shows this text even though I installed Windows Installer 3.1. After agreeing, it displays the "Download and Installation Status" dialog, with a progress bar labeled "Download:" and the status "Connection to server attempted (try X of 5). Total Download Status: 56MB/67MB". I tried it in a VM with no network connection: it tries 5 times while the progress bar shows progress; later the bar is labeled "Installation:"; later still it reports problems during setup and offers two buttons, "Send Report Later" and "Don't Send". Now here it comes: "Setup completed" and "Microsoft .NET Framework 3.5 has been deinstalled successfully." (Emphasis is mine.) "It is recommended to install current service packs and security updates. More information at Windows Update (link)." Edit 2: Installed Service Pack 3, but still no success.


  • How do I tell if a Python pip install is working correctly or hanging?

    - by wobbily_col
    I am trying to install the Python pandas package in a virtualenv using pip. On my development machine it installed correctly, but now that I am trying on the server it gets so far and then seems to get stuck:

        warnings.warn(LapackSrcNotFoundError.__doc__)
        /apps/PYTHON/2.7.3/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'define_macros'
          warnings.warn(msg)
        non-existing path in 'numpy/distutils': 'site.cfg'
        non-existing path in 'numpy/lib': 'benchmarks'
        Could not locate executable gfortran
        Could not locate executable f95
        Found executable /apps/modules/wrappers/fortran/ifort

    top shows ifort running at 46% CPU. Is there any way I can tell whether this is working correctly (can I check which files it is updating, for example) or whether it is stuck in a loop? It has been running for 40 minutes so far.
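
    Whether ifort is making progress can be checked from outside without interrupting the build. A few standard probes, using a placeholder PID of 1234 for the ifort process:

        # live syscall traffic usually means real work is happening
        strace -p 1234 -e trace=open,read,write
        # which source and object files the compiler currently has open
        lsof -p 1234 | tail
        # pip's scratch tree grows while a build progresses
        watch -n 10 'du -s /tmp/pip-build-*'

    Building numpy, scipy, or pandas from source routinely takes tens of minutes on a busy host, so 40 minutes is not by itself proof of a hang.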


  • Google Calendar sync in Windows 7 64-bit

    - by user11951
    Hi, has anyone got Google Calendar Sync (http://www.google.com/support/calendar/bin/answer.py?hl=en&answer=89955) to work on Windows 7 64-bit with Outlook 2007? I have set up two-way sync, but it only seems to sync from my Gmail calendar to my Outlook calendar. If I make an appointment in my Outlook calendar, it doesn't show up in my Gmail calendar. I tried running it as admin, but no joy. I also tried running it in compatibility mode (XP Service Pack 3), but it still gives me an error when I do a manual sync that says "Could not connect to Microsoft Outlook. error -2146959355". Thanks in advance.


  • How to verify TRIM/discard on encrypted swap?

    - by svarni
    I am using an encrypted swap partition, created via ecryptfs-setup-swap, on my Ubuntu 13.04 computer with an SSD. I have manually set up TRIM for my ext4 root partition (simply by adding the "discard" option in /etc/fstab). I also manually ran fstrim on the root partition prior to booting, and using dstat I saw that for a few seconds several GB/s of data were written to the disk; that was presumably the effect of the trim command. These high write rates are reproducible by deleting huge files and did not occur before setting up TRIM, so I take them as evidence of working trim/discard. Manually enabling TRIM on my root partition has stopped the wear-out of my precious new disk, from 365 used reserved blocks (out of 6176 total) within three months down to 0 additional used reserved blocks within the following three months (data from SMART attributes). Because I want to minimize the wear on my SSD, I would now like to know whether my swap partition (encrypted using ecryptfs-setup-swap) also makes use of the trim/discard option. I tried

        sudo swapon -d -v /dev/mapper/cryptswap1

    but did not receive any particular information ("-v") about whether trim/discard ("-d") was applied; if it were unsupported, I would expect a message. Then I tried

        sudo dd if=/dev/sda6 count=1 bs=1M | xxd | less

    directly after booting, when no swap space was in use, but I saw not only zeroes. I assume that, when looking at freshly trimmed regions, the disk would return zeroes instead of random sectors (and according to some forums, unencrypted swap space is trimmed once at boot). Long story short: are there any ideas on how to test whether TRIM is effectively used for my encrypted swap? And if not, any ideas on how to trim the whole swap space, at least manually, for once? I wouldn't want to tinker with the partition itself, because I don't know whether it would need to be reinitialized as (encrypted) swap; I don't want to be left with an unbootable system :)
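
    For dm-crypt mappings, whether discards are passed through is recorded in the device-mapper table, which is a more direct test than reading raw sectors back. A sketch:

        # look for 'allow_discards' in the mapping's feature flags
        sudo dmsetup table cryptswap1
        # request discard explicitly and watch the kernel's reaction
        sudo swapon -d -v /dev/mapper/cryptswap1
        dmesg | tail

    If allow_discards is absent, the crypttab entry for cryptswap1 needs the discard option before any trim, manual or otherwise, can reach the SSD.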


  • Cannot send email to info@ or support@

    - by user3022598
    I am trying to send email from my Gmail account to a couple of user accounts on my new CentOS server. Email is set up correctly, and I can send and receive for all accounts except info and support. I set up two users, "info" and "support", and I have a PHP form that sends out email; it works fine for every user except those two. To test this, and to make sure nothing had changed since yesterday, I just created a new user "frank" and tried the submit form, and it worked fine. From my Gmail account I can email "frank", but I cannot email "info" or "support". The logs I pulled are as follows, and I think I can see the issue but have no idea how to fix it:

        Aug 15 12:20:55 mail postfix/qmgr[1568]: 1815C20A83: from=, size=1815, nrcpt=1 (queue active)
        Aug 15 12:20:55 mail postfix/local[2270]: 1815C20A83: to=, relay=local, delay=0.28, delays=0.26/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
        Aug 15 12:17:13 mail postfix/qmgr[1568]: 3C18520A7F: from=, size=1818, nrcpt=1 (queue active)
        Aug 15 12:17:13 mail postfix/local[2201]: 3C18520A7F: to=, orig_to=, relay=local, delay=0.28, delays=0.25/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)
        Aug 15 12:15:24 mail postfix/qmgr[1568]: 2F79420A79: from=, size=1813, nrcpt=1 (queue active)
        Aug 15 12:15:24 mail postfix/local[2155]: 2F79420A79: to=, orig_to=, relay=local, delay=0.29, delays=0.27/0.01/0/0.01, dsn=2.0.0, status=sent (delivered to maildir)

    For some reason frank's mail is delivered fine; however, support and info go to root. Why?
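
    The orig_to= field in those log lines is the tell: postfix rewrote the recipient before local delivery, and the stock /etc/aliases shipped on CentOS maps common role names such as info and support to postmaster, which in turn goes to root. A sketch of the check and fix:

        # do the default aliases claim these names?
        grep -E '^(info|support):' /etc/aliases
        # comment the entries out so mail falls through to the real users
        sudo sed -i -E 's/^(info|support):/#\1:/' /etc/aliases
        sudo newaliases

    With the aliases gone, delivery lands in the info and support maildirs the same way it already does for frank.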


  • Dell Vostro Desktop error.

    - by goldenmean
    Faced a strange situation at work today. We have people sitting at a table on opposite sides, with power cables underneath the table. This morning, when the person sitting opposite me banged his feet on the ground, it disturbed the power cable to my PC and the machine turned off. When I turned it on again, it said "No boot device found. Press F1 to setup or F5 to perform test..." There was no physical impact, crash, or fall of the desktop case that could have physically damaged the hard disk. EDIT: the OS is Windows 7. I tried to get the BIOS setup to recognize the drive, but even there it could not find the SATA disk connected to this machine. So then I opened the cabinet, removed the power supply connections and plugged them back in. Tried to reboot: same error, "Boot device not found". It is not recognizing the hard disk. Any ideas about what might be wrong? A hard-disk crash? An OS crash (though it doesn't even get to the point of loading from the disk after the initial BIOS execution, so I doubt this)? Any pointers on how to troubleshoot or solve this error are welcome.


  • Recovering ZFS pool with errors on import.

    - by Sqeaky
    I have a machine that had some trouble with bad RAM. After I diagnosed the problem and removed the offending stick, the ZFS pool in the machine was trying to access drives under incorrect device names. I simply exported the pool and re-imported it to correct this. However, I am now getting this error, and the pool Storage no longer mounts automatically:

        sqeaky@sqeaky-media-server:/$ sudo zpool status
        no pools available

    A regular import says the pool is corrupt:

        sqeaky@sqeaky-media-server:/$ sudo zpool import
          pool: Storage
            id: 13247750448079582452
         state: UNAVAIL
        status: The pool is formatted using an older on-disk version.
        action: The pool cannot be imported due to damaged devices or data.
        config:
                Storage                 UNAVAIL  insufficient replicas
                  raidz1                UNAVAIL  corrupted data
                    805066522130738790  ONLINE
                    sdd3                ONLINE
                    sda3                ONLINE
                    sdc                 ONLINE

    A specific import says the vdev configuration is invalid:

        sqeaky@sqeaky-media-server:/$ sudo zpool import Storage
        cannot import 'Storage': invalid vdev configuration

    I should have 4 devices in my ZFS pool: /dev/sda3, /dev/sdd3, /dev/sdc, and /dev/sdb. I have no clue what 805066522130738790 is, but I plan on investigating further. I am also trying to figure out how to use zdb to get more information about what the pool thinks is going on. For reference, this was set up this way because, at the time this machine/pool was built, it needed certain Linux features and booting from ZFS wasn't yet supported on Linux. The partitions sda1 and sdd1 are in a RAID 1 for the operating system, and sdd2 and sda2 are in a RAID 1 for swap. Any clue on how to recover this ZFS pool?
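
    zdb can read the on-disk labels directly, which usually reveals which device the bare GUID 805066522130738790 stands for, and the import scan can be pointed at stable device names that survive renumbering. A sketch of both probes:

        # dump the ZFS label on the suspected missing member; the path
        # and guid fields show what the pool last recorded for it
        sudo zdb -l /dev/sdb
        # retry the import using persistent names instead of sdX
        sudo zpool import -d /dev/disk/by-id Storage

    If the label on sdb carries that GUID with a stale path, importing by-id sidesteps the device renaming that the RAM swap appears to have triggered.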


  • Network topology for both direct and routed traffic between two nodes

    - by IndigoFire
    Despite its small size, this is the most difficult network design problem I've faced. There are three nodes in this network:

        - a PC running Windows XP with an internal WiFi adapter
        - a base station with both WiFi and a wireless modem (WiModem)
        - a mobile device with both WiFi and a WiModem

    The modem is a low-bandwidth but high-reliability connection. We'd like to use WiFi for high-bandwidth work like file transfers when the mobile is nearby, and the modem for control information. Here's the tricky part: we'd like the WiFi traffic to go directly from the mobile to the PC, as rebroadcasting packets on the same WiFi channel uses double the bandwidth. We can do that with a manual configuration by giving both the PC and the base station two IP addresses for their WiFi interfaces: one on a subnet shared with the mobile, and one on their own subnet. The routes on the PC are set up so that any traffic going to the mobile via WiModem goes out through the secondary IP address, so that return traffic from the mobile also comes back over the WiModem. Here's what that looks like:

        PC
          WiFi 1: 192.168.2.10/24
          WiFi 2: 192.168.3.10/24
          Default route: 192.168.2.1
        Base Station
          WiFi 1: 192.168.2.1/24
          WiFi 2: 192.168.3.1/24
          WiModem: 192.168.4.1/24
        Mobile
          WiFi: 192.168.3.20/24
          WiModem: 192.168.4.20/24

    We'd like to move to having the base station configure the mobile and the PC automatically, as the manual setup becomes a problem once there are multiple mobiles and PCs. This means the PC can only have one IP address and needs to be treated as fairly simple. Is it possible to have a setup, driven by DHCP on the base station, that is efficient with bandwidth?
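
    One way to approximate the manual setup with a single PC address is to have the base station's DHCP service hand out the address plus a route to the WiModem subnet using classless static routes (DHCP option 121), keeping direct WiFi traffic on the shared subnet while routed control traffic goes via the base station. A dnsmasq-flavored sketch with the subnets above:

        # dnsmasq.conf on the base station (sketch)
        dhcp-range=192.168.3.10,192.168.3.100,12h
        # "reach the WiModem net via the base station's WiFi address"
        dhcp-option=121,192.168.4.0/24,192.168.3.1
        # Windows XP predates RFC 3442 and reads Microsoft's option 249 instead
        dhcp-option=249,192.168.4.0/24,192.168.3.1

    Whether this achieves the exact return-path behavior of the two-address scheme would need testing; it does remove the per-PC manual route configuration.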


  • Magical moving desktop icons

    - by Nathan Taylor
    I have encountered a very strange behavior in Windows 7 that I cannot seem to identify and I have never seen or heard of on any system configuration. Whenever I move my mouse to the left-most edge of my primary display (centered in 3-display setup), my desktop icons magically move away from the cursor (up or down and to the right). It only happens when my desktop has focus and the mouse is positioned on the left, top or bottom edge of the main display. Moving the mouse all the way to the right edge of my right secondary display causes the mouse icons to snap back into their correct position. Ridiculous video of the issue My setup is 3 displays on two display adapters. The main display is running at 2560x1600, connected to the machine via a USB-powered DVI-D to DisplayPort adapter and is driven by an NVIDIA NVS 3100M video card. The secondary displays are running at 1440x900 and 1200x1920 and are driven by integrated Intel HD Graphics (mobile). It seems like some kind of panning behavior, but it's obviously not working as expected. I have updated all of my drivers, but no change. It's probably worth noting that the desktop icons are set to auto-arrange.


  • Inbox not updating in Exchange 2010, all users affected

    - by TuxMeister
    I'm battling against this darn issue this morning. We have the following setup:

        - one big Hyper-V machine hosting the servers as VMs
        - VM for CAS: WEB.XXX.local
        - VM for Mailbox: EXC.XXX.local
        - servers running Server 2008 R2 with Exchange 2010 SP1
        - clients running Windows 7 Pro x64 with Outlook 2010 x64

    The problem is that nobody can see any email received today (the 16th of October), though everyone can send externally. When I reply to an email received externally, I don't get an NDR, yet the user cannot see my reply. This is what I have found and tried so far:

        - if we create a subfolder in Outlook 2010 and move any message from the inbox into it, the change is immediately reflected in OWA
        - we have sent test emails to other internal users and to external addresses; the Sent Items folder contains all those tests, synced properly to OWA as well
        - creating a new Outlook profile does not help; new emails are still missing
        - disabling Cached Exchange Mode does not help, nor does disabling "Download shared folders"
        - a brand-new Exchange mailbox, configured on a VM that has never had Outlook on it, shows the same issue
        - restarting the Exchange services on both the CAS and Mailbox servers did not help, nor did rebooting both servers
        - a mailbox discovery search on my admin account finds the emails from today, so the messages are there; they are just not reaching the user inboxes

    Any idea what this hellish thing can be? I've done everything I can think of and everything I could find out there. Let me know if you need any more details, and thanks for reading this!
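
    Since mail is accepted (no NDRs) but never reaches the inboxes, the transport queues on the Hub Transport role are a natural first look; messages accepted but not delivered to the store pile up there visibly. A sketch of checks from the Exchange Management Shell, with the server and database names taken from the setup above:

        # anything stuck on the way to the mailbox server?
        Get-Queue | Sort-Object MessageCount -Descending
        # can the store be reached over MAPI?
        Test-MapiConnectivity -Server EXC
        # is the database mounted and healthy?
        Get-MailboxDatabase -Status | Format-List Name,Mounted

    A large retry queue pointing at EXC.XXX.local would shift suspicion to the store or to connectivity between the two VMs rather than to the clients.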


  • Monitoring instantaneous network throughput at one second intervals?

    - by Shaddi
    For a testing setup I have, I need to monitor the throughput through a "router"* at regular intervals of around 5 seconds or less (sub-second intervals would be very nice, but are not required). Ideally, I would be able to generate a file containing both the number of bytes and the number of packets seen during each interval; I will eventually generate a throughput time series from this data. On a previous setup using an older version of FreeBSD, a tool called "bpfmon" gave me this information. However, I now need to do this under a modern version of Linux (namely, Ubuntu 11.04). I have looked at both iptraf and iftop, but these do not appear to provide the resolution I need, nor do they easily allow scraping the data I want. I understand iptables statistics may be able to give me what I'm after, but the examples I've seen rely on repeatedly reading and resetting the traffic counters, which could give inaccurate figures, since read-and-reset is not an atomic operation. I already capture a tcpdump trace of the traffic I'm interested in on the link I want to monitor, so I am open to approaches that simply parse that. This feels like a common problem, though, so I am hoping there is a standard "best practice" tool for accomplishing it.

    *I say "router" in quotes because it is really a machine with two bridged NICs through which all the traffic I'm interested in passes.
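
    Given the tcpdump capture already being taken, tshark can produce exactly this per-interval byte and packet series. A sketch, with the capture file name and bridge interface as placeholders:

        # one-second buckets of frames and bytes from an existing capture
        tshark -q -z io,stat,1 -r capture.pcap
        # or live on the bridge interface (newer builds also accept fractional intervals)
        tshark -i br0 -q -z io,stat,1

    The io,stat table prints frames and bytes per interval, which can be post-processed into the time series described above.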


  • Display stretches 4:3 ratios; Adds scrolling to other ratios

    - by Matt
    I have a dual-monitor setup. Normally, both displays run at 1680x1050, and they have been set up this way for about a year. I'm using Windows XP Professional x64 SP2. Today, out of nowhere, one of the monitors kicked back to a lower resolution. I was not playing with any configuration at the time; in fact, all I had done was close a window (maybe a browser). But the resolution is still partially preserved, in that the screen scrolls when you move the mouse. So it's like looking through a 1024x768 window into a 1680x1050 world. The monitor itself does not appear to be damaged, because I also have it connected to my netbook (via KVM) and higher resolutions work fine there. I tried uninstalling and reinstalling the drivers to no avail. System Restore doesn't help either. I'm unsure of the exact ATI card I'm using; Device Manager lists it as "Radeon X300/X550/X1050". There is no Catalyst Control Center software installed. I tried to install it, but there doesn't seem to be a way to install it by itself: it forces you to install another driver, which breaks both of my displays, forcing me to go into safe mode and run System Restore again. Any ideas? Thanks. EDIT: After playing around more, I discovered that the "scrolling" behavior is only present for aspect ratios that are not 4:3. For 4:3 ratios, it just stretches out to fit the wide screen. My monitor's native ratio is 16:9; what could be causing it to think it needs to scroll?


  • Optimizing Apache for large file serving

    - by D_Guy13
    I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2 64-bit, 5 GB RAM, an SSD rated around 200 MB/s read/write, and a dual-core CPU @ 2.1 GHz. A dump from mod_status:

        Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
        Server MPM: WinNT
        Apache Lounge VC11 Server Built: Nov 21 2013 20:13:01
        Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
        Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 1
        Server uptime: 3 hours 7 minutes 18 seconds
        Server load: -1.00 -1.00 -1.00
        Total accesses: 283025 - Total Traffic: 1172.2 GB
        25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
        62 requests currently being processed, 388 idle workers

    I am serving large .zip and .iso files using mod_xsendfile (file sizes range from 500 MB to 1.5 GB). The setup works and is running fine, but CPU usage is very unstable: it jumps between 10% and 90% all the time, and the server goes down when it hits 100%, at which point I have to hard-restart it. The server is putting out traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Could switching to Linux help me achieve better CPU usage?
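
    One knob worth testing for large static transfers on Windows is letting the kernel perform the file copy instead of Apache's own read/write loop, which can flatten CPU spikes; this is a hedged suggestion to benchmark, not a confirmed fix for this workload:

        # httpd.conf (sketch): hand static file I/O to the OS
        EnableSendfile On
        EnableMMAP Off

    Since mod_xsendfile responses are ultimately served through Apache's normal file handling, it is worth testing one large download with and without these directives before rolling them out.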


  • Mail being sent as root on Ubuntu 14.04

    - by Benjamin Allison
    I'm really struggling with this. I'm trying to set up this server to send mail using Gmail's SMTP. Google keeps bouncing the messages, saying that authentication is required:

        smtp.gmail.com[74.125.196.109]:25: 530-5.5.1 Authentication Required. Learn more at
        smtp.gmail.com[74.125.196.109]:25: 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257

    But it seems my server is trying to send mail as [email protected]. I'm baffled. Here's what I've done so far. Updated main.cf:

        relayhost = [smtp.gmail.com]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_CAfile = /etc/postfix/cacert.pem
        smtp_use_tls = yes

    Created /etc/postfix/sasl_passwd:

        [smtp.gmail.com]:587 [email protected]:password

    Then did the following:

        sudo chmod 400 /etc/postfix/sasl_passwd
        sudo postmap /etc/postfix/sasl_passwd
        cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem
        service postfix restart

    I can't for the life of me get a message to send, or change the default mail user from [email protected] to [email protected]. (FWIW, I'm using Google Apps; that's why it's not a .gmail address.)
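
    Two things look off here: the bounce shows delivery being attempted on port 25 rather than the configured :587, which suggests the relayhost line is not actually in effect, and the envelope sender is the root system user. A sketch of the checks plus a sender rewrite, reusing only the addresses already shown above:

        # confirm the running config really carries the :587 relayhost
        postconf relayhost
        sudo postfix reload
        # rewrite root's sender address on outbound mail via generic maps
        echo '[email protected] [email protected]' | sudo tee /etc/postfix/generic
        sudo postmap /etc/postfix/generic
        sudo postconf -e 'smtp_generic_maps = hash:/etc/postfix/generic'
        sudo postfix reload

    If postconf shows the expected relayhost but bounces still mention port 25, the mail may be leaving through a different MTA or path than this postfix instance.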


  • Google Apps bounces bulk emails

    - by znq
    I have an [email protected] email address which receives emails from clients and distributes them to various people within my company. However, since today I have been getting the following bounce error message when sending an email to this address:

        Delivery to the following recipient failed permanently: [email protected]
        Technical details of permanent failure: Message rejected by Google Groups. Please visit
        http://mail.google.com/support/bin/answer.py?hl=en&answer=188131 to review our Bulk Email Senders Guidelines.

    The Bulk Senders Guidelines describe how to send out bulk email. In my case, however, I only receive one email and distribute it to a couple of people within my company. The same problem applies to the [email protected] address, which we use internally. Does anyone know how to resolve this issue? UPDATE: I just realized that emails coming from the outside and sent to this address still work; it seems to affect only emails coming from my own domain. I found a solution and posted it below.

