Search Results

Search found 5205 results on 209 pages for 'extra'.

  • apache2 VirtualHost in Mac OS X home directory

    - by aaron
    I am running MacPorts apache2 on Mac OS X 10.5. Whenever I configure a virtual host in the default folder it works, but when I configure the virtual host in my home directory I get a "403 Forbidden" error. How do I configure a vhost in my home directory? Here is the configuration that yields "403 Forbidden" when I access "devel.mysite.com" (/opt/local/apache2/conf/extra/httpd-vhosts.conf):

        DocumentRoot "/opt/local/apache2/htdocs"
        ServerName *
        #CustomLog "" common

        <VirtualHost *:80>
            #DocumentRoot "/opt/local/apache2/htdocs/mysite"
            DocumentRoot "/Users/myuser/Sites/mysite"
            ServerName devel.mysite.com
        </VirtualHost>

    The error message in /opt/local/apache2/logs/devel.mysite.com-error_log:

        [Sat Apr 17 19:54:49 2010] [error] [client 127.0.0.1] client denied by server configuration: /Users/myuser/Sites/mysite/

    When I uncomment the line to make the DocumentRoot /opt/local/apache2/htdocs/mysite, it works:

        DocumentRoot "/opt/local/apache2/htdocs"
        ServerName *
        #CustomLog "" common

        <VirtualHost *:80>
            DocumentRoot "/opt/local/apache2/htdocs/mysite"
            #DocumentRoot "/Users/myuser/Sites"
            ServerName devel.mysite.com
        </VirtualHost>

    I get no errors or warnings when I start Apache, and the only thing logged on startup is this (in /opt/local/apache/logs/error_log):

        [Sat Apr 17 19:56:29 2010] [notice] Digest: generating secret for digest authentication ...
        [Sat Apr 17 19:56:29 2010] [notice] Digest: done
        [Sat Apr 17 19:56:29 2010] [notice] Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8m DAV/2 configured -- resuming normal operations

    A few notes:
    * The permissions of /Users/myuser/Sites/mysite are 755, owned by myuser, group is staff
    * Everything else works as expected, until I move the DocumentRoot of the vhost to the directory in my home
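
    A likely culprit (a guess, not something stated in the post): the stock MacPorts httpd.conf only grants access under /opt/local/apache2/htdocs, so anything under /Users is caught by the restrictive default <Directory /> policy, which is what "client denied by server configuration" usually means. A minimal Apache 2.2 sketch of the extra access block that would normally be needed, reusing the paths from the question:

        <VirtualHost *:80>
            DocumentRoot "/Users/myuser/Sites/mysite"
            ServerName devel.mysite.com
            <Directory "/Users/myuser/Sites/mysite">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>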

    Read the article

  • most reliable linux terminal app / general procedures for process stability

    - by intuited
    I've been using konsole (KDE 4.2) for a while now but it crashed recently. Konsole is efficiently designed to use one instance for all of the windows for your entire X session. Extra-unfortunately, because of this ingenuity the crash brought down all the humpty-dumptys and their bashes and their bashes' applications and all the begattens' begattens all the way down to Jebodiah Springfield into one big flat nonexistent omelette. The fact that this app is capable of crashing under any circumstances is pretty disappointing. Although KDE 4.2 is not expected to be entirely stable -- and yes, I know, I should update my distro -- it's still a no-sell for me, since if at all possible, this sort of thing Shouldn't Happen to something that's likely to be a foundation for an entire working environment. Maybe this is arrogant and unrealistic, but if it's possible to have something more stable, I want it. So other than running under screen -- which is fun, nifty, and thus far flawless in its reliability, but which has some issues with not understanding certain keycodes -- I'm looking for ways to improve my environment's reliability. The most obvious strategy is to cast about for a more reliable console app. A standard featureset -- which to me includes tabbed windows, Unicode support, and a decent level of keyboard shortcut configuration -- is pretty much essential. I'm currently running gnome-terminal and roxterm, both of which have acceptable featuresets (pretty much identical, actually; I think rox is actually the superset), and neither of which have provided me with extensive, objective reliability data. Not that they were expected to. Other strategies are also welcome. Were I responding to this question I would perhaps suggest backgrounding critical tasks with & and/or disowning them so they don't come down with the global pandemic. And stuff like that.
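
    For the backgrounding idea at the end, a tiny bash sketch (the task name is just a placeholder):

        # run a critical task in the background and mark it so the shell will not
        # forward SIGHUP to it if the terminal (or the whole konsole process) dies
        long_running_task &
        disown -h %1

        # or start it detached from the terminal in the first place
        nohup long_running_task > task.log 2>&1 &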

    Read the article

  • MS Access 2003 - Format a Number with Commas AND Auto-Decimal

    - by Emtucifor
    On a report I have a control which is bound to a column that can have up to 3 decimal places. I want the number to format with commas separating thousands and millions, but I also want the number of decimal places to be automatic, so that if there is no decimal portion then no decimal point at all is shown:

        1234.567 -> 1,234.567
        1234.560 -> 1,234.56
        1234.500 -> 1,234.5
        1234.000 -> 1,234

    The General format gives me the automatic decimal places but no commas. The Standard format gives the commas but is fixed at 2 decimal places. Doing my own =Format(Number, "#,##0.#") leaves the decimal point in and doesn't align properly, with extra space on the right of the number. Do I have to write my own VB function to get the format I want? It seems silly that Access (apparently) can't do this out of the box. This also seems really horrible:

        =Replace(Replace(Replace(Replace(Replace( _
            Format(Number, "#,##0.000") & "x", "0x", ""), "0x", ""), "0x", ""), ".x", ""), "x", "")
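
    If a function is acceptable, here is a small VBA sketch of that idea (the function name is made up, and it assumes Format() leaves a bare trailing "." for whole numbers when the mask contains a decimal point):

        ' Hypothetical helper: format with up to 3 decimals, then trim the bare
        ' trailing decimal point that remains when there is no fractional part.
        Public Function FormatAuto(v As Variant) As String
            Dim s As String
            If IsNull(v) Then Exit Function
            s = Format(v, "#,##0.###")
            If Right$(s, 1) = "." Then s = Left$(s, Len(s) - 1)
            FormatAuto = s
        End Function

    The control source would then be =FormatAuto([Number]) with the control right-aligned; whether that also cures the padding/alignment issue is untested here.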

    Read the article

  • IP address spoofing using Source Routing

    - by iamrohitbanga
    With IP options we can specify the route we want an IP packet to take while connecting to a server. If we know that a particular server provides some extra functionality based on the source IP address, can we not exploit this by spoofing an IP packet so that the source IP address is the privileged IP address and one of the hosts on the source route is our own? Say the privileged IP address is x1, the server IP address is x2 and my own IP address is x3. I send a packet from x1 to x2 which is supposed to pass through x3. x1 does not actually send the packet; it is just that x2 thinks the packet came from x1 via x3. Now, if in response x2 uses the same routing policy (as a matter of courtesy to x1), then all packets would be received by x3. Will the destination typically use the same IP address sequence as specified in the routing header, so that packets coming from the server pass through my IP where I can get the required information? Can we not spoof a TCP connection in the above case? Is this attack used in practice?

    Read the article

  • Browsing Audiobooks on an iPod Nano

    - by Electrons_Ahoy
    The situation: I've got a stack of audiobooks in MP3 format in my iTunes library, and both an iPhone and an iPod Nano. After this question, I changed the Media Kind for the audiobook MP3s from Music to Audiobook. This has been, overall, spectacular, as now I can resume where I was, they show up under Audiobooks, etc. On the iPhone it's also super convenient, since the interface shows what used to be an album with multiple songs as a single book with multiple chapters, and going into "Audiobooks" presents me with a list of books, not tracks. The Nano, on the other hand, is a little strange. After changing the media type and re-syncing the iPod, the files in question are now listed under Audiobooks rather than Music, and the extra audiobook features are present (resume playback and so on), but the Audiobooks menu just lists all the MP3 tracks on the iPod in alphabetical order, ignoring whichever book/album they belong to - and doesn't seem to let me browse them any other way. This is, clearly, a little sub-optimal. Did I screw something up? How do I get the Nano to treat the files the way the iPhone and iTunes do - as books with chapters? Is there a step I missed somewhere? Do I need to reformat the iPod? Is this even possible? (Footnote: shameless bump, since this just scored me a tumbleweed.)

    Read the article

  • Printer sharing problem (win7 / WinXP): Canon pixma USB printer

    - by Rabarberski
    For a friend, I am trying to share a USB Canon Pixma iP3000 printer between two computers in his home network, but I can't get it to work due to a Canon driver problem. The printer is connected to the Windows 7 (64-bit) computer, and we would like to be able to print from a Windows XP computer. 'Normally' it should be no problem to use Windows printer sharing; however, because one machine is 32-bit and the other is 64-bit, installing an extra driver is required. The driver provided by Canon (here) is described as a 'Canon Inkjet Printer Driver Add-On Module'. The problem is that the .inf file contained in the .exe file isn't accepted as a driver when prompted by the Printer Sharing Wizard, I suspect because it is an add-on driver (whatever that may be). I've connected and installed the printer locally on the XP machine first (which works), so that the XP machine would already know the driver when using it as a network printer, but that doesn't work; the wizard still wants a driver file. Anybody have suggestions on how to get this working? Maybe there is some sort of generic driver (which would be OK even with limited functionality)?

    Read the article

  • zfs setup question

    - by Staale
    Currently I have a Linux storage box and server with 4x750GB hard drives in RAID-5 with ext3. I have ordered 3x1.5TB disks to upgrade this. Here is my planned upgrade.

    Backup:
    * Format the 1.5TB disks
    * Copy all data from the RAID-5 disks to the 1.5TB disks
    * Destroy the RAID-5 array

    New setup:
    * Create a VirtualBox system and install Nexenta (OpenSolaris + Ubuntu) on it
    * Create a ZFS pool with raidz1 using the 4 750GB disks
    * Copy from the 1.5TB disks to the VirtualBox ZFS pool
    * Format the 1.5TB disks
    * Replace 3 of the 750GB disks with 1.5TB disks
    * Reuse the 750GB disks elsewhere

    The reason I wish to use one 750GB disk is that I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750GB of storage. Would the ZFS performance be good running through VirtualBox, or will the performance overhead be too large? Will I get 1.5TB+1.5TB+750GB of storage on the raidz, or just 750GBx3 until all disks are 1.5TB?
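
    For reference, a rough sketch of the zpool commands involved (device names are invented; note that raidz capacity is bounded by the smallest member, so the pool stays at roughly 3x750GB usable until the last 750GB disk is swapped out and the pool is expanded):

        # create the pool from the four 750GB disks (example device names)
        zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

        # later: swap 750GB members for 1.5TB disks one at a time,
        # waiting for each resilver to finish before the next replace
        zpool replace tank c1t1d0 c2t0d0
        zpool status tank

        # on pool versions that support it, let the pool grow automatically
        # once every member has been upgraded
        zpool set autoexpand=on tank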

    Read the article

  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
    In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 R2 running Exchange 2010. I had an extra mailbox database that appeared with some system mailboxes in it. I used the EMS to move those mailboxes over and then deleted the store out of the EMC. Since then, every so often I get an error in Event Viewer:

        Source: MSExchangeRepl
        ID: 4098
        Error: The Microsoft Exchange Replication service couldn't find a valid configuration for database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'. Error: (nothing after this)

    I can confirm that the above GUID is the mailbox store that I deleted. No other Exchange errors occur. How can I tell Exchange Replication to ignore this store? Setup: one Exchange Server 2003 transitioned over to 2010, no other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days, so stopping replication would be a bad thing. Thanks again in advance. Jason

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL starting here and I am confused by how the Reed-Solomon encoding for ECC limits the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following:

        A maximum of 255 sub-carriers can be used to modulate data in the downstream direction. Sub-carrier 256, the downstream Nyquist frequency, and sub-carrier 64, the downstream pilot frequency, are not available for user data, thus limiting the total number of available downstream sub-carriers to 254. Each of these 254 sub-carriers can support the modulation of 0 to 15 bits. Since the ADSL DMT data frame rate is 4000 frames per second, the maximum theoretical downstream data rate of an ADSL system is 15.24Mbps. Due to limitations in system architecture, specifically the maximum allowable Reed-Solomon codeword size (255 bytes), the maximum achievable downstream data rate is 8.16Mbps.

    How is this nearly halving the throughput? Is all that extra bandwidth the overhead of the RS encoding? 15240000 bps (15.24Mbps) - 8160000 bps (8.16Mbps) = 7080000 bps (7.08Mbps). Where has that 7Mbps of throughput gone? EDIT: I tried to read the wiki page on Reed-Solomon but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255-byte codewords, because that may be the max codeword size whilst still maintaining accuracy during transmission; but I don't understand why that means less data is sent?
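
    For what it's worth, both quoted figures fall straight out of the frame arithmetic, assuming one Reed-Solomon codeword is carried per DMT frame (my reading, not stated in the post):

        254 sub-carriers x 15 bits x 4000 frames/s = 15,240,000 bit/s (the 15.24 Mbps ceiling)
        255 bytes x 8 bits x 4000 frames/s         =  8,160,000 bit/s (the 8.16 Mbps figure)

    Read that way, the missing ~7 Mbps is not parity overhead as such - it is modulation capacity the framing simply cannot fill once each frame is capped at a single 255-byte codeword (and the RS parity bytes then come out of that 255-byte budget as well).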

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph: As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is actually memory that has been freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks. Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
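
    As a quick experiment (my suggestion, not from the post): the kernel can be asked to drop clean page cache and slab objects, which is a cheap way to see how much of Inactive is actually reclaimable. Pages backing the files in the tmpfs /tmp are not reclaimable this way and only go away when those files are deleted.

        # flush dirty pages first, then drop clean pagecache plus dentries/inodes (needs root)
        sync
        echo 3 > /proc/sys/vm/drop_caches

        # watch how the counters react
        grep -E 'Active|Inactive|Cached|Dirty' /proc/meminfo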

    Read the article

  • MAMP - Host name changes to first vhost SSL entry for project with two localhosts

    - by user1322092
    I have two projects that are copies of each other on my Mac with MAMP. They both have SSL pages. However, whenever I hit a secured SSL page of project 2, the base_url or host changes to project1 instead of remaining project2. I know this is an issue with the vhosts, because if I switch the order of the entries, the reverse happens. Here are my config files.

    /Applications/MAMP/conf/extra/httpd-ssl.conf:

        <VirtualHost _default_:443>
            DocumentRoot "/Applications/MAMP/htdocs/proj1"
            ServerName proj1.localhost:443
            ErrorLog "/Applications/MAMP/Library/logs/error_log"
            TransferLog "/Applications/MAMP/Library/logs/access_log"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>

        <VirtualHost _default_:443>
            DocumentRoot "/Applications/MAMP/htdocs/proj2"
            ServerName proj2.localhost:443
            ErrorLog "/Applications/MAMP/Library/logs/error_log"
            TransferLog "/Applications/MAMP/Library/logs/access_log"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>

    cat /etc/hosts:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1       proj1.localhost
        127.0.0.1       proj2.localhost
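
    Both vhosts above are _default_:443, so Apache answers every HTTPS request with the first block it finds. A sketch of the usual Apache 2.2-style name-based fix (untested against this particular MAMP build; both names keep sharing the one certificate, as in the original config):

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName proj1.localhost
            DocumentRoot "/Applications/MAMP/htdocs/proj1"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>

        <VirtualHost *:443>
            ServerName proj2.localhost
            DocumentRoot "/Applications/MAMP/htdocs/proj2"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>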

    Read the article

  • How to stop SophosAV from scanning directories under source control

    - by user26453
    From googling, it seems it's well known that Sophos AV, as well as other AV programs, has issues with how it interacts with source control utilities like TortoiseHG or TortoiseSVN, and can inhibit them. One solution is to exclude directories under source control from on-access scanning, as detailed here on Sophos's support site. There is a corollary article that mentions some issues related to this, namely the need to place multiple entries for exclusions based on the possibility of the location being accessed through the short vs. long name (e.g., Progra~1 vs. "Program Files"). One other twist is that I am using a junction to relocate my user directory, C:\Users\Username, to a second hard drive, E:. Since I am not sure how this interacts, I have included the source control directory as it is nested in both locations. As a result, I have included the two exclusions for on-access scanning (and, to be on the safe side, on-demand exclusions as well, although these should only come into play when I select a parent directory of the exclusion to be scanned on-demand, but still). You'll notice I have no need to add extra exclusions for those locations based on short vs. long name distinctions. The two exclusions I have then, for both on-access and on-demand scanning, are:

        C:\Users\Username\source-control-directory
        E:\source-control-directory

    However, this does not seem to work, as TortoiseHG still lags terribly in response to any request because the AV software starts scanning when the directory is accessed via TortoiseHG. I can verify without a doubt that Sophos is causing the problems: if I completely disable on-access scanning, TortoiseHG responds very fast to all operations. I cannot leave this disabled, obviously, but since the exclusions don't seem to be working, what next?

    Read the article

  • Check for updates of a specific Debian package list

    - by Erwan Queffélec
    The setup: I run a Debian Squeeze host that I use to build a multi-language project (Python, Java, PHP...) and generate custom packages (Debian and RPM) automatically (through Jenkins).

    The problem: the target distributions of those Debian packages are Etch, Lenny and Squeeze, but our project has some native dependencies that are available only through the DebianRelease + 1 repository (i.e. Lenny + 1 == Squeeze, Squeeze + 1 == Wheezy). We, for example, need the jetty packages from Squeeze in Lenny, and the cyrus-imapd-2.4 packages from Wheezy in Squeeze. Some additional info:

    * Some packages we can simply 'backport by hand' by mirroring the DebianRelease + 1 packages to our own repositories. For instance, the jetty package from Squeeze will run fine on Lenny because it doesn't need an s**tload of additional dependencies.
    * However, we do need to rebuild some packages. For instance, cyrus-imapd-2.4 from Wheezy has a lot of unsatisfied dependencies on Squeeze, so we need to rebuild it in Squeeze and then upload it to our repo.

    The question: I need a simple way of knowing if there are any updates to those extra packages (both "normal" and "security" updates). I could write a simple script that runs weekly, gets some parameters from a file, and generates an update report. Let's say the file looks like this:

        jetty:squeeze
        cyrus-imapd-2.4:wheezy

    The script should run as a normal user so as not to mess up the system apt configuration, and issue the appropriate commands to generate that report. Does Debian have some built-in apt-* commands/options dedicated to that kind of problem that I could use to write this script? If not, can someone think of another clean solution to achieve what I need?
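
    The post does not name a tool, but one hedged sketch of such a weekly report uses rmadison from the devscripts package, which queries the Debian archive over the network and needs neither root nor any change to the system apt configuration (the packages.txt file name is made up):

        #!/bin/sh
        # Read "package:suite" pairs and print the version the Debian archive
        # currently carries for each, to compare against what we have rebuilt.
        while IFS=: read -r pkg suite; do
            [ -n "$pkg" ] || continue
            echo "== $pkg ($suite) =="
            rmadison --suite="$suite" "$pkg"
        done < packages.txt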

    Read the article

  • Redhat 5.5: Multi-thread process only uses 1 CPU of the available 8

    - by Tonny
    Weird situation: Red Hat Enterprise 5.5 (stock install, no updates, x64) on an HP Z800 workstation (dual Xeon 2.2 GHz, 8 cores, 16 if you count Hyper-Threading; RH sees 16 cores). We have an application that can utilize 1, 2 or 4 threads for heavy calculations. Somehow all these threads run on the same core at 100% load (the other 15 cores are nearly idle), so there is absolutely no benefit from the extra threads. In fact there is a slight slowdown as the threads get in each other's way on the single core. How do I get them to run on separate cores (if possible)? The application is 64-bit. I can't change anything about the software except the threads setting. Is there some obscure Linux setting I can try to change? (I'm a Tru64 and AIX guy. I use Linux, but have no in-depth knowledge of process scheduling on Linux.)
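
    Not mentioned in the post, but the standard first check and workaround on Linux is CPU affinity via taskset from util-linux (the PID and program name below are placeholders):

        # show the affinity mask the running process is actually allowed to use
        taskset -p <pid>

        # launch the application explicitly allowed on all 16 logical CPUs
        taskset -c 0-15 ./heavy_app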

    Read the article

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making some great headway in offering our services over https with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers; one is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So I'll need to purchase 2 certs, for both www and services, import them into IIS6 on their respective machines, then export them with the private key (making sure to include all certificates in the certification path if possible... that had me stumped for a while), and then finally import them into ISA's local computer Personal store. The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com... because requests need to be forwarded to different web servers. Each of these firewall rules uses the same httplistener. I have just found out that you can only use 1 certificate per httplistener. To make matters worse, you can only have a single httplistener per IP/port. Is this correct? I can only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc... Can someone please confirm that I'm doing this correctly / incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.

    Read the article

  • Is it necessary to burn-in RAM for server-class systems?

    - by ewwhite
    When using server-class systems with ECC RAM, is it necessary or even useful to burn in the memory DIMMs prior to deployment? I've encountered an environment where all server RAM is put through a lengthy multi-day burn-in/stress-testing process. This has delayed system deployments on occasion and adds an extra step to the hardware lead time. The server hardware is primarily Supermicro, so the RAM is sourced from a variety of vendors, not directly from the manufacturer like a Dell PowerEdge or HP ProLiant. Is this process useful? In my past experience, I simply used vendor RAM out of the box. Isn't that what the POST memory tests are for? I've encountered and responded to ECC errors long before a DIMM actually failed; the ECC thresholds were usually the trigger for warranty replacement. Do you burn in your RAM? If so, what method do you use to perform the tests? Has the burn-in process resulted in any additional platform stability? Has it identified any pre-deployment problems?
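
    For concreteness, one common software-only burn-in (my example, not the environment's actual process) is memtester run from the installed OS, alongside the bootable memtest86+; the size and pass count here are arbitrary:

        # lock and exercise 8 GB of RAM for 5 passes; any reported failure, or any
        # ECC event logged during the run, is grounds for an RMA before deployment
        memtester 8G 5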

    Read the article

  • How to avoid intrusion detection/anti spoofing issue on a sonicwall TZ series FW

    - by Ian
    We have a SonicWALL TZ series FW with two internet service providers connected. One of the providers has a wireless service which works a bit like an ethernet switch, in that we have an IP with a /24 subnet and the gateway is .1. All other clients on the same subnet (say 195.222.99.0) have the same .1 gateway - this is important, read on. Some of our clients are also on the same subnet.

    Our config:

        X0 : LAN
        X1 : 89.90.91.92
        X2 : 195.222.99.252/24 (GW 195.222.99.1)

    X1 and X2 are not connected, other than both being connected to the public Internet.

    Client config:

        X1 : 195.222.99.123/24 (GW 195.222.99.1)

    What fails, what works:

        Traffic 195.222.99.123 (client) <- 89.90.91.92 (X1)      : Spoof alert
        Traffic 195.222.99.123 (client) <- 195.222.99.252 (X1)   : OK - no spoof alert

    I have several clients with IPs in the 195.222.99.0 range and all provoke identical alerts. This is the alert I see on the FW:

        Alert  Intrusion Prevention  IP spoof dropped  195.222.99.252, 21475, X1  89.90.91.92, 80, X1  MAC address: 00:12:ef:41:75:88

    Anti-spoofing is switched off on my FW (network - mac-ip-anti-spoofing - config for each interface) for all ports. I can provoke the alerts by telnetting to a port on X1 from the clients. You can't argue with the logic - this is suspicious traffic: X1 is receiving traffic with a source IP which corresponds to X2's subnet. Does anyone know how I can tell the FW that packets with a source subnet of 195.222.99.0 can legitimately appear on X1? I know what's going wrong - I've seen the same thing before, but with higher-end FWs you can avoid this with a few extra rules. I can't see how to do it here. And before you ask why we're using this service provider: they give us 3ms (yep, 3ms, that's not an error) delay between routers.

    Read the article

  • Openconnect for Cisco VPN doesn't recognize private key file - asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag

    - by Alexander Skwar
    I'm trying to make my Synology DS212 NAS box also act as a VPN gateway to my company's VPN. Sadly, they only use Cisco ASA, and to complicate things even further, we have to use personal certificates (which is of course more secure, but more complicated to get going…). So I compiled OpenConnect v4.06 from http://www.infradead.org/openconnect/. As a very basic test, I tried to build a connection by manually invoking openconnect, passing along the key and cert files, like so:

        /lib/ld-linux.so.3 --library-path /opt/lib \
          /opt/openconnect/sbin/openconnect \
          --certificate=$VPN_CFG/alexander.crt \
          --sslkey=$VPN_CFG/alexander.key \
          --cafile=$VPN_CFG/Company_VPN_CA.crt \
          --user=alexander --verbose <ip>:443

    It fails :(

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Using client certificate '/[email protected]/OU=Company VPN'
        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:
        Loading private key failed (see above errors)
        Loading certificate failed. Aborting.
        Failed to open HTTPS connection to <ip>
        Failed to obtain WebVPN cookie

    When I run the same command with the same cert/key files on an Ubuntu 12.04 box, it works:

        openconnect \
          --certificate=$VPN_CFG/alexander.crt \
          --sslkey=$VPN_CFG/alexander.key \
          --cafile=$VPN_CFG/Company_VPN_CA.crt \
          --user=alexander --verbose <ip>:443

        Attempting to connect to <ip>:443
        Using certificate file $VPN_CFG/alexander.crt
        Extra cert from cafile: '/CN=Company AG VPN CA/O=Company AG/L=Zurich/ST=ZH/C=CH'
        SSL negotiation with <ip>
        Server certificate verify failed: self signed certificate
        Certificate from VPN server "<ip>" failed verification.
        Reason: self signed certificate
        Enter 'yes' to accept, 'no' to abort; anything else to view: yes
        Connected to HTTPS on <ip>
        GET https://<ip>/
        […]

    Well… the error on the NAS is this:

        5919:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1315:

    Any ideas what's causing this? On the Syno I use OpenConnect 4.06; on Ubuntu I also compiled and installed OpenConnect 4.06 to a custom location. Thanks, Alexander
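
    A hedged guess, since the post never shows the key format: the NAS-side OpenSSL is choking on the key's encoding (DER vs. PEM, or PKCS#8 vs. traditional RSA). A few standard openssl commands to inspect and, if needed, rewrite the key - the file names follow the question, the -pem suffix is invented:

        # does the local OpenSSL parse the key at all?
        openssl rsa -in alexander.key -text -noout

        # if the key is PKCS#8, rewrite it as a traditional RSA PEM key
        openssl pkcs8 -in alexander.key -out alexander-pem.key

        # if the key turns out to be DER-encoded, convert it to PEM
        openssl rsa -inform DER -in alexander.key -out alexander-pem.key

    Pointing --sslkey at the rewritten file would then show whether the encoding was the problem.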

    Read the article

  • How to list rpm packages/subpackages sorted by total size

    - by smci
    Looking for an easy way to post-process rpm -q output so it reports the total size of all subpackages matching a regexp, e.g. see the aspell* example below (short of scripting it with Python/Perl/awk, which is the next step). Motivation: I'm trying to remove a few GB of unnecessary packages from a CentOS install, so I'm trying to track down things that are a) large, b) unnecessary and c) not dependencies of anything useful like GNOME. Ultimately I want to pipe the output through sort -n to see what the space hogs are, before doing rpm -e. My reporting command looks like [1]:

        cat unwanted | xargs rpm -q --qf '%9.{size} %{name}\n' > unwanted.size

    and here's just one example where I'd like to see rpm's total for all aspell* subpackages:

        root# rpm -q --qf '%9.{size} %{name}\n' `rpm -qa | grep aspell`
         1040974 aspell
        16417158 aspell-es
         4862676 aspell-sv
         4334067 aspell-en
        23329116 aspell-fr
        13075210 aspell-de
        39342410 aspell-it
         8655094 aspell-ca
        62267635 aspell-cs
        16714477 aspell-da
        17579484 aspell-el
        10625591 aspell-no
        60719347 aspell-pl
        12907088 aspell-pt
         8007946 aspell-nl
         9425163 aspell-cy

    Three extra nice-to-have things:
    * list the dependencies/depending packages of each group (so I can figure out the uninstall order)
    * group them by package group - that would be totally neat
    * human-readable size units like 'M'/'G' (like ls -h does); this can be done with a regexp and rounding on the size field

    Footnote: I'm surprised up2date and yum don't add this sort of intelligence. Ideally you would want to see a tree of group-package-subpackage, with rolled-up sizes. Footnote 2: I see yum erase aspell* does actually produce this summary - but not in a query command.

    [1] where unwanted.txt is a textfile of unnecessary packages obtained by diffing the output of:

        yum list installed | sed -e 's/\..*//g' > installed.txt
        diff --suppress-common-lines centos4_minimal.txt installed.txt | grep '>'

    and centos4_minimal.txt came from the Google doc given by that helpful blogger.
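
    For what it's worth, a small sketch of the summing step (my own one-liners, not from the post), using rpm's package-name globbing plus awk:

        # total size of everything matching a package glob, in MB
        rpm -qa --qf '%{size}\n' 'aspell*' | awk '{s += $1} END {printf "%.1f MB\n", s/1024/1024}'

        # biggest installed packages first, for skimming
        rpm -qa --qf '%{size} %{name}\n' | sort -rn | head -20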

    Read the article

  • OCZ Vertex 3 SSD boot failure

    - by Col Zero
    OK, so I just purchased and received a 120GB OCZ Vertex 3. I had Windows 7 as an .ISO file on my computer. I formatted a USB stick and configured it to be read as a CD so that I could install Windows onto my SSD from it. I started my comp with the boot priority set to the USB and started installing Windows 7 onto the SSD. And out of nowhere (I wasn't watching) my computer restarted and brought me back to the beginning of the Windows 7 setup. So I turned my computer off and booted it up from the SSD to see if Windows had installed onto it. The first 2 attempts I had a disk boot failure. So I plugged my hard drive back in, started my computer, turned it off, plugged the SSD back in (literally) and it booted up fine. I finalized Windows, got internet set up, and Windows had updates that required a restart. So I restarted and had another disk boot failure. Now I have a disk boot failure every time I try to start my computer up from the SSD. Extra info: my SSD has never been detected in my BIOS unless my hard drive was unplugged (even then my BIOS didn't always detect it). My SSD wasn't detected in the BIOS the first and only time it successfully booted up. My SSD literally boots up successfully at random (only once, unfortunately) and is detected in my BIOS at random. I've tried switching cords etc. and nothing has worked. I just want to get this damn thing running so I can see what it's like. I finally found a way to get the OS on this sucker and now it won't even boot up. Any help appreciated.

    Read the article

  • Mounting 2.5" SSD in Antec Atlas 550's 3.5" bay

    - by cecilkorik
    Just got some new SSDs to add to my servers; unfortunately, there doesn't appear to be anywhere to actually mount them. The cases are Antec Atlas 550 mid-tower server cases. The problem is that the 3.5" bays are intended for 3.5" hard drives (obviously), not SSDs, so drives are mounted from the bottom with rubber grommets to reduce vibration. There are NO side holes in any of the 3.5" bays; all mounting has to be done from the bottom of the drive, through the thick rubber grommets, which requires special extra-long screws with large heads. Those screws will not work with the SSD - the thread is too coarse, as 2.5" drives apparently use M3 screws with a finer pitch. The case's 3.5" bays have holes aligned for 2.5" drives, and by moving the grommets into the appropriate holes they line up with the SSD. But that's as far as I can get. I don't have, and cannot find, any screws that use the finer M3 pitch that are long enough and have the wide head. I can't remove the grommets because the screw holes are much too wide without them - the entire head of the screw fits through. And again, there are no side holes, so I can't use the mounting bracket that came with the drive nor any other 2.5"-to-3.5" bracket I've found. Here is a pic of what I am dealing with (note that the SSD is just resting on the case; it's not currently screwed in, if that's not clear). Any ideas for safely mounting these little guys without resorting to duct tape or superglue would be very appreciated. If anyone knows where I could buy the appropriate type of M3-threaded screw, or has any other ideas, please help me out here.

    Read the article

  • Migrate installation from one HDD to another?

    - by dougoftheabaci
    I have a server with a 64 GB SSD inside. It runs just fine, but occasionally there's a hiccup that causes it to nearly fill up. When that happens my server starts to lock up and generally misbehave. I'm looking to buy a bigger SSD (either 128 GB or 256 GB), but I'm a bit unsure of how best to make the transition. For a start, I don't have an external monitor; if I need one I'll have to borrow it from work. Most of the time I just SSH into the server from my iMac. The only solution I can think of would be to buy two FW800 2.5" cases, boot from the 64 GB SSD and clone it to the 128 GB SSD. Seems a bit excessive, but it might be my best option. I do have more than one SATA port on my server, but they're all currently being used for storage drives. They don't mount by default, so I could unplug them, just have the two SSDs connected, and do the whole thing via SSH. This is another option I'm considering. My main concern with either is how best to make sure everything goes across. I want a carbon copy of the first drive on the second. This is especially important because I have a ZFS volume (my storage) and I'm a bit unfamiliar with how to move everything across. I could just start fresh and reinstall everything on the new SSD, but that seems like extra trouble I don't need. So any advice on how best to achieve my goals would be appreciated. Thanks! The server is running Ubuntu Server 12.04. The iMac is on 10.8.1.
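
    If the second route (both SSDs on SATA at once) is taken, a minimal block-level clone sketch, assuming the old disk shows up as /dev/sda and the new one as /dev/sdb - check with lsblk first, since dd to the wrong target is destructive:

        # raw copy of the whole old SSD onto the new one; best run from a live or
        # recovery environment so the source filesystem is not changing underneath
        sudo dd if=/dev/sda of=/dev/sdb bs=4M conv=noerror,sync

        # the extra space on the new SSD is then unpartitioned; grow the last
        # partition and filesystem (e.g. parted + resize2fs) or add a new partition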

    Read the article

  • Inter-vlan routing issues

    - by DKNUCKLES
    I've been brought in to help administer a network and I've run into an issue - I'm not sure why this one is beyond me, but I figure an extra set of eyes on the problem may help resolve it. I have an HP MSM720 controller and at the moment I'm trying to set up a basic hotspot with access points. For the time being I'm just looking to have people authenticate with a PSK and access the internet and other resources (namely printers) on other VLANs. The user authenticates and the DHCP server on the controller gives them a 192.168.1.0/24 address. They are able to successfully browse the internet and ping machines on other networks, however they are unable to print to network printers that sit on the same LANs as the very computers that wireless clients can ping. The (extremely simplified) topology is as follows: computers on the wireless 192.168.1.0/24 network are able to ping computers on the 192.168.0.0 network, however they cannot ping or print to the printers on that same network. I'm baffled and I have no idea why this is the case. Can anyone shed some light on this for me? Can someone spot the error in my configuration? EDIT: It should be noted that, for whatever reason, other computers on the 10.0.100.0/24 network cannot even ping the gateway of the wireless access network (192.168.1.1) - I'm not sure if this is relevant. These are the VLANs listed on the controller.

    Read the article
