Search Results

Search found 30252 results on 1211 pages for 'network programming'.


  • SFTP, SCP, secure WebDAV: which is the most suitable?

    - by Xavier Maillard
    Hi, I currently host a WebDAV share, available via HTTPS, in order to store files I need wherever I am. The thing is that I do not need all the HTTP machinery - i.e. my nginx HTTP server is only there for this WebDAV folder - and I am not sure I made the best choice. My requirements on the client side are:

      - secured transfers
      - mountable as a network drive at work, with near-realtime sync
      - usable from any OS I might use (including my Android phone)

    At first I chose WebDAV since it passes through my work proxy (which refuses anything that is not HTTP/S on port 80 or 443). Today I am not satisfied with the setup: even if nginx's memory footprint is pretty small, its WebDAV support is not really "clean" or complete. What would you recommend between SFTP, SCP and the current WebDAV solution? I think SFTP is the closest fit, but I still have to find out how to pass it through my proxy ;) SCP seems quite limited from what I have read (file transfers only, if I read right). Cheers
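
    One common way to get SSH-based protocols (and therefore SFTP) through an HTTP/S-only proxy is to tunnel the connection with an HTTP CONNECT helper such as corkscrew. A minimal sketch of an ~/.ssh/config entry; the host alias "homebox", the server name myserver.example.org and the proxy address proxy.example.com:8080 are all placeholders here:

        Host homebox
            HostName myserver.example.org
            # Many corporate proxies only allow CONNECT to port 443, so the usual
            # trick is to run sshd (or an sslh front end) on 443 on the server side
            Port 443
            # corkscrew tunnels the TCP session through the HTTP proxy via CONNECT
            ProxyCommand corkscrew proxy.example.com 8080 %h %p

    With that in place, "sftp homebox" (or an SSHFS mount of it) goes out looking like any other HTTPS connection, provided the proxy allows CONNECT to the target port.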


  • which virtualization technology is right for me?

    - by Chris
    I need a little help getting this sorted out. I want to set up a Linux virtualization server that I can use to run both server and desktop systems. I want a Linux host that is minimalist in nature, as all the main OS will be doing is acting as a hypervisor. The system I'm trying to set up will be running a file server, Windows 7, Ubuntu 10.04, Windows XP and a firewall/gateway security system. All the client OSes will access and store files on the file server, and all network traffic will be routed through the gateway guest OS. The file server will need direct disk access, while the other guests can run on disk images. All of this will be running on the same computer, so I won't be remoting in to access the guest OSes. Also, if possible, I would like to be able to use my triple-head setup in the guest OSes. I've looked at Xen, KVM and VirtualBox but I don't know which is best for me. I'm really debating between KVM and VirtualBox, as KVM seems to support direct hardware access.


  • DNS resolve .com domain on local domain

    - by Joost Verdaasdonk
    I'm building a local 2008 R2 domain as a test case, to be able to write a roadmap for the real new domain that needs to be created soon. What I would like to know is whether I'm able to make a record in DNS that points the domain names www.example.com and example.com to one of the servers in my network. I tried creating an A record for it, but that doesn't work. To be honest I'm not even sure this is possible - so can I do this? That way I would be able to fully test all our services (and web apps) offline before I build the real domain and switch the DNS records at the provider. Some advice, and a pointer on where to start, is appreciated.

    The solution (thanks Brent):
      - Create a new forward lookup zone for example.com.
      - Create an empty A record pointing to the IP of the web server you are targeting.
      - If www is needed, create an A record with the name "www" and the IP of your web server.
      - For subdomains, repeat the process with names such as "sub" or "www.sub" (and the IP of your web server).
      - Be aware of the DNS cache while you are in this process. Things can take time, or do the following: right-click the server and choose "Clear Cache", and in CMD run "ipconfig /flushdns" (to flush the client cache).
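
    On a Windows 2008 R2 DNS server the same steps can also be scripted from an elevated prompt with dnscmd. A rough sketch, assuming the target web server's internal address is 192.168.1.50 (hypothetical) and an AD-integrated zone is wanted:

        rem Create the zone and add apex + www records ("@" refers to the zone apex)
        dnscmd /ZoneAdd example.com /DsPrimary
        dnscmd /RecordAdd example.com @   A 192.168.1.50
        dnscmd /RecordAdd example.com www A 192.168.1.50

        rem Flush the local resolver cache so the new records are picked up
        ipconfig /flushdns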


  • IP to IP forwarding with iptables [centos]

    - by FunkyChicken
    I have two servers: server 1 with IP 1.1.1.1 and server 2 with IP 2.2.2.2. My domain example.com points to 1.1.1.1 at the moment, but very soon I'm going to switch it to 2.2.2.2. I have already set a low TTL for example.com, but some people will still hit the old IP after I change the domain's address. Both machines run CentOS 5.8 with iptables and nginx as the web server. I want to forward all traffic that still hits server 1.1.1.1 to 2.2.2.2 so there won't be any downtime. I found this tutorial: http://www.debuntu.org/how-to-redirecting-network-traffic-a-new-ip-using-iptables but I cannot seem to get it working. I have enabled IP forwarding:

        echo "1" > /proc/sys/net/ipv4/ip_forward

    After that I ran these two commands:

        /sbin/iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        /sbin/iptables -t nat -A POSTROUTING -j MASQUERADE

    But when I load http://1.1.1.1 in my browser, I still get the pages hosted on 1.1.1.1 and not the content from 2.2.2.2. What am I doing wrong?
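
    One detail that stands out in the quoted rules is that the PREROUTING rule matches on the source address (-s 1.1.1.1) rather than on the destination, so incoming visitor traffic (whose destination is 1.1.1.1) never matches it. A sketch of the same redirect matching on the destination instead, with the reply path handled explicitly (IPs as in the question):

        /sbin/iptables -t nat -A PREROUTING  -d 1.1.1.1 -p tcp --dport 80 -j DNAT --to-destination 2.2.2.2:80
        # Rewrite the source so replies from 2.2.2.2 come back through this box
        /sbin/iptables -t nat -A POSTROUTING -d 2.2.2.2 -p tcp --dport 80 -j SNAT --to-source 1.1.1.1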


  • How to define nodes from a Hiera file in Puppet?

    - by Pigueiras
    I am using Puppet and the Puppet network device management module, and I am trying to build my custom type. In the built-in type for router configuration, you can specify a list of nodes and then the configuration inside each node:

        node "c2950.domain.com" {
          Interface {
            duplex => auto,
            speed  => auto
          }
          interface { "FastEthernet 0/1":
            description => "--> to end-user workstation",
            mode        => access,
            native_vlan => 1000
          }
          # [...] More configuration
        }

    What I am trying to do is to move the manifest declaration of the nodes and the configuration of my custom type to a Hiera file like this one:

        nodes:
          - node1
          - node2
        config_device:
          node1:
            custom_parameter: "whatever1"
          node2:
            custom_parameter: "whatever2"

    And then, in the manifest, iterate over the Hiera data, creating the nodes with the configuration of each node, with something like this (I am taking as reference this question on Server Fault):

        class my_class {
          $nodes = hiera_array('nodes')
          define hash_extract() {
            $conf_hash = hiera_hash("config_device")
            $custom_paramter = $conf_hash[$name]  ## TRICK lies in $name variable
            node $name {
              my_custom_device { $name:
                custom_parameter => $device_conf['custom_parameter']
              }
            }
          }
          hash_extract { $pdu_names: }
        }

    But this solution has two problems: I cannot define a node inside a define, and I cannot parameterize a node name. So, is there any way to declare nodes from a Hiera file with their configuration inside?
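
    For the "iterate over a Hiera hash" part, one pattern that avoids declaring nodes inside a define is to keep a single node (or site-wide) scope and let create_resources expand the hash into resource declarations. A sketch, assuming my_custom_device is the custom type from the question and config_device is the hash from the YAML above; where exactly it lives (site.pp or a node block) is an assumption:

        # site.pp (or the relevant node block)
        $devices = hiera_hash('config_device', {})
        # Declares one my_custom_device resource per key of the hash, passing each
        # sub-hash (e.g. custom_parameter => "whatever1") as that resource's parameters
        create_resources('my_custom_device', $devices)

    This only covers generating the resources from Hiera; whether the node definitions themselves can come from Hiera is a separate question.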


  • Macvlan based interface pings from host but not from namespace

    - by jtlebi
    My setup:

        Private network vboxnet1 10.0.7.0/24
        1 Host: Ubuntu desktop
        1 VM:   Ubuntu server (VirtualBox)

    Addressing layout:

        HOST:             10.0.7.1
        VM:               10.0.7.101
        VM MAC NAMESPACE: 10.0.7.102

    On the VM, I ran the following commands:

        ip netns add mac                          # create a new namespace
        ip link add link eth0 mac0 type macvlan   # create a new macvlan interface
        ip link set mac0 netns mac

    In the mac namespace, inside the VM:

        ip link set lo up
        ip link set mac up
        ip addr add 10.0.7.102/24 dev mac0

    So we basically end up with (like Inception?):

        +------------------------+
        | Host: 10.0.7.1         |
        |                        |
        | +--------------------+ |
        | | VM: 10.0.7.101     | |
        | |                    | |
        | | +----------------+ | |
        | | | NS: 10.0.7.102 | | |
        | | +----------------+ | |
        | +--------------------+ |
        +------------------------+

    What works:
      - ping between Host and VM
      - ping between NS and NS
      - dhclient from NS

    What does not work:
      - ping between NS and VM
      - ping between NS and Host

    Where I started to go nuts:
      - tcpdump on the host (the real machine) actually shows ARP requests AND replies
      - tcpdump in the NS shows ARP requests sent to the host
      - tcpdump on the VM makes the whole mess work (!) -- pings start getting answers when tcpdump is started on the VM ?!?

    So, I bet you were eager for it, my question is: how do I make it work? I suspect something's wrong with ARP on the macvlan inside the NS but I can't figure out what exactly... By the way, I did the same experiments with the mac0 interface directly on the VM (no namespace) and it worked flawlessly.
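
    The fact that starting tcpdump on the VM (which puts its interface into promiscuous mode) makes the pings work suggests that frames addressed to the macvlan's extra MAC address are being dropped before they reach the guest. One thing worth checking is the promiscuous-mode policy of the VirtualBox adapter attached to vboxnet1. A sketch, run on the host; the VM name "server" and the adapter slot nic2 are assumptions:

        # Allow the guest NIC to receive frames addressed to MACs other than its own,
        # e.g. the generated MAC of the macvlan interface inside the namespace.
        # The VM must be powered off for modifyvm to take effect.
        VBoxManage modifyvm "server" --nicpromisc2 allow-all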


  • Can't access site internally, but DNS works

    - by BloodyIron
    1) I have apache2 running a vhost for a website.
    2) This apache2 instance is already successfully set up, with other websites on it accessible internally and externally.
    3) I am using an internal bind9 server to resolve the new website's domain internally to the private IP. This bind9 server is not public facing, nor is it the master server on the internet.
    4) The DNS internally resolves to the right IP.
    5) Firefox reports "server not found".
    6) I have copied the config almost identically from other configs that are known to work (adjusting for proper paths, of course). In turn, I have reloaded and restarted apache2 repeatedly.
    7) I have an entry to forward the .org, .info and .net alternative TLDs to .com in the vhost config for this domain, and my browser goes from .org to .com despite note #5.
    8) /var/log/apache2/access.log shows when someone externally tries to access the site, but no activity is observed when someone tries to access it internally. Changing the log level does not appear to improve the situation.
    9) I am out of ideas; nothing appears to be wrong. Please help?

    To be explicit: why is this new site unreachable internally?

    I would like to clarify something, even though I have already outlined it. YES, I know this system is in a private network. NO, it is not going through a router. YES, I am using an internal DNS server (bind9) to resolve, and YES, it does resolve to the proper internal IP. YES, other websites on the same server, set up in the same way with internal resolution, work right now and have done for a while. Everything for this domain is set up the same as the other working domains as far as I can tell. The other working domains are internally AND externally accessible; this domain is currently only externally accessible. When I go to it internally, Firefox tells me "Server not found".
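
    Since the internal DNS resolves correctly but no internal hits ever show up in access.log, one way to separate name resolution from the web server itself is to request the site directly from an internal machine while forcing the Host header. A sketch, with 192.168.1.10 standing in for the server's private IP and newsite.com for the new domain (both hypothetical):

        # Does the vhost answer at all when DNS is taken out of the picture?
        curl -v -H "Host: newsite.com" http://192.168.1.10/

        # Does the name actually resolve to that IP from this client?
        nslookup newsite.com

    If the curl request works, the problem is on the resolution/browser side (stale cache, proxy auto-config, etc.); if it also hangs, the request never reaches Apache and something between the client and port 80 is dropping it.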


  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

      - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
      - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via Sharing folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users with distinct credentials and rights for each user.
      - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
      - Security: it must provide robust security.
      - Backup: the cloud service must reliably back up the files.
      - Versioning: change version history is a big plus, but not required.
      - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

      - Dropbox: has all except 1) access control, which is provided via Sharing folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user; and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821
      - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X
      - SpiderOak: has all except 1) transparent file system access, which is only available for 1 user
      - Amazon Cloud: doesn't offer 1) transparent file system access
      - Rackspace Cloud Drive: has all except 1) access control and 2) versioning

    I'll gladly include any clarifications or additional systems the community provides. Arthur


  • Gigabyte H55N-USB3: No video on HDMI

    - by newt
    I built a new PC with a Gigabyte H55N-USB3 / Intel Core i5 650. With a monitor plugged into the DVI port, everything works fine. I installed Windows 7 32-bit and enabled Remote Desktop connection. After that, I unplugged the monitor, plugged the machine into the network and installed everything else (drivers, programs, etc.) via RDP. However, when I try to use the HDMI port on my TV, nothing appears - neither during boot nor after Windows starts. The TV says there's "no signal" (if I remove the cable, the message changes to "check cable"). The cable is new, and it works fine with my home theater on the same TV (by the way, it is the cable that came bundled with the home theater). The video driver is the latest from Intel's site; anyway, this shouldn't be the problem, since there is no image even during boot. Any ideas or tips would be welcome. I'm googling around but have found nothing useful yet.


  • Write once, read many (WORM) using Linux file system

    - by phil_ayres
    I have a requirement to write files to a Linux file system such that they cannot subsequently be overwritten, appended to, updated in any way, or deleted - not by a sudoer, root, or anybody. I am attempting to meet the requirements of the financial services regulations for recordkeeping, FINRA 17A-4, which basically require that electronic documents are written to WORM (write once, read many) devices. I would very much like to avoid having to use DVDs or expensive EMC Centera devices.

    Is there a Linux file system, or can SELinux support the requirement, for files to be made completely immutable immediately (or at least soon) after write? Or is anybody aware of a way I could enforce this on an existing file system using Linux permissions, etc.? I understand that I can set read-only permissions and the immutable attribute, but of course I expect that a root user would be able to unset those. I considered storing data on small volumes that are unmounted and then remounted read-only, but then I think root could still unmount and remount them as writable again.

    I'm looking for any smart ideas, and in the worst case I'm willing to do a little coding to 'enhance' an existing file system to provide this, assuming there is a file system that is a good starting point, and then put in place a carefully configured Linux server to act as this type of network storage device, doing nothing else. After all of that, encryption on the files would be useful too!


  • How do I cancel windows server 2003 repair install?

    - by Kilgore2k
    System: Windows 2003 Server Enterprise.

    Scenario: the NTDS database is corrupt and all attempts to fix it with esentutl fail. I ran chkdsk, which seemed to repair a disk error and give access to the ntds.dit file, but esentutl still fails. (I attached the drive to a different server to run esentutl.)

    Error:

        Access to source database '[path to copy of]/ntds.dit' failed with Jet error -1022.
        Operation terminated with error -1022 (JET_errDiskIO, Disk IO error) after 0.170 seconds.

    This error occurs on any disk I copy the files to, including the original location in C:\WINDOWS\NTDS\.

    Now enter the "Stupid!" and "What was I thinking!?" part (must be the late hour...)

    Stupid: no updated backup - after restoring from a backup I get a network password error in an lsass error.

    What was I thinking!?: I started the repair install from the original CD, but the install fails since AD fails to start. Now I can't boot into any mode (safe mode, AD restore, etc.) nor complete the repair install. I would really like to avoid a fresh install, since I have the Exchange server on this DC and would rather migrate to a new server than have to start from scratch. Thanks!


  • Suggestions for Backup solution

    - by jiewmeng
    I am considering between:

      - Windows Home Server
      - a simple NAS
      - extra HDDs in my desktop

    By the way, I will be the main user. I am looking to fulfil the following needs:

      - Reliability (I am thinking RAID 1 or 5).
      - Not so prone to virus/malware infections (will using a separate NAS or home server help? A Windows Home Server box is still a Windows PC, just separated by the network, isn't it?)
      - Power efficiency (e.g. spin down disks when not in use).
      - Downloads (e.g. I may want to download big files/torrents overnight, and I may not want to use a full-powered PC for it. Does a full PC vs. a NAS differ enough in power usage to justify the cost of a new system, especially since I am the only user?)
      - Performance (I guess I would like to write/access my files fast; on second thought, maybe for backup I can forgo this? Maybe a WD Green HDD? But how much slower will it be? Plus, since I am the only user, I think the whole HDD will be mine?)


  • WAMP running extremely slow on WIndows 7

    - by JavaCake
    After two days of a tough fight trying to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly; the poor performance occurs both when the site is accessed locally (127.0.0.1) and from another computer on the intranet.

    First, to explain the system:

        WAMP version: Apache 2.2.22 - MySQL 5.5.24 - PHP 5.4.3
        XDebug 2.1.2, XDC 1.5
        phpMyAdmin 3.4.10.1, SQLBuddy 1.3.3, webGrind 1.0
        DocumentRoot: located on a network drive
        MySQL: InnoDB
        Pages: PHP, MySQL, AJAX, etc.

    The changes I have made so far in order to get better performance:

    Changed C:\windows\system32\drivers\etc\hosts:

        127.0.0.1 localhost
        127.0.0.1 127.0.0.1

    Modified my.ini:

        innodb_flush_log_at_trx_commit = 2

    Modified httpd.conf:

        EnableMMAP on
        EnableSendfile on

    Modified php.ini:

        realpath_cache_size = 4m

    How I measure performance is the overall load time of the page. I run the same site locally on my Mac OS X machine as well (MAMP), and there the typical front-page load time is 0.06 seconds, but on the Windows 7 machine it is 6-10 seconds. I have verified the load times with the developer tools in Chrome as well. Furthermore, the result is identical in XAMPP.
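
    Since XDebug and webGrind are installed, it is worth ruling out the profiler and trace features, which can add seconds per request when they fire on every page; this is only a guess at the cause, but the settings below are standard Xdebug 2.x options. A php.ini sketch that keeps the profiler available but only on demand:

        [xdebug]
        ; Do not profile every request; only when the XDEBUG_PROFILE trigger is passed
        xdebug.profiler_enable = 0
        xdebug.profiler_enable_trigger = 1
        ; Function tracing off unless explicitly needed
        xdebug.auto_trace = 0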


  • Synchronize Dreamweaver over an SSH tunnel using an SFTP connection

    - by Aeo
    Maybe... Just maybe... I'm asking too much here. Maybe I'm even barking up the wrong tree. I'm looking to essentially have Dreamweaver establish an SSH tunnel to one machine, and then use that connection to synchronize a site that is on another machine entirely.

    Now for some details: we've got two connections here at work - our office connection for day-to-day business, and a fancy connection hosting our web servers upstairs. For the most part they've been mutually exclusive until recently. We had been establishing an SFTP connection to synchronize our websites by going out over the office connection to the web and coming back in over the fancy connection to our servers upstairs.

    Recently-ish, we established a LAN connection to one of our servers, which makes a pleasant change in VNC connection quality. Thanks to Vinagre, this makes it really easy to connect to any of our servers over this LAN connection via an SSH tunnel for VNC. However, in spite of that new LAN connection, we still synchronize over the 'net: out the office connection and in on the fancy one upstairs.

    I'm looking to change this. I'd like to get Dreamweaver to first tunnel over our LAN connection to the servers, and then go from there to whatever connection it needs to. Am I asking too much?

    The current setup: Dreamweaver is installed on Windows XP, which is running within VirtualBox on top of Ubuntu 10.10. The network connection for VirtualBox is currently made in NAT mode, but could easily be switched to a bridged connection should it need be. The LAN connection is to 1 of 5 servers running CentOS 5.
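
    Dreamweaver itself cannot open an SSH tunnel, but it can be pointed at one that is already up: a local port forward from the Windows XP guest (or the Ubuntu host) through the LAN-reachable server to the target machine. A sketch, where lan-server, target-server and user are hypothetical names:

        # Forward local port 2222 through the LAN-reachable box to the SSH/SFTP port
        # of the server the site actually lives on
        ssh -N -L 2222:target-server:22 user@lan-server

        # or, from inside the Windows XP guest, the PuTTY equivalent
        plink -N -L 2222:target-server:22 user@lan-server

    Dreamweaver's SFTP connection would then be configured with host 127.0.0.1 and port 2222, and the synchronization traffic rides the tunnel over the LAN instead of going out over the office connection.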


  • SSH Connection Error : No route to host

    - by dewbot
    There are three machines in this scenario:

        Desktop A : [email protected]
        Laptop A  : [email protected]
        Machine B : [email protected]

    All the machines run Ubuntu 11.04 (Desktop A is a 64-bit one) and have both openssh-server and openssh-client. Now, when I try to connect Desktop A to Laptop A or vice versa with

        ssh [email protected]

    I get an error, "port 22: No route to host", in both cases. I own both machines. If I try the same commands from my friend's machine, i.e. via Desktop B, I can access both my laptop and my desktop. But if I try to access Desktop B from my laptop or my desktop, I get "port 22: Connection timed out". I even tried changing the SSH port number in the ssh_config file, but with no success.

    Note that Laptop A uses a WiFi connection while Machine A uses an Ethernet connection, and Machine B is on an entirely different network. Laptop A and Desktop A share a router/nano receiver provided to me by my ISP, so both machines are connected to one router and can be accessed at the same time.

    Here is my ifconfig output for both machines.

    Laptop:

        wlan0     Link encap:Ethernet  HWaddr X:X:X:X:00:bc
                  inet addr:1.23.73.111  Bcast:1.23.95.255  Mask:255.255.224.0
                  inet6 addr: fe80::219:e3ff:fe04:bc/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:108409 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:82523 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:44974080 (44.9 MB)  TX bytes:22973031 (22.9 MB)

    Desktop:

        eth0      Link encap:Ethernet  HWaddr X:X:X:X:c5:78
                  inet addr:1.23.68.209  Bcast:1.23.95.255  Mask:255.255.224.0
                  inet6 addr: fe80::227:eff:fe04:c578/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:10380 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4509 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1790366 (1.7 MB)  TX bytes:852877 (852.8 KB)
                  Interrupt:43 Base address:0x2000
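
    "No route to host" on port 22 usually means the connection is rejected before the TCP handshake completes - either by routing or, very often, by a host firewall on the destination machine. A few quick checks worth running from Laptop A against Desktop A (the IPs come from the ifconfig output above):

        # Is the machine reachable at all?
        ping -c 3 1.23.68.209

        # Is anything answering on port 22?
        nc -zv 1.23.68.209 22

        # On the destination, is a firewall dropping or rejecting inbound SSH?
        sudo ufw status verbose
        sudo iptables -L -n -v

        # Is sshd actually listening on all interfaces?
        sudo netstat -tlnp | grep :22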


  • How can I create a simple Exchange 2010 backup solution?

    - by bduncanj
    I'm sure this question's been asked a dozen times in one form or another; however, after much searching, there doesn't appear to be an obvious, simple recovery solution for a single Exchange box.

    We're using Exchange 2010 on a single server; the server hosts the AD, and nothing else on the network uses the AD. The intent is to run this server as you would an externally hosted Exchange server - access only via HTTP (RPC mode or OWA), all other ports blocked. I have a daily backup running, using the Windows Server 2008 Volume Shadow Copy Service to back up the Exchange data to an external hard disk.

    My question is, how do I perform a bare-metal recovery of this server?

    1) Do I need to explicitly include the Active Directory information in this nightly backup, or will it be there by virtue of the fact that this system is the primary AD server and the Windows backup service knows this?
    2) I understand I can reinstall Server 2008 onto my new hardware (in the case of hardware failure) and then run Exchange 2010's setup.exe with a /recover argument, referencing the backup volume.
    3) It is acceptable to have some downtime during this recovery process. But is there anything else I should be aware of?

    Thanks! Duncan
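
    For the backup side, Windows Server Backup (wbadmin) can produce an Exchange-aware, VSS-full backup that also captures the system state needed for the AD half of a bare-metal restore, as long as the critical volumes are included. A sketch of a nightly job writing to an external disk mounted as E: (the drive letter is an assumption):

        wbadmin start backup -backupTarget:E: -include:C: -allCritical -vssFull -quiet

    -allCritical pulls in the volumes required for system-state/bare-metal recovery, and -vssFull is what allows the Exchange VSS writer to truncate its transaction logs after a successful run.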


  • Outlook Calendar Attachments to have limited access to just Required attendees

    - by Jason Pearce
    The management team at my company often attaches documents (Word, Excel, PDFs) to their Outlook Calendar meeting requests. The meeting requests are sent to the managers, but also to their assistants. The desire is to have everyone be able to view the full meeting request and its content, but limit the ability to open the attachments to just the managers. Is there a way in Outlook 2003 and/or 2007 to limit access to attachments that accompany meeting requests? Ideally, can access to the attachments be controlled by the "Select Attendees and Resources" window when selecting individuals from the Global Address List. Can those in the Required field have access to the attachments while those in the Optional or Resources fields not have access? My suggestion was to simply place all meeting attachments in a shared network folder that has read/write access limited to managers. They would then just place fully qualified links to those files in the body of the Meeting Request. While everyone would receive and see the links, only a few would have access. This, however, wasn't easy enough for them, so I'm looking for some other ideas.


  • G4 server running slow

    - by Abby Kach
    I have HP ProLiant ML350 servers. We have 8 remote locations where users connect and log on to our server through DynDNS to access our company ERPs for their day-to-day work. The base of our company ERPs is Oracle, for which we have a separate server. Now the problem is that day by day the load on the server is increasing, the speed is getting slower and slower, and users are facing a lot of issues. So I am planning to implement a SonicWall VPN. I ran a demo of SonicWall, but it was slower than the current speed over DynDNS.

    The configuration of my servers is as follows:

        Linux:             HP ProLiant 370, Intel Xeon 3.20 GHz, 150 GB (72 GB x 2), 3 GB RAM, SuSE
        Omega:             HP ProLiant 370, Intel Xeon 3.20 GHz, 300 GB (72.8 GB x 4) RAID 5, 4 GB RAM, Windows Server 2003 Enterprise Edition
        Storage box:       HP StorageWorks 1400, Intel Xeon 2.00 GHz, 4 TB (1 TB x 4) RAID 5, 2 GB RAM, Windows Server 2008 Enterprise Edition
        Domain & Terminal: HP ProLiant 350, Intel Xeon 3.20 GHz, 250 GB (72.8 GB x 3) RAID 5, 4 GB RAM, Windows Server 2003 Enterprise Edition

    Can someone help me with how I can speed up my network at the remote locations and reduce the speed problems?


  • What's the best way to telnet from a remote Windows PC without using RDP?

    - by Rob D.
    Three networks:

        10.1.1.0  - Mine
        172.1.1.0 - My branch office
        172.2.2.0 - My branch office's VOIP VLAN

    My PC is on 10.1.1.0. I need to telnet into a Cisco router on 172.2.2.0. The 10.1.1.0 network has no routes to 172.2.2.0, but a VPN connects 10.1.1.0 to 172.1.1.0, and traffic on 172.1.1.0 can route to 172.2.2.0. All PCs on 172.1.1.0 are running Windows XP. Without disrupting anyone using those PCs, I want to open a telnet session from one of those PCs to the router on 172.2.2.0.

    I've tried the following:

        psexec.exe \\branchpc telnet 172.2.2.1
        psexec.exe \\branchpc cmd.exe telnet 172.2.2.1
        psexec.exe \\branchpc -c plink -telnet 172.2.2.1

    Methods 1 and 2 both failed because telnet.exe is not usable over psexec. Method 3 actually succeeded in creating the connection, but I cannot log in because the session registers my carriage return twice. My password is always blank, because at the "Username:" prompt I'm effectively typing:

        Routeruser[ENTER][ENTER]

    It's probably time to deploy WinRM... Does anyone know of any other alternatives? Does anyone know how I can fix plink.exe so it only receives one carriage return when I use it over psexec?


  • Two internet connections at once in Windows 7

    - by webmasters
    I have a 3G wireless modem and I have a LAN - right now both are connected. I need a way to choose which applications will use the 3G connection and which applications will use the LAN. My operating system is Windows 7. How can I do this? Any ideas? Here is a route print - the 3G modem's IP is 10.81.132.96. Let's say, for example, that I want to map google.com to the 3G internet connection.

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway        Interface  Metric
                  0.0.0.0          0.0.0.0      192.168.2.1    192.168.2.102      20
                  0.0.0.0          0.0.0.0     10.81.132.97    10.81.132.111     286
             10.81.132.96  255.255.255.224          On-link    10.81.132.111     286
            10.81.132.111  255.255.255.255          On-link    10.81.132.111     286
            10.81.132.127  255.255.255.255          On-link    10.81.132.111     286
                127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
              192.168.2.0    255.255.255.0          On-link    192.168.2.102     276
            192.168.2.102  255.255.255.255          On-link    192.168.2.102     276
            192.168.2.255  255.255.255.255          On-link    192.168.2.102     276
                224.0.0.0        240.0.0.0          On-link        127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link    192.168.2.102     276
                224.0.0.0        240.0.0.0          On-link    10.81.132.111     286
          255.255.255.255  255.255.255.255          On-link        127.0.0.1     306
          255.255.255.255  255.255.255.255          On-link    192.168.2.102     276
          255.255.255.255  255.255.255.255          On-link    10.81.132.111     286
        ===========================================================================
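
    Windows 7 has no per-application routing, but per-destination routing can be done by adding host routes that send a particular destination out via the 3G gateway. A sketch run from an elevated prompt; 74.125.224.72 is only a placeholder for whatever google.com actually resolves to (and Google uses many addresses), while the gateway 10.81.132.97 comes from the route print above:

        rem Find the current address for the destination
        nslookup google.com

        rem Send traffic for that address out via the 3G gateway (-p makes it persistent)
        route -p add 74.125.224.72 mask 255.255.255.255 10.81.132.97 metric 1

        rem Verify the new host route is present
        route print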


  • torrent downloads not showing on Squid log

    - by noobroot
    Hello, I have only a few months of working as a sysadmin, hence I still have lots to learn. The first thing I'd like to do is as follows: we have an OpenBSD 4.5 box acting as firewall, DNS, cache, etc. The box has two network cards, one connected directly to the internet and the other to our switch. I used to work with sarg for log analysis but then changed to the much faster free-sa. I use a daily free-sa report to check bandwidth usage and report our top 5 bandwidth consumers (3 days a week at #1 and you will be buying the pizzas :D - we are a small company, ~20 people, so we are very familiar with each other).

    This was working really well until recently. One of us needed to download some stuff via torrent (~3 GB), and since the pizza rule only applies to non-work-related downloads, he told me (verified) that his download was indeed work related, so I would dismiss that 3 GB from his quota. But to my surprise the log didn't show that 3 GB; his IP's consumption was only around 290 MB.

    More recently, since the FIFA World Cup started, we know that some of the employees are watching the matches' streams. We know it and we don't care, since, as already stated, we are a small company and we don't have restrictive policies - we can all chat, watch YouTube, download anything we want, BUT we are only allowed 300 MB a day, otherwise you end up on the top-5 pizza board. Anyway, that streaming consumption is also not showing up in the free-sa reports.

    So my question is: why is this data being excluded from the reports? I'm thinking that the free-sa reports only list certain types of things, but I'm also wondering whether the Squid logs are the ones that are not, erm... logging these connections. Any help, guide, advice or clarification is appreciated.


  • Apache suddenly very slow on http and faster on https

    - by hsnm
    Background: I have Apache 2 running on Ubuntu. There is low usage on it, and it is mostly accessed for a web service URL from mobile apps. It was working fine until I installed SSL certificates; I now have both HTTP and HTTPS. When I access the server using HTTPS, I get a fairly quick response (though probably not as fast as before). When I use HTTP, it's very slow.

    What I tried and observed:

      - Following this post, I curled localhost from the host itself, and it also takes a long time, meaning there is no routing issue.
      - The server runs on an Amazon EC2 instance and is managed by me only.
      - I see that Apache, once running, creates the maximum number of processes it is allowed to, which was not the case before. I lowered MaxClients to 20 and I think I'm getting faster responses, but requests still take over a minute, and I always have MaxClients Apache processes.
      - dmesg returns many lines like: [ 1953.655703] TCP: Possible SYN flooding on port 80. Sending cookies.
      - When I run netstat I get many entries in the SYN_RECV state. Possibly a DDoS attack?
      - From EC2's monitoring graphs I see a pattern of high "Maximum Network In (Bytes)" since 2 days ago. By the way, the server is still being tested; the actual traffic is very low and not consistent.
      - I tried to go with this solution to limit incoming connections using iptables - still no luck, but I'm trying.

    Question: what could be the problem? Is this a DDoS attack?
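
    Whatever the source of the SYN flood turns out to be, a couple of standard mitigations can keep Apache responsive while investigating. A sketch; the per-source limit of 20 connections is an arbitrary example value:

        # Make sure SYN cookies are on (Ubuntu usually enables this by default)
        sudo sysctl -w net.ipv4.tcp_syncookies=1

        # Cap how many connections a single source address may open to port 80
        sudo iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset

        # See which addresses are sitting in SYN_RECV
        sudo netstat -tn | awk '$6 == "SYN_RECV" {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head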


  • Setting up apache vhost for Icinga

    - by DKNUCKLES
    It's been a while since I've worked with Apache, so please be kind - I'm also aware of this question but it hasn't been much help to me. I'd like to set up a simple vhost with Apache for my Icinga instance. Icinga is up and running and I can access it at x.x.x.x/icinga; however, I would like to be able to access it externally as well as internally. I have set up the /etc/hosts file, and the following is my bare-bones vhost statement in httpd.conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com
            ErrorLog logs/icinga.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    I also have the following in my .htaccess file:

        <Directory>
            Allow From All
            Satisfy Any
        </Directory>

    An entry has been made for the instance on the Windows DNS server on my network, but when I try to access the site by URL I am greeted with "Internal Server Error". Reviewing /var/log/icinga.com-error_log I see the following entry:

        [Thu Dec 13 16:04:39 2012] [alert] [client 10.0.0.1] /usr/share/icinga/.htaccess: <Directory not allowed here

    Can someone help me spot the error of my ways?
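
    The log line itself points at the immediate problem: <Directory> sections are not valid inside .htaccess files, only in the main configuration, so Apache aborts the request with a 500. A sketch of the same access rules moved into the vhost (Apache 2.2 syntax, names and log paths as in the config above), with the <Directory> block removed from .htaccess:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /usr/share/icinga
            ServerName icinga.domain.com

            <Directory /usr/share/icinga>
                Order allow,deny
                Allow from all
                Satisfy Any
            </Directory>

            ErrorLog logs/icinga.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>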


  • Apache Virtual Hosts behind Cisco Router

    - by Theo
    I'm setting up an Apache 2.2 Ubuntu web server for internal services that is also supposed to be accessible from outside our LAN. Our LAN has a single external IP, which is the external IP of our RV042 Cisco router. We have set up several A records on our external DNS server that point to this IP, and our internal DNS server resolves the same records to the internal IP of our web server, so computers inside the network can access them using the same addresses as if they were outside. We forwarded the router's external port 80 to our web server's port 80.

    I have set up one virtual host for each domain name on our list, and my httpd.conf is something like this:

        ServerName web.domain.com
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName alfresco.domain.com
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass /alfresco http://localhost:8080/alfresco
            ProxyPassReverse /alfresco http://localhost:8080/alfresco
            ProxyPass /share http://localhost:8080/share
            ProxyPassReverse /share http://localhost:8080/share
        </VirtualHost>

        <VirtualHost *:80>
            ServerName crm.domain.com
            DocumentRoot /var/www/sugarcrm
        </VirtualHost>

    Now, this works if we are in our LAN. However, if we are outside of our LAN we reach our web server's default page, which says:

        It Works! This is the default web page for this server.

    But we can't reach the virtual hosts, as if the domain name is not being preserved when the router forwards the packets to the web server. Am I doing something wrong? How can I check what is going on? What should the settings be to make this work from outside?
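
    The router only rewrites IP addresses; the Host header the browser sends passes through untouched, so if the name-based vhosts match internally they should match externally too. Two quick checks that usually narrow this down: dumping Apache's own view of its vhosts, and replaying an external-style request by hand (domain names as in the question, 203.0.113.10 as a placeholder for the router's real external address):

        # Which vhost does Apache consider the default, and on which address:port
        # is each name-based vhost actually bound?
        apache2ctl -S

        # Simulate an outside request: connect to the external IP but send the
        # expected Host header
        curl -v -H "Host: crm.domain.com" http://203.0.113.10/

    If curl from outside lands on the default page while the same request against the internal IP hits the right vhost, the usual suspects are a NameVirtualHost/VirtualHost address mismatch (e.g. vhosts bound to a specific internal IP) or the router forwarding to a port the vhosts are not listening on.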


  • Unable to set initcwnd on a Hetzner server

    - by Sergi
    We just ordered a bunch of Hetzner EX40SSD servers with the minimal Debian install image they provide, and everything is fine except that, looking at tcpdumps from various locations while fine-tuning the network, the initcwnd parameter seems to be stuck at 6 no matter how we change it. By default, Debian 3.2 kernels should have that setting at 10, so this is pretty strange. Is it possible that the NIC driver or a custom setting in the Hetzner Debian image is limiting this parameter? Even if we set it to 4, like the old kernel default, it doesn't take effect. Any ideas would be much appreciated!

    Does anyone know if the NIC drivers provided by default with Debian have some kind of limitation? In a long thread at http://www.webhostingtalk.com/showthread.php?t=1200617&highlight=hetzner they mention a page, http://wiki.hetzner.de/index.php/Installation_des_r8168-Treibers/en, where Hetzner states that the included Realtek r8168 driver is not working properly, but nowhere do they say that initcwnd could be affected. Tomorrow I will try to install a CentOS image and see if Debian is the problem... As a last resort I would install a custom Debian image, but that is a pain in the ass! Thanks!
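
    For reference, on a Debian 3.2 kernel initcwnd is set per route rather than via sysctl, so it is worth double-checking that the option is actually attached to the default route the server uses. A sketch, with 203.0.113.1 standing in for the real gateway and eth0 as an assumed interface name:

        # Show the current default route (and any initcwnd already attached to it)
        ip route show

        # Re-add the default route with an explicit initial congestion window
        ip route change default via 203.0.113.1 dev eth0 initcwnd 10

        # Confirm the option is present
        ip route show | grep initcwnd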

