Search Results

Search found 25148 results on 1006 pages for 'distributed source contr'.


  • Tunneling over HTTP

    - by Morgan
    Hello, I have a network at work that is locked behind a firewall, and an Internet connection is available only through a proxy server. At work, I can connect to databases that are distributed across the network. However, at home, I cannot connect to the proxy server or the databases. How can this be done? I can access my workstation via LogMeIn, so I can install anything on it. I thought of installing some kind of tunneling mechanism on my workstation. Then, at home, I could connect to this mechanism, which would in turn make the required connections. So essentially, what I'd like to do can be represented by the following diagram: Home = Workstation = Database. For example, whenever I connect to, say, 10.140.0.1:1234 at home, the connection would be relayed to 10.140.0.1:1234 through my workstation, because 10.140.0.1:1234 is only reachable from the corporate network. NOTE: I'm using Windows XP.
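
    A minimal sketch of one common approach, assuming an SSH server (e.g. OpenSSH under Cygwin or freeSSHd) can be installed on the workstation; the hostname is a placeholder. With PuTTY's plink on the home machine, a local port forward would look like:

        plink.exe -L 1234:10.140.0.1:1234 user@workstation.example.com

    Anything connecting to localhost:1234 at home would then be relayed through the workstation to 10.140.0.1:1234 on the corporate network.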

    Read the article

  • Create a bootable USB to automatically repair Windows XP system32 files

    - by Edo Post
    Is it possible to create a script/live distro that replaces some system32 files? To explain in a bit more detail: there is a company that has many computers (think hundreds or thousands), and they are all missing the same system32 files, because the company's software removed them. The systems are distributed all over the world and are managed by "normal" people who don't have any knowledge of computers. I want to create a USB stick that I can mail to all those people, containing a script that executes when you boot from the USB. This script should replace the missing system32 files without any user input. Is this possible, and if so, how could I manage it?
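
    One way this is commonly done is with a bootable WinPE stick whose startup script copies the files into the offline Windows installation. A rough sketch, assuming the replacement files live in a "payload" folder on the stick (X: is WinPE's own RAM drive; the drive letters and folder name are assumptions):

        rem startnet.cmd (WinPE autostart) -- copy replacement files into the
        rem first partition that looks like a Windows XP installation
        for %%d in (C D E F) do (
            if exist %%d:\WINDOWS\system32 copy /y X:\payload\*.* %%d:\WINDOWS\system32\
        )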

    Read the article

  • Postfix: outgoing mail over TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail over TLS (STARTTLS, in fact), but only for a specific destination. I tried "smtp_tls_policy_maps". It is the only line in my main.cf regarding TLS configuration, but it does not seem to work. Here is my main.cf:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my "tls_policy" file:

        gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried:

        gmail.com encrypt

    My wish is to use TLS only for the gmail domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS when possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because I can then see this line in the source of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to try to use TLS for the other domains... only for gmail... Am I missing something in my configuration? (I also tried with "hash:/etc/opt/csw/postfix/tls_policy", and it's the same.) Thanks a lot in advance
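
    One step that is easy to overlook with indexed map types like dbm or hash: Postfix reads the compiled map file, not the text file, so the map has to be rebuilt and the daemon reloaded after every edit. A sketch, assuming the paths from the question:

        postmap dbm:/etc/opt/csw/postfix/tls_policy
        postfix reload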

    Read the article

  • Steps to deploy a custom routing protocol

    - by user134589
    I'm a Ph.D. student researching a service-centric networking architecture with resource allocation on a large scale. What I'm looking to do is extend an existing routing protocol like OSPF with extra fields and some new message types that I need for communication between nodes. I want to manipulate the cost of a network link, and I want paths to be calculated as in OSPFv2/v3, but using the cost that my algorithms have calculated. What I have: the source code of OSPF from Quagga. I am assuming I can edit this code however I want, including packet structures, and create new types. Yes, I am aware it won't be easy, but this is a six-year research project and I am eager to develop something new, to move forward. What I need: I would like to know how I can deploy the edited OSPF source files I have (written in C) on any type of server. I have a large testbed environment available with hundreds of virtual nodes and pretty much any OS out there. So if I want to test my extended protocol, how do I make all the nodes in a network use it to communicate? I do not understand which parts of the kernel I need to edit here. I have been searching for days now and I am unable to find how to deploy a non-existing routing protocol without the use of an application-level framework. If somebody could push me in the right direction, that would be awesome. Note: I need this to be a routing protocol and not an application, since I want it to work on top of the network layer for performance reasons. Thanks!
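
    For what it's worth, a Quagga routing daemon runs entirely in user space and installs its computed routes into the kernel forwarding table through the zebra daemon, so no kernel changes should be needed to deploy a modified ospfd. A rough sketch of building and running the patched daemons on each node (paths and config names are the Quagga defaults; adjust as needed):

        ./configure && make && make install
        zebra -d                                  # kernel-interface daemon
        ospfd -d -f /usr/local/etc/ospfd.conf     # the patched OSPF daemon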

    Read the article

  • How do you keep server documentation from getting out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP - it's an exercise in server administration. The setup has reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes to the system (say: inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation from the configuration files and the comments therein - one way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation, but rather just the diffs). How do other people keep their server documentation from getting outdated - is there a good way to keep the two in sync automatically, or do you just have the discipline to update the documentation at the same time you modify the system?
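
    The commit-hook idea might look something like this minimal sketch, assuming /etc is a git work tree and that regen-docs.sh is a hypothetical script that extracts the commented configuration into the AsciiDoc manual:

        #!/bin/sh
        # /etc/.git/hooks/post-commit -- regenerate the docs after each change
        /usr/local/bin/regen-docs.sh /etc /usr/local/etc > /root/server-setup.adoc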

    Read the article

  • Active Directory problems while trying to perform a compare operation

    - by Alex
    I have CentOS 5.5 with Apache 2.2 and SVN installed. I also have Windows 2003 R2 with Active Directory. I'm trying to authorize users via AD, so that each user has access to a repo if he is a member of the corresponding group in AD. Here is my Apache config:

        LoadModule dav_svn_module modules/mod_dav_svn.so
        LoadModule authz_svn_module modules/mod_authz_svn.so
        LDAPVerifyServerCert off
        ServerName svn.mydomain.com
        DocumentRoot /var/www/svn.mydomain.com/htdocs
        RewriteEngine On
        <Location />
            AuthType basic
            AuthBasicProvider ldap
            AuthzLDAPAuthoritative on
            AuthLDAPURL ldaps://comp1.mydomain.com:636/DC=mydomain,DC=com?sAMAccountName?sub?(objectClass=*)
            AuthLDAPBindDN [email protected]
            AuthLDAPBindPassword binduserpassword
        </Location>
        <Location /repos/test>
            DAV svn
            SVNPath /var/svn/repos/test
            AuthName "SVN repository for test"
            Require ldap-group CN=test,CN=ProjectGroups,DC=mydomain,DC=com
        </Location>

    When I use "Require valid-user", everything goes fine; "Require ldap-user" also works. But as soon as I use "Require ldap-group", authorization fails. There are no errors in the Apache logs, but Active Directory shows the following errors:

        Event Type: Information
        Event Source: NTDS LDAP
        Event Category: LDAP Interface
        Event ID: 1138
        Date: 10/9/2010
        Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal event: Function ldap_compare entered.

        Event Type: Error
        Event Source: NTDS General
        Event Category: Internal Processing
        Event ID: 1481
        Date: 10/9/2010
        Time: 1:28:52 PM
        User: MYDOMAIN\binduser
        Computer: COMP1
        Description: Internal error: The operation on the object failed.
        Additional Data
        Error value: 2
        0000208D: NameErr: DSID-031001CD, problem 2001 (NO_OBJECT), data 0, best match of: 'DC=mydomain,DC=com'

    I'm confused by this problem. What am I doing wrong?
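
    One thing worth checking (an assumption, not a confirmed fix): with mod_authnz_ldap in Apache 2.2, the ldap-group check compares the user's entry against the group's "member" attribute, and that behaviour is controlled by two directives that can be set explicitly inside the Location block:

        AuthLDAPGroupAttribute member
        AuthLDAPGroupAttributeIsDN on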

    Read the article

  • KVM and libvirt: How to configure a new disk device for an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml, two disk devices are defined like this:

        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk0.qcow2'/>
            <target dev='hda' bus='ide'/>
          </disk>
          <disk type='file' device='disk'>
            <source file='/vserver/machine1/disk1.qcow2'/>
            <target dev='hdb' bus='ide'/>
          </disk>

    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted) - but only as a last resort. Hopefully it's something very simple I'm missing?
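
    A sketch of the extra device (file name and size are assumptions). First create the backing file:

        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G

    then add the device to the XML:

        <disk type='file' device='disk'>
          <source file='/vserver/machine1/disk2.qcow2'/>
          <target dev='hdc' bus='ide'/>
        </disk>

    Note that libvirt does not pick up hand edits to the files under /etc/libvirt/qemu on a plain reload; the definition has to be re-imported before the VM is restarted:

        virsh define /etc/libvirt/qemu/machine1.xml
        virsh shutdown machine1
        virsh start machine1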

    Read the article

  • How much power supply do I need for my server, and could a shortage be causing my odd crashing?

    - by dolan
    I have 5 servers, all with similar hardware (i7, four 2 TB 7200 rpm drives, two 4 TB 5400 rpm drives, 430 W power supply), and lately the machines have been freezing up. This has gotten worse in the last day or so, and I can't pinpoint any explanation. One recent change was adding the two 4 TB hard drives. The crashes happen most often while running a large Hadoop job, so I originally thought the load was causing some issues, but last night one server froze without any heavy load on the box (or so I think), other than HDFS (Hadoop's distributed file system) probably rebalancing itself, since two of the five nodes were offline. If I plug a monitor and keyboard into one of these frozen machines, I can't get any response or feedback on the screen. Any ideas on possible points of failure and/or different logs I can look at to investigate? Thanks. Edit: The systems are running Ubuntu 10.04. Edit 2: More on the hardware:

        Intel Core i7-930 Bloomfield 2.8 GHz processor (quad core)
        12 GB (6 x 2 GB) Kingston DDR3 1333 RAM
        Antec EarthWatts Green 430 W power supply
        MSI X58M LGA 1366 motherboard
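
    A few places to look after the next freeze - a sketch, assuming the stock Ubuntu 10.04 log locations and that smartmontools and lm-sensors are installed:

        grep -iE 'mce|error|panic' /var/log/kern.log   # machine-check and other hardware events
        sudo smartctl -a /dev/sda                      # drive health; repeat per disk
        sensors                                        # temperatures and voltages under load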

    Read the article

  • IIS6 site using integrated authentication (NTLM) fails when accessed with Win7 / IE8

    - by Ciove
    Hi, I'm having pretty similar problems to those described in case 139099, but the fix there doesn't seem to work for me. Here are the details:

    Server:
        Win2003 Server R2 SP2 (standalone, not a member of a domain)
        IIS6, TCP/443 (https)
        Anonymous access disabled
        Integrated Windows authentication enabled
        Local user accounts; each user account has its own virtual folder with change access, and read access to the site root
        The 'adsutil NTAuthenticationProviders "NTLM"' thing set on the site root and on each user account's virtual folder

    Client:
        Win7 Enterprise, member of an AD domain
        IE8
        Allows three login attempts, then fails
        Using [webservername][username] in the logon window (Windows security)
        Logon using other browsers (Chrome and Firefox) works OK

    The web services log shows one 401.2 and two 401.1 events. The Security event log shows two events; the first is Failure Audit (680), the second is Failure Audit (529) with these details:

        Logon Failure:
        Reason: Unknown user name or bad password
        User Name: [username]
        Domain: [webservername]
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: [MyWorkstation]
        Caller User Name: -
        Caller Domain: -
        Caller Logon ID: -
        Caller Process ID: -
        Transited Services: -
        Source Network Address: [999.999.999.999]
        Source Port: 20089

    Any ideas appreciated.
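
    One setting often implicated when Win7/IE8 NTLM logons fail against older standalone servers (a guess, not a confirmed fix): the LAN Manager authentication level - Win7 sends NTLMv2 only by default. It lives under Local Security Policy > Network security: LAN Manager authentication level, or in the registry:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel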

    Read the article

  • Testing performance from around the world - how do I get a Linux shell easily in multiple countries?

    - by Matthew O'Riordan
    We are building a socket-based service where latency is paramount, and as such we have servers distributed across 7 data centres around the world. However, whilst we know we're bringing the servers closer to the clients, it's very difficult to know how effective this is and, importantly, what difference this makes compared to our competitors. As such, we want to run simple scripts that test latency and throughput for both our service and our competitors', which is easy enough using Amazon; however, Amazon only has 7 data centres. We would like to know, for example, how we perform in locations all over the world, such as South Africa, Australia, China, Peru, etc. Does anyone know of any service where we could piggyback off their global infrastructure and run some scripts to test this performance? The obvious contenders are people like Monitis, but I don't think they would allow us to run custom scripts, only standard protocol monitors. Thanks for your help. Matt

    Read the article

  • Problem with Ubuntu 10.10 running from a USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10, created a USB drive with it, and started running Ubuntu from that USB drive. But I am facing a lot of problems, and I wonder why it isn't as easy as Windows to do all my work in Ubuntu. I always get some error message or have to install something. This time I am getting the following errors. I am trying to download and install Aircrack-ng, so I used the command sudo apt-get install aircrack-ng. But the installation stops with the following error:

        update-initramfs: deferring update (trigger activated)
        cp: cannot stat `/vmlinuz': No such file or directory
        dpkg: error processing bcmwl-kernel-source (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         initramfs-tools
         bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I don't even have the aptitude command installed yet. Are all these errors because I am running Ubuntu from a USB drive? Is there any simple and easy way to go to the Ubuntu Software Center and download all the required essentials in one shot, and then Aircrack-ng? I could not find Aircrack-ng in the Ubuntu Software Center. Can anybody give me detailed steps to solve the problems above? I am frustrated with searching for updates and installations, when something works and something doesn't. Can anybody suggest how I should proceed after installing Ubuntu to run from a USB drive, so that I can use the OS like Windows? Software downloads, wireless drivers, sound, video, documents, C:, D: - all those things should be there. Please, somebody help.

    Read the article

  • Video codec that can be read on clean installs of Windows, OS X and Ubuntu

    - by fmercille
    I have to make a video that will need to be watched on different operating systems. Is there a "universal" video codec that can be played on Windows, OS X and Linux without requiring additional plugins or players other than those that come with a default clean install of each of those systems? Compression is not an issue; I'm merely looking for compatibility (e.g. for audio, I would use WAV as a universal format). Note: I must assume that the video will be distributed in countries where software patents are enforced, and therefore I can't rely on the user to install non-free codecs on Linux. Thanks.

    Read the article

  • How to execute a command on multiple hosts using IPv6 only?

    - by math
    First of all, there is pdsh, which is essentially a parallel distributed shell that can execute commands on a list of given hosts. However, I find myself in an IPv6-only problem setting, and it seems that pdsh is not able to use IPv6; I am getting error messages:

        pdsh -w ^hostnames my_command
        pdsh@myhost: gethostbyname("foobar") failed

    I also tried using IPv6 addresses only, which didn't work either. So how do you run a single shell script for administrative purposes (no SGE stuff, or similar) on a bunch of hosts that are reachable only over IPv6?
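
    As a fallback while pdsh lacks IPv6 support, a plain loop over ssh works, since OpenSSH resolves AAAA records and accepts literal IPv6 addresses. A minimal sketch, assuming one host per line in the hostnames file and key-based logins:

        while read h; do
            ssh -n -6 "$h" my_command   # -n keeps ssh from eating the host list; -6 forces IPv6
        done < hostnames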

    Read the article

  • Deploying a git project and a permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place via a post-receive hook. The hook contains:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook cannot finish without error messages:

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    which means that git does not have sufficient rights. The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite, and with these permissions Redmine (which runs under the www-data user) cannot reach the git repository to fetch all changes. The whole project should be placed here: /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data. What changes should I make so that the post-receive hook in git works properly, and Redmine works with the repository? What I did is:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    which did not help. I want to have no problems with Apache (viewing the project), Redmine (reading the project's source files under git), and git (deploying to the www-data-accessible path). What should I do?
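
    As an aside, `exec` replaces the running shell with the given command, so in the hook above nothing after the first `exec` line ever runs - the chown and the final echo are never reached. A sketch of the hook without exec, under the assumption that the gitolite user is allowed (e.g. via a sudoers entry) to run the chown:

        #!/bin/sh
        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        chmod -R 750 "$WWW_ROOT"
        sudo chown -R www-data:www-data "$WWW_ROOT"   # requires a sudoers rule for gitolite
        echo "finished"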

    Read the article

  • How to get just value from database query in Excel?

    - by Corin
    I'm creating a spreadsheet as a collection point for information from a number of MS Access databases. I will run a query on each database to get a count of records in a particular table. Each database has the same structure but different content, as they are used in different situations. So the query returns a single value, rec_count. I've figured out how to create that query, save it, and then use it as the data source. So far so good. The problem is that Excel treats the query results as a table, so instead of getting just the single value the query returns, I also get the field name. Thus the result takes up two cells instead of one. When linking in the data source, I only see Table, PivotTable Report and PivotChart as options for viewing the data. I don't want any of those. I just want the single value, without any formatting, column headers, etc. Is there a way to do this in Excel 2007?

    Read the article

  • How to open a program on a particular desktop?

    - by Vi
    When I start a GUI program, its window appears on the currently active desktop (essentially, on a random desktop). How do I make it appear on a specified desktop? For example, at startup I want certain programs to be started and distributed across desktops. I've already set up the openbox config file to force some programs to always start on a specific desktop. Ideally it would be something like:

        start_on_desktop 1 gnome-terminal --tab -e program1 --tab -e program2
        start_on_desktop 2 gnome-terminal --tab -e program3 --tab -e program4
        start_on_desktop 3 firefox

    It should be able to start the same program on different desktops. I also dislike it when I start a program while on desktop X, then switch to desktop Y, and SUDDENLY a program that should be on X appears on Y. When I start lots of programs and switch often between desktops, they end up in chaos and I need to collect them together and redistribute them sanely. Also, I want the first initial gnome-terminal to be on desktop 3, but I want subsequent gnome-terminals to be on the desktop where I pressed the keystroke (also configured in openbox) that launches gnome-terminal.
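
    One way to approximate the wished-for start_on_desktop is wmctrl, which can switch desktops before each launch. A sketch for the openbox autostart file (program names are from the question; the sleep values are guesses to let each window map on the intended desktop):

        # ~/.config/openbox/autostart (sketch; wmctrl desktops are 0-based)
        wmctrl -s 0; gnome-terminal --tab -e program1 --tab -e program2 & sleep 2
        wmctrl -s 1; gnome-terminal --tab -e program3 --tab -e program4 & sleep 2
        wmctrl -s 2; firefox &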

    Read the article

  • Limit WSUS replication to only certain product classifications

    - by MDMarra
    I have four WSUS 3.0 SP2 servers that are geographically distributed. The server at our main site (we'll call it WSUS1), is the main WSUS server. All manual and auto-approvals happen here. The other three WSUS servers are replicas of this server. Currently, we are only controlling desktop OS updates through WSUS. I would like to control server OS updates through WSUS as well. There is no need for all of these server updates to be on WSUS servers at the remote sites. The only server that would need a copy of them is WSUS1. Is there a way to keep my current infrastructure as-is and add server OS updates only to WSUS1, even though the others are set up as replicas, or will I need to configure an additional WSUS server that's not replicated?

    Read the article

  • VirtualBox, merging snapshots and the base disk

    - by Henrik
    Hi, I have a virtual machine with about 30 snapshots in branches. The current development path is 22 snapshots plus the base disk. The number of files now seems to have an impact on I/O on the dev laptop I'm using (I don't know if it is host disk performance issues with the 140 GB total size spread over a lot of fragments, or just the fact that it is hitting sectors distributed across a lot of files). I would like to merge the current development branch of snapshots together with the base disk, but I am unsure whether the following command produces the correct outcome - I am not able to boot the disk after the procedure completes (5-6 hours):

        vboxmanage clonehd "C:\VPC-Storage\.VirtualBox\Machines\CRM\Snapshots\{245b27ac-e658-470a-b978-8e62137c33b1}.vhd" "E:\crm-20100624.vhd" --format VHD --type normal

    Could anyone confirm whether this is the correct approach?
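
    An alternative worth knowing (a sketch, assuming the machine is named CRM): deleting a snapshot in VirtualBox merges its differencing image into the chain, so repeatedly deleting the snapshots along the current branch collapses it toward the base disk without a separate clone step:

        VBoxManage showvminfo "CRM"                     # lists the snapshot tree and names
        VBoxManage snapshot "CRM" delete "<snapshot name>"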

    Read the article

  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it - just pure routing. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, then no more connections can be created for several minutes, then another 100 connections can be created, and there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximum numbers go up about 10 times, so the problem is probably with many connections from different IPs. As the traffic is simply routed, this shouldn't have to do with the number of file descriptors, iptables connection tracking, and the other things often proposed for checking in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
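
    The numbers (a stall at roughly 500 distinct IPs, then progress in small bursts) look suspiciously like the ARP neighbour table filling up - its default garbage-collection thresholds are 128/512/1024 entries. This is only a hypothesis, but it is cheap to check and raise:

        dmesg | grep -i 'neighbour table overflow'
        sysctl net.ipv4.neigh.default.gc_thresh1 \
               net.ipv4.neigh.default.gc_thresh2 \
               net.ipv4.neigh.default.gc_thresh3
        sysctl -w net.ipv4.neigh.default.gc_thresh3=4096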

    Read the article

  • iptables: How to combine DNAT and SNAT to use a secondary IP address?

    - by Que_273
    There are lots of questions on here about iptables DNAT/SNAT setups, but I haven't found one that solves my current problem. I have services bound to the IP address of eth0 (e.g. 192.168.0.20) and I also have an IP address on eth0:0 (192.168.0.40) which is shared with another server. Only one server is active, so this alias interface comes and goes depending on which server is active. In order to get traffic accepted by the service, a DNAT rule is used to change the destination IP:

        iptables -t nat -A PREROUTING -d 192.168.0.40 -p udp --dport 7100 -j DNAT --to-destination 192.168.0.20

    I also wish all outbound traffic from this service to appear to come from the shared IP, so that return responses will work in the event of an active-standby failover:

        iptables -t nat -A POSTROUTING -p udp --sport 7100 -j SNAT --to-source 192.168.0.40

    My problem is that the SNAT rule is not always applied. Inbound traffic creates a connection tracking entry like this:

        [root]# conntrack -L -p udp
        udp 17 170 src=192.168.0.185 dst=192.168.0.40 sport=7100 dport=7100 src=192.168.0.20 dst=192.168.0.185 sport=7100 dport=7100 [ASSURED] mark=0 secmark=0 use=2

    which means the POSTROUTING chain is not run, and outbound traffic leaves with the real IP address as the source. I am thinking I could set up a NOTRACK rule in the raw table to prevent conntracking for this port number, but is there a better or more efficient way to make this work? Edit - Alternative question: is there a way (in CentOS/Linux) to have an interface that can be bound to but not used, such that it can be attached to the network or detached when a shared IP address is swapped between servers?
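
    The NOTRACK idea from the question would look roughly like this (a sketch; both directions have to be exempted, and untracked packets bypass the nat table entirely, so the DNAT rule would no longer apply and the service would have to bind to the shared address itself):

        iptables -t raw -A PREROUTING -p udp --dport 7100 -j NOTRACK
        iptables -t raw -A OUTPUT     -p udp --sport 7100 -j NOTRACK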

    Read the article

  • Formal separation marker of syslog events?

    - by Server Horror
    I've been looking at RFC 5424 to find the formally specified marker that ends a syslog event, but unfortunately I couldn't find it. So if I want to implement a small syslog server that reacts to certain messages, what is the marker that ends a message? (Yes, commonly an event is a single line, but I just couldn't find that in the specification.) Clarification: I call it an event because I associate a message with a single line. An event could possibly be something like:

        Type: foo
        Source: webservers

    whereas a message to me is this:

        Type: foo Source: webservers

    http://tools.ietf.org/html/rfc5424#section-6 defines:

        SYSLOG-MSG = HEADER SP STRUCTURED-DATA [SP MSG]

    Neither STRUCTURED-DATA nor MSG tells me how these fields end. In particular, MSG is defined as MSG-ANY / MSG-UTF8, which expands to virtually anything. There's nothing that says a newline marks the end (or an 8 or an a, for that matter). Given the example messages (section 6.5): is each of the following one valid message, or 2 valid messages, depending on whether you say that a HEADER element must never occur in any MSG element?

    Literal whitespace:

        <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47

    Is this an end marker? \t stands for a tab:

        <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -\t<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47

    Is this an end marker? \n stands for a newline:

        <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 -\n<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47

    Is this an end marker? Either I'm misreading the RFC or there just isn't any mention. The sizes specified in the RFC just say what the minimum length is that I can expect to work with...

    Read the article

  • iptables drops some packets on port 80 and I don't know the cause

    - by Janning
    Hi, we are running a firewall with iptables on our Debian Lenny system. I'll show only the relevant entries of our firewall:

        Chain INPUT (policy DROP 0 packets, 0 bytes)
        target     prot opt in   out  source      destination
        ACCEPT     all  --  lo   *    0.0.0.0/0   0.0.0.0/0
        ACCEPT     all  --  *    *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
        ACCEPT     tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   tcp dpt:80 state NEW

        Chain OUTPUT (policy DROP 0 packets, 0 bytes)
        target     prot opt in   out  source      destination
        ACCEPT     all  --  *    lo   0.0.0.0/0   0.0.0.0/0
        ACCEPT     all  --  *    *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
        LOGDROP    all  --  *    *    0.0.0.0/0   0.0.0.0/0

    Some packets get dropped each day, with log messages like this:

        Feb 5 15:11:02 host1 kernel: [104332.409003] dropped IN= OUT=eth0 SRC= DST= LEN=1420 TOS=0x00 PREC=0x00 TTL=64 ID=18576 DF PROTO=TCP SPT=80 DPT=59327 WINDOW=54 RES=0x00 ACK URGP=0

    (For privacy reasons I have removed the IP addresses.) This is no reason for concern, but I just want to understand what's happening. The web server tries to send a packet to the client, but the firewall somehow comes to the conclusion that this packet is "UNRELATED" to any prior traffic. I have set the kernel parameter ip_conntrack_max to a value high enough to be sure that all connections are tracked by the iptables state module:

        sysctl -w net.ipv4.netfilter.ip_conntrack_max=524288

    What's funny about it is that I get one connection drop every 20 minutes:

        06:34:54 dropped IN=
        06:52:10 dropped IN=
        07:10:48 dropped IN=
        07:30:55 dropped IN=
        07:51:29 dropped IN=
        08:10:47 dropped IN=
        08:31:00 dropped IN=
        08:50:52 dropped IN=
        09:10:50 dropped IN=
        09:30:52 dropped IN=
        09:50:49 dropped IN=
        10:11:00 dropped IN=
        10:30:50 dropped IN=
        10:50:56 dropped IN=
        11:10:53 dropped IN=
        11:31:00 dropped IN=
        11:50:49 dropped IN=
        12:10:49 dropped IN=
        12:30:50 dropped IN=
        12:50:51 dropped IN=
        13:10:49 dropped IN=
        13:30:57 dropped IN=
        13:51:01 dropped IN=
        14:11:12 dropped IN=
        14:31:32 dropped IN=
        14:50:59 dropped IN=
        15:11:02 dropped IN=

    That's from today, but on other days it looks like this too (sometimes the rate varies). What might be the reason? Any help is greatly appreciated. Kind regards, Janning
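
    To narrow down why the state match classifies these packets as something other than ESTABLISHED, one hedged diagnostic is to log INVALID packets separately, ahead of the existing rules, so the two cases can be told apart in the logs:

        iptables -I OUTPUT 1 -m state --state INVALID -j LOG --log-prefix "OUT-INVALID: "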

    Read the article

  • How to allow simple file sharing on Windows Server 2008R2 through VPN

    - by Martin Wiboe
    We are a small, distributed company with a Windows Server 2008R2 installation. I would like to set up a way for our employees to connect securely to this server via VPN and then be able to map a network drive. I have gotten this to work somewhat by installing the Network Policy and Access Services role on the server and using the default settings. I have also created a network share on the server. The problem is that our connectivity is sporadic (sometimes the service stops listening on the port or simply refuses to authorize correct credentials) and slow. I can always connect through VPN, but mapping is problematic. I would be grateful for advice on how to accomplish this, as well as some guidance on whether I am on the right track. Thanks in advance!
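
    For reference, a minimal sketch of the client-side mapping once the VPN is up (server and share names are placeholders):

        net use Z: \\server.example.local\shared /persistent:yes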

    Read the article

  • How do I connect remotely to SQL Server from a Windows client?

    - by humble_coder
    Hi all, I'm having a bit of an issue connecting to SQL Server remotely from Windows. I've verified that all of my settings are correct via SQL Server Management Studio Express and SQL Server Configuration Manager. I can connect remotely using ODBC drivers from other OSes (e.g. OS X, Linux, etc). However, when I connect with the same credentials from a remote Windows machine using "SQL Server" as the driver, I am told that the system cannot connect. I've tried creating an ODBC data source and I get the same error:

        Connection failed:
        SQLState: '01000'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]ConnectionOpen(InvalidInstance()).
        Connection failed:
        SQLState: '08001'
        SQL Server Error: 14
        [Microsoft][ODBC SQL Server Driver][TCP/IP Sockets]Invalid Connection

    From the non-Windows machines I can use the IP address of the SQL Server just fine. However, on the remote Windows machine, neither the IP address nor the named instance works. FYI, I can create an ODBC data source using the named instance on the machine actually running SQL Server (but this is, of course, nothing special - just proof that it isn't completely hosed). One interesting note: if I use SQL Studio 2005 from a Windows client machine, I can use the IP address to connect remotely. Still, the whole reason I bring this up is that I need a software package I've written to connect to SQL Server remotely from Windows machines as well. Previously the solution was only needed to transfer data from SQL Server into a PostgreSQL or MySQL database on non-Windows machines (due to DBA preference). However, now they also want to move the data from the legacy software to MySQL on Windows. Any assistance would be most appreciated. Feel free to provide a full example connection string. Best
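
    Since the question asks for one, here is an example ODBC connection string for the legacy "SQL Server" driver (all values are placeholders; the "server,port" form connects directly to the given TCP port, which sidesteps the instance-name resolution that otherwise relies on the SQL Browser service):

        Driver={SQL Server};Server=192.168.1.50,1433;Database=mydb;Uid=myuser;Pwd=mypassword;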

    Read the article

  • VMware Workstation 8: sharing VMs without copying them to the "Shared VMs" location

    - by Stebi
    I have a host computer (Win7 64-bit) with three different hard drives. It runs several VMs, which are distributed among those hard drives for better performance. Now with VMware Workstation 8 it is possible to "share" the VMs. I'd like to use this feature, but I have to convert the VMs to "Shared VMs". When I try this, VMware forces me to copy the VMs to one predefined folder, thus making my effort of distributing the VMs among the hard drives meaningless. Is there any possibility to keep the VMs where they are and share them anyway?

    Read the article
