Search Results

Search found 44476 results on 1780 pages for 'wcf test client'.

Page 445 of 1780

  • DansGuardian/Squid Traffic doesn't get back to user

    - by DKNUCKLES
    I've purchased a Squid appliance that I'm attempting to implement, however the lack of documentation has left me a bit high and dry. Forgive me if this is a silly question, but this is my first attempt at implementing Squid. From what I can ascertain from the documentation (or lack thereof), the users connect to DansGuardian first at port 8080 where the filtering is done, at which point it forwards the traffic to the Squid appliance at port 3128. The traffic is then sent to the internet. The setup I have is as follows:
        Gateway (MikroTik router): 192.168.88.1
        Squid/DansGuardian:        192.168.88.100
        Client:                    192.168.88.238
        Client --- Gateway --- Proxy --- Internet
    I have set up a simple NAT rule to forward all traffic from the client machine (for testing purposes) to the DansGuardian. The traffic seems to get there, although I see a lot of SYN_RECV with a netstat -antp command on the virtual appliance machine. From this I gather that the traffic is NOT being routed back to the client machine.
        Proto Recv-Q Send-Q Local Address           Foreign Address          State     PID/Program name
        tcp    0      0     0.0.0.0:8080            0.0.0.0:*                LISTEN    -
        tcp    0      0     192.168.88.100:8080     192.168.88.238:55786     SYN_RECV  -
        tcp    0      0     192.168.88.100:8080     192.168.88.238:55787     SYN_RECV  -
        tcp    0      0     192.168.88.100:8080     192.168.88.238:55785     SYN_RECV  -
        tcp    0      0     192.168.88.100:8080     192.168.88.238:55788     SYN_RECV  -
        tcp    0      0     0.0.0.0:10000           0.0.0.0:*                LISTEN    -
        tcp    0      0     0.0.0.0:22              0.0.0.0:*                LISTEN    -
    Is this a routing issue or an issue with the Squid appliance?
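    For reference, the kind of gateway redirect described here is usually written as a MikroTik dst-nat rule. The sketch below is a reconstruction from the setup described (the actual rule is not quoted in the question); the SYN_RECV symptom is consistent with the proxy replying straight to the client instead of back through the gateway, which is why a companion src-nat/masquerade rule is often paired with it.
        # Hypothetical RouterOS rules -- a sketch of the described setup, not the asker's config
        /ip firewall nat add chain=dstnat src-address=192.168.88.238 protocol=tcp dst-port=80 action=dst-nat to-addresses=192.168.88.100 to-ports=8080
        # Often paired with a masquerade so the proxy's replies return via the gateway
        # rather than going directly to the client with an unexpected source address:
        /ip firewall nat add chain=srcnat dst-address=192.168.88.100 protocol=tcp dst-port=8080 action=masquerade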

    Read the article

  • Explorer.exe not starting after login on Windows Server 2003 (Terminal Services and console)

    - by Pepperoni Icecream
    When users log in to a Windows Server 2003 R2 server running Terminal Services they get a blank desktop. Upon inspection, explorer.exe is not running. When I log in as administrator, using either RDP or the console, I have the same issue. I can pull up Task Manager and start explorer.exe manually.
    I have another terminal server set up exactly the same way (same apps, settings, GPO, etc.). The only difference is that we deployed the Symantec Endpoint Protection client 11.0.5 on Friday. For some reason the working terminal server is still on 11.0.4, but the suspect server received the 11.0.5 client upgrade. I checked Event Viewer for any relevant explorer.exe entries to no avail. It seems that if SEP were preventing explorer.exe from starting at login, it would do the same for the domain admin starting explorer.exe from Task Manager. I disabled the SEP client and services on the server, issued smc -stop, and tried logging in again. Still no explorer.exe. So I'm not sure the client upgrade is relevant, but it is worth mentioning since that was the last system change.
    The two servers are members of an NLB group. I took the bad terminal server out of the group (actually stopped the host using NLB Manager) until the issue is resolved. Any help is appreciated.

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN, or installing GlusterFS/Lustre/HDFS/RBDB?

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be roughly a file-sharing/downloading project (like Rapidshare), and I would need large storage and good scalability; I would add new storage nodes as my project grows. I have come up with solutions for my project based on Lustre, GlusterFS, HDFS or RBDB.
    To start, I would have two servers: one running the GlusterFS client + web server + DB server + a streaming server, and the other acting as a GlusterFS storage node. (After some time I would add more node servers and client servers; I don't know yet how many new client servers to add, I will see later.)
    So I am thinking of working with GlusterFS. But I really wonder whether I need high-performance servers with large storage, or whether average/slower servers with large storage are enough. Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server properties (for both the clients and the nodes). I really don't know if I need large amounts of RAM and good CPUs for the nodes; I am sure I need them for the client servers.
    The files would be streamed as well, so automatic file replication is important. My system should work like a cloud: when needed, under high traffic, the storage nodes should copy the most-demanded files so they can be streamed, which would help me get rid of scalability problems and let my visitors stream/download those files.
    I am also open to your experiences/thoughts about any good solution. Lustre, HDFS and RBDB are the other options, and I would be happy to hear your thoughts on them here. I would be very happy to hear back from anyone on any of the points I have raised. Thanks.

    Read the article

  • VPN from Windows XP to OpenSwan: correct setup?

    - by Gnudiff
    The main question is: what am I doing wrong in my Openswan or L2TP client setup?
    I am trying to create a connection to a Linux Openswan VPN from a Windows XP machine, using a preshared key and the built-in Windows XP L2TP/IPsec client. I have followed the instructions in the Linux Home Networking wiki for setting up Openswan and a guide to making it work with the Windows XP client, but am now stuck.
    The network setup is as follows:
        [my windows client, private IP A] <-> [f/wall B] <- internet -> [g/w X] <-> [Linux OpenSwan server Y]
        A - private subnet /24
        B - internet address
        X - internet address /24
        Y - internet address on same subnet as X
    What I essentially want is for the computer with address A to feel and work as if it were in the X subnet, for purposes of outgoing and incoming TCP and UDP connections.
    My Openswan setup is as follows. /etc/ipsec.conf (AAA and YYY indicate the network parts of the A and Y addresses):
        conn net-to-net
            authby=secret
            left=B
            leftsubnet=AAA.AAA.AAA.0/24
            leftnexthop=%defaultroute
            right=Y
            rightsubnet=YYY.YYY.YYY.0/24
            rightnexthop=B
            auto=start
    The secret in /etc/ipsec.secrets is listed as:
        B Y : PSK "0xMysecretkey"
    where B and Y stand for the respective IP addresses of gateway B and Linux server Y.
    My L2TP Windows XP setup is:
        IP of destination: Y
        Don't prompt for username
        Security options: typical, require secured password, don't require data encryption, IPsec PSK set to 0xMysecretkey
        Networking options: VPN type: L2TP IPsec VPN; TCP/IP protocol (with automatic IP address assignment) and QoS packet scheduler enabled
    The error I get from the Windows client is 789: "error during initial negotiation".
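    For context, the stanza quoted above is a net-to-net (tunnel-mode) connection, whereas the built-in Windows XP client negotiates L2TP over IPsec transport mode. The widely published Openswan stanza for that case looks roughly like the sketch below; it is shown for comparison only and is not taken from the question.
        conn L2TP-PSK-noNAT
            authby=secret
            type=transport
            pfs=no
            auto=add
            rekey=no
            left=%defaultroute
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/%any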

    Read the article

  • cURL hangs trying to upload file from stdin

    - by SidneySM
    I'm trying to PUT a file with cURL. This hangs:
        curl -vvv --digest -u user -T - https://example.com/file.txt < file
    This does not:
        curl -vvv --digest -u user -T file https://example.com/file.txt
    What's going on?
        * About to connect() to example.com port 443 (#0)
        *   Trying 0.0.0.0... connected
        * Connected to example.com (0.0.0.0) port 443 (#0)
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: serialNumber=jJakwdOewDicmqzIorLkKSiwuqfnzxF/, C=US, O=*.example.com, OU=GT01234567, OU=See www.example.com/resources/cps (c)10, OU=Domain Control Validated - ExampleSSL(R), CN=*.example.com
        *   start date: 2010-01-26 07:06:33 GMT
        *   expire date: 2011-01-28 11:22:07 GMT
        *   common name: *.example.com (matched)
        *   issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
        * SSL certificate verify ok.
        * Server auth using Digest with user 'user'
        > PUT /file.txt HTTP/1.1
        > User-Agent: curl/7.19.4 (universal-apple-darwin10.0) libcurl/7.19.4 OpenSSL/0.9.8l zlib/1.2.3
        > Host: example.com
        > Accept: */*
        > Transfer-Encoding: chunked
        > Expect: 100-continue
        >
        < HTTP/1.1 100 Continue

    Read the article

  • Encryption setup for Linux NAS?

    - by Daniel
    There are a bazillion hard disk encryption HOWTOs, but somehow I can't find one that actually does what I want. Which is: I have a home NAS running Ubuntu, which is accessed by a Linux and a Windows XP client (hopefully Mac OS X soon...). I want to set up encryption for the home dirs on the NAS so that:
        - it does not interfere with the boot process (since the NAS is tucked away in a cupboard),
        - the home dirs are accessible as a regular file system on the client(s) (e.g. via SMB),
        - it is easy to use by 'normal' people (so it must not require SSH-ing to the NAS, mounting the encrypted partition on the command line, then connecting via SMB, and finally unmounting the partition afterwards -- I can't explain that to my mom, or in fact to anyone),
        - it does not store the encryption key on the NAS itself,
        - it encrypts file metadata and content (i.e. it is safe against the 'RIAA' attack, where an intruder should not be able to identify which songs are in your MP3 collection).
    What I hoped to do was use Samba + PAM. The idea was that on connecting to the SMB server, I'd have to enter the password on the client, which sends it to the server for authentication; the server would use the password to mount the encrypted partition, and would unmount it again when the session was closed. Turns out that doesn't really work, because SMB does not transmit the password in the clear, and hence I can't configure PAM to use the incoming password to mount the encrypted partition.
    So... is there anything I'm overlooking? Is there any way in which I can use the password entered on the client (e.g. on SMB connect) to initiate mounting the encrypted dir on the server?

    Read the article

  • Google: "302 Moved" in Firefox

    - by Virtlink
    For some Google search queries executed from the Firefox search bar, or typed in manually as URLs, I get a ''302 Moved'' page. I did a quick virus scan, and have checked the hosts file and the plugins and add-ons installed in Firefox. Nothing is out of the ordinary. What could be the problem?
    These URLs (and any URL with google.com, empty and firefox-a in them) show me a 302 Moved page:
        https://www.google.com/search?q=c%23+empty+array&client=firefox-a
        https://www.google.com/search?q=empty&client=firefox-a
    Whereas these URLs work fine:
        https://www.google.com/search?q=c%23+empty+array (no firefox-a)
        https://www.google.com/search?q=empty (no firefox-a)
        https://www.google.com/search?q=c%23+array&client=firefox-a (no empty)
        https://www.google.nl/search?q=c%23+empty+array&client=firefox-a (no .com)
    By default Google redirects my queries to its .nl website, and uses HTTPS. I am currently running a full system virus scan using Security Essentials. My Firefox plugins were up to date. No unfamiliar Firefox plugins or add-ons were found. Restarting Firefox did not solve the issue. The issue does not occur in Internet Explorer. The hosts file did not contain any unfamiliar entries.

    Read the article

  • Perl wrapper to start daemon leaves zombie when run by cron

    - by leonstr
    I've got a Perl script to start a process as a daemon, but when I call it from cron I'm left with a defunct process. I've stripped this down to a minimal script, starting 'tail' as a placeholder for the daemon:
        use POSIX "setsid";
        $SIG{CHLD} = 'IGNORE';
        my $pid = fork();
        exit(0) if ($pid > 0);
        (setsid() != -1) || die "Can't start a new session: $!";
        open (STDIN, '/dev/null') or die ("Cannot read /dev/null: $!\n");
        my $logout = "logger -t test";
        open (STDOUT, "|$logout") or die ("Cannot pipe stdout to $logout: $!\n");
        open (STDERR, "|$logout") or die ("Cannot pipe stderr to $logout: $!\n");
        my $cmd = "tail -f";
        exec($cmd);
        exit(1);
    I run this with cron and end up with:
        root 18616 18615 0 11:40 ? 00:00:00 [test.pl] <defunct>
        root 18617     1 0 11:40 ? 00:00:00 tail -f
        root 18618 18617 0 11:40 ? 00:00:00 logger -t test
        root 18619 18617 0 11:40 ? 00:00:00 logger -t test
    As far as I can tell it's the piping to logger that it doesn't like; if I send STDOUT and STDERR to /dev/null the problem doesn't occur. Am I doing something wrong, or is this just not possible? (CentOS 5.8) Thanks, leonstr
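    For comparison, the double-fork pattern commonly used for daemonizing from cron is sketched below. It is a generic illustration, not a confirmed fix for the script above (in particular it sidesteps the logger pipes, which is the part the question is actually about).
        # Generic double-fork daemonization sketch in Perl (illustration only)
        use POSIX qw(setsid);
        defined(my $pid = fork()) or die "fork: $!";
        exit 0 if $pid;                      # first parent returns to cron immediately
        setsid() or die "setsid: $!";        # new session, detached from the terminal
        defined($pid = fork()) or die "fork: $!";
        exit 0 if $pid;                      # session leader exits; grandchild carries on
        open STDIN,  '<',  '/dev/null' or die $!;
        open STDOUT, '>',  '/dev/null' or die $!;
        open STDERR, '>&', \*STDOUT    or die $!;
        exec 'tail', '-f' or die "exec: $!"; # placeholder daemon, as in the question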

    Read the article

  • Lotus Domino - DAOS not reducing file size?

    - by SydxPages
    I have implemented DAOS on a Lotus Domino server (8.5.3 FP2) as follows, in the Lotus Domino server document:
        Store file attachments in DAOS: Enabled
        Minimum size of object before Domino will store in DAOS: 64000 bytes
        DAOS base path: E:\DAOS
        Defer object deletion for: 30 days
    Transaction logging is running, and the specific test database has the following advanced properties set:
        Domino Attachment and Object Service (ticked)
        Use LZ1 compression for attachments
        Compress Database Design
        Compress Data
    I have restarted the server. When I run a compact -c, it compacts the database but does not reduce its size. I have checked the DB in Windows Explorer (60 GB) and the size is the same before and after. I have checked the directory (E:\DAOS) and it is 35 GB in size.
    When I run the command 'Tell DAOSMgr Status tmp\test.nsf', I get the following response. From looking this up on the net, I believe ticket count = 0 means that the DB is not really DAOS'ed?
        Admin Process: Searching Administration Requests database
        DAOSMGR: Status tmptest.nsf started
        DAOS database status:
        Database: E:\Lotus\Domino\Data\tmp\test.nsf
        Database state = Synchronized
        Last resynchronized: 03/09/2012 02:49:13 PM
        Ticket count: 0
        DAOSMGR: Status tmp\test.nsf completed
    I have run fixup on the database. When I have tried to run the DAOS estimator it has always crashed; this was a problem with larger databases on earlier versions of Domino, but not any more. Can anyone tell me why the size has not reduced? Am I missing anything?

    Read the article

  • Why am I getting blank error messages in my Apache error log?

    - by Jason Lamoreux
    I am running Apache 2.2 on 64-bit Windows Server 2008 Standard edition with ActivePerl 5.8.9. My error log is filling up with blank error messages like these:
        [Wed Mar 31 14:08:31 2010] [error] [client 10.6.1.164]
        [Wed Mar 31 14:10:32 2010] [error] [client 10.6.1.89]
        [Wed Mar 31 14:13:20 2010] [error] [client 10.6.1.131]
    By looking in the access log I can tell that it occurs when our client machines issue a GET to a very simple Perl script:
        #!perl.exe
        use strict;
        no warnings;
        $|=1;
        use CGI::Carp('fatalsToBrowser');
        use CGI qw(:standard);
        print header;
        my $CRLF = "\r\n<br>";
        my $Port = '10116';
        print "Success!${CRLF}PollInterval=5${CRLF}LMProMode${CRLF}Version=7${CRLF}ConnectionPort=$Port";
        exit;
    The weird thing is that it does not look like this error message is inserted every time a GET to this Perl script occurs. What could cause this error message to appear in the Apache error log?

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as Samba master PDC with an LDAP backend. One share holds database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow but works. The client computers run an executable which opens, reads and writes the database files from the server share.
    The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOW, with a maximum transfer rate of about 2500 kbit per second. If I disable LDAP the client app speed increases 20x, with a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here.
    The suspect: LDAP, and the smb.conf [global] section configuration.
    The question: what can I do? I've googled a lot, but still have no answer. The slow smb.conf with LDAP:
        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users

    Read the article

  • No option to select ASP.Net Version in IIS 6?

    - by GenericTypeTea
    I'm running Windows Server 2003 64-bit edition. I've just installed the .NET 4 Framework in order to get a new WCF service up and running. However, I have no option anywhere in IIS 6 for selecting the ASP.NET framework version. I.e. right-click > Properties on the website should show an ASP.NET tab from which I should be able to select v2 or v4.
    Does anyone know why it is not there and how I can make it appear? For the time being I've had to go into Website Properties > Home Directory > Configuration and change the .svc extension to use v4.0.30319 instead. So everything is now working for my WCF service, but every other extension is set to v2. How can I get the tab? It's not visible on any of my 23 websites.
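    As an aside, the per-site ASP.NET version can also be set from the command line with aspnet_regiis; the sketch below shows the usual invocation and is an assumption, not something taken from the post (W3SVC/1/ROOT is a placeholder metabase path for the site in question).
        rem Hypothetical example: register ASP.NET 4.0 script maps for a single site
        cd /d %windir%\Microsoft.NET\Framework64\v4.0.30319
        aspnet_regiis.exe -s W3SVC/1/ROOT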

    Read the article

  • Nagios command not transmitting all arguments

    - by markus
    I'm using the following service to monitor our Postgres DB from Nagios:
        define service{
            use                     test-service   ; Name of servi$
            host_name               DEMOCGN002
            service_description     Postgres State
            check_command           check_nrpe!check_pgsql!192.168.1.135!test!test!test
            notifications_enabled   1
        }
    On the remote machine I've configured the command:
        command[check_pgsql]=/usr/lib/nagios/plugins/check_pgsql -H $ARG1$ -d $ARG2$ -l $ARG3$ -p $ARG4$
    In the syslog I can see that the command is executed, but only one argument is transmitted:
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Running command: /usr/lib/nagios/plugins/check_pgsql -H 192.168.1.134 -d -l -p
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Command completed with return code 3 and output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Return Code: 3, Output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]
    Why are arguments 2, 3 and 4 missing?
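    The question does not show how the check_nrpe command itself is defined on the Nagios server. For illustration, a definition that forwards the remaining macros with -a generally looks like the sketch below (an assumption, not taken from the post); the NRPE daemon additionally needs dont_blame_nrpe=1 in nrpe.cfg before it will accept arguments at all.
        # commands.cfg sketch -- hypothetical; the real definition is not shown in the question
        define command{
            command_name check_nrpe
            command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$ $ARG3$ $ARG4$ $ARG5$
        }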

    Read the article

  • Windows clients cannot get DNS resolution until you open and close the IPv4 properties page

    - by GC78
    This strange problem started recently. Some Windows clients cannot seem to get DNS resolution to the internet after boot, and sometimes again at some point during the day. Internal hosts are also slow to resolve: pinging an internal host by name takes a long time for the hostname to resolve to an IP address, and pinging a website by name fails to resolve. If you go into the TCP/IPv4 properties, view but change nothing, then OK/close out of that, the client starts working fine and hostnames resolve quickly. I have seen this happen on both Vista and Windows 7 clients.
    ipconfig /all at a client experiencing this problem shows everything in order: proper IP address, gateway, DNS server, DNS suffix, etc. ipconfig /flushdns will not fix them; neither will /release and /renew.
    Some details of the environment:
        - The clients get their IP address, mask and DNS server info from either of 2 OES DHCP servers that assign addresses in different scopes on the same subnet.
        - The internal DNS server is a different OES DNS server.
        - The default gateway is not assigned by the OES server but is statically entered at the client (only for those who need to get to the internet for their job).
        - Flat network topology.
    What can I do to get to the bottom of this? It only happens to a few of the client machines, and typically the same ones. It started happening when we made a change to one of the DHCP scopes in iManager. Strangely, this problem only happens to clients that get an IP address from the scope that we didn't make any changes to.

    Read the article

  • Passive FTP Server Port Configuration Troubles Win2003

    - by Chris
    Win2003:
        - Ports 20 & 21 are open
        - IIS 6 - Direct Metabase Edit enabled
        - Configured the FTP service passive range to 5500-5550
        - 5500-5550 added to the Windows firewall
        - iisreset run, and double-checked by restarting the FTP service
    Nothing has changed: when I connect and enter passive mode, it still hangs whenever I try to LIST or transfer files. Active mode is just as useless.
        Microsoft Windows [Version 6.1.7600]
        Copyright (c) 2009 Microsoft Corporation. All rights reserved.
        C:\Users\user>ftp
        ftp> open x.x.x.x
        Connected to x.x.x.x.
        220-Microsoft FTP Service xxxxxxxxxxxxxxxxxx
        220 xxxxxxxxxxxxxxxxxx
        User (x.x.x.x:(none)): user
        331 Password required for user.
        Password:
        230-YOUR ACTIVITY IS BEING RECORDED TO THE FULLEST EXTENT
        230 User user logged in.
        ftp> QUOTE PASV
        227 Entering Passive Mode (82,19,25,134,21,124)
        ftp> ls
        200 PORT command successful.
        150 Opening ASCII mode data connection for file list.
    ...and it hangs. Now, I can see from Microsoft documentation that on newer Windows releases additional steps such as these are suggested, but they don't work on 2003:
        netsh advfirewall firewall add rule name="FTP Service" action=allow service=ftpsvc protocol=TCP dir=in
        netsh advfirewall set global StatefulFTP disable
    Is there anything I am missing, and what is this StatefulFTP malarkey at the end?
    EDIT: I can connect and transfer binary files using the WinSCP client, therefore the problem must be with my ftp commands, no? Can anyone see anything wrong with my Windows ftp client example? Why would it hang on ls? I tried QUOTE LIST as well, and that just hangs. The Windows ftp client doesn't work in active mode either; it hangs if I try to go "binary" and then put. This worked before I added 5500-5550 on the router. I have since added this range to the router, but there is no difference for the Windows ftp client.

    Read the article

  • Understanding Exchange User Monitor (ExMon) Output

    - by SturdyErde
    I recently downloaded and ran ExMon while trying to troubleshoot Outlook connectivity problems due to high CPU usage on Exchange Server 2010 SP2 UR8. The tool provides a great set of data, but I have not yet figured out how to make great use of it.
    My first question is why the Exchange server itself shows up as a high-use MAPI client in the ExMon data. Among the users' client versions I see build numbers listed for Outlook 2013, 2010, and yes, even 2007 clients. I also see build number 14.2.387.0, which represents Exchange Server 2010 SP2 Update Rollup 8 (+/- some other patch that makes it not quite match the UR8 number).
    There are many user rows that list only "::1" and/or the short hostname of my Exchange server in the 'Client IP Addresses' column. Some other columns include the end user's actual IP address and the Exchange server's IP address. ExMon shows that it is actually the Exchange server itself that accounts for the highest percentage of CPU used for MAPI calls.
    I had expected to see one IP address and one version number for each user reported by ExMon. Instead, most records show multiple version numbers (the Exchange version and the Outlook version) and multiple IPs (the Exchange IP and the client IP). Can anyone explain the reason for this to me, please?

    Read the article

  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded.
    An ancient FAQ alludes to an iptables -C option which allows one to ask something like "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this, for iptables (but maybe not ipchains, which it uses in its examples) the -C option seems not to simulate a test packet running through all the rules, but rather to check for the existence of an exactly matching rule. This has little value as a test: I want to assert that the rules have the desired effect, not just that they exist.
    I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology which can also work on a real server in production.
    How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?
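    One building block that can help with this kind of check (a suggestion, not something from the question) is netfilter's TRACE target in the raw table: it logs every rule a matching packet traverses, so a test script can fire a single probe with a packet generator such as hping3 and grep the kernel log for the verdict. A rough sketch, with 192.0.2.80 standing in for the DMZ web server and /var/log/kern.log assumed as the log destination:
        # On the router: trace packets headed for the (hypothetical) DMZ web server.
        # On some kernels the ipt_LOG / nf_log module must be loaded for TRACE output to appear.
        iptables -t raw -A PREROUTING -p tcp -d 192.0.2.80 --dport 80 -j TRACE
        # From a host on the LAN segment (a locally generated packet would traverse
        # OUTPUT rather than FORWARD, so the probe has to arrive on an interface):
        hping3 -S -c 1 -p 80 192.0.2.80
        # Back on the router: each traversed rule is logged as "TRACE: table:chain:type:rulenum".
        grep 'TRACE:' /var/log/kern.log | tail -n 20
        # Remove the trace rule afterwards.
        iptables -t raw -D PREROUTING -p tcp -d 192.0.2.80 --dport 80 -j TRACE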

    Read the article

  • simple apache2 reverse proxy setup not working

    - by Nick
    I know what a proxy is (at a very high level); it's just that I have never set one up, and it feels like I might be missing some big fat point here. My setup:
        client
        server (static IP), runs Apache on port 80
        proxy (has 2 network cards, one on the clients' network, the other with a static IP on the server network), runs Apache on port 80
    I am trying to configure these three machines so that when the client requests
        http://proxy/machine1
    it gets served the server's pages at the server root URL, i.e.
        http://server/
    I can access client pages just fine. However, when I try accessing a page from the client machine, it simply gets redirected to the server's IP address, which it clearly can't access since they are not on the same network:
        ...
        <meta http-equiv="REFRESH" content="0;url=http://server/machine1"></meta>
        <title>Redirect</title>
        ...
    My apache2 config is:
        LoadModule proxy_module /modules/mod_proxy.so
        LoadModule proxy_http_module /modules/mod_proxy_http.so
        ProxyRequests off
        <Proxy *>
            Order Allow,Deny
            Allow from all
        </Proxy>
        ProxyPass /machine1 http://server:80
        <Location /machine1>
            ProxyPassReverse /
        </Location>
    What gives? Thanks!
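    For comparison, the pairing usually shown in the mod_proxy documentation puts the full backend URL in ProxyPassReverse so that redirects coming back from the backend are rewritten to the proxy's address. A minimal sketch of that arrangement (not a verified fix for the setup above):
        ProxyRequests Off
        ProxyPass        /machine1/ http://server:80/
        ProxyPassReverse /machine1/ http://server:80/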

    Read the article

  • Server 2003 Terminal Services Printers not redirecting, no sessions created.

    - by mikerdz
    OK, odd scenario on a Windows Server 2003 Standard server running as a terminal server. On Friday I installed 2 new Windows 7 machines to replace older XP machines. After adding these machines and their local printers, none of the other 16 Windows 7 machines can redirect printing to the server.
    I have checked Group Policy on the domain controller; nothing is being blocked. In Terminal Services Manager, the client settings are set to Use Client Settings. On the RDP client, port redirection is enabled. I have tried disabling the Use Client Settings option and manually selecting the options for printer redirection and default printer connection, but it still does not work.
    After some research I found this MS article: http://support.microsoft.com/kb/2492632. I went ahead and added the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\Wds\rdpwd\fEnablePrintRDR DWORD that the article references and set it to "1" to enable the option. I restarted the server, but it still would not print.
    I am getting quite desperate with this issue because nothing seems to have changed when installing the two new clients and printers. I uninstalled the print drivers for the printers from the server. I have even gone as far as connecting each of the printers manually via UNC (\\computername\printer), but even though that works, it prints awfully slowly. Please help!!!!

    Read the article

  • Mac "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OS X, a self-update progress bar appears instead (at 0 of 30 MB or so). The bar does not advance, and an error dialog appears:
        Steam needs to be online to update
        Please confirm your network connection and try again.
    The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout. If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:
        Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
        [...]
        Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
        Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
        Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8
    Requesting the first URL successfully returns some data, including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OS X users.
    Things that have already been tried with no effect:
        - Switching between wifi and ethernet.
        - Killing all Steam processes including ipcserver.
        - Moving the ~/Library/Application Support/Steam/registry.vdf file away.
        - Requesting those URLs with other clients and from other locations.
    Interesting: the first URL with the date parameter returns the same content even without that parameter (and would thus lead to the same 404s), suggesting that the problem is not necessarily specific to coming from a particular currently-installed version of Steam.

    Read the article

  • Installing Zenoss on Ubuntu raises a "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss.
    To install Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst. When it tries to run the tests, the following error occurs:
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec
        Results :
        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0
        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.

    Read the article

  • Simulated NAT Traversal on Virtual Box

    - by Sumit Arora
    I have installed VirtualBox with two virtual NAT-type adapters; the host is Ubuntu 10.10 and the guest is openSUSE 11.4.
    Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing
    Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network.
    Full cone:
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source
        iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination
    Restricted cone:
        iptables -t nat POSTROUTING -o eth1 -p tcp -j SNAT --to-source
        iptables -t nat POSTROUTING -o eth1 -p udp -j SNAT --to-source
        iptables -t nat PREROUTING -i eth1 -p tcp -j DNAT --to-destination
        iptables -t nat PREROUTING -i eth1 -p udp -j DNAT --to-destination
        iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP
        iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP
    Port-restricted cone:
        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source
    Symmetric:
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables --flush
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
    What I did: an openSUSE guest with two virtual adapters, eth0 and eth1 -- eth1 with address 10.0.3.15 and eth1:1 as 10.0.3.16, eth0 with address 10.0.2.15 -- and then ran the stund client/server (http://sourceforge.net/projects/stun/):
    Server:
        eKimchi@linux-6j9k:~/sw/stun/stund> ./server -v -h 10.0.3.15 -a 10.0.3.16
    Client:
        eKimchi@linux-6j9k:~/sw/stun/stund> ./client -v 10.0.3.15 -i 10.0.2.15
    In all four cases it gives the same results:
        test I = 1
        test II = 1
        test III = 1
        test I(2) = 1
        is nat = 0
        mapped IP same = 1
        hairpin = 1
        preserver port = 1
        Primary: Open
        Return value is 0x000001
    Q1: Please let me know if anyone has ever done this. It should behave like a NAT as per the descriptions, but in no case is it working as a NAT.
    Q2: How is NAT implemented in home routers (usually port-restricted)? Are those also just pre-configured iptables rules on a tuned Linux?

    Read the article

  • Enabling CURL on Ubuntu 11.10

    - by Afsheen Khosravian
    I have installed curl:
        sudo apt-get install curl libcurl3 libcurl3-dev php5-curl
    and I have updated my php.ini file to include (I also tried .so):
        extension=php_curl.dll
    To test whether curl is working I created a file called testCurl.php which contains the following:
        <?php
        echo ‘<pre>’;
        var_dump(curl_version());
        echo ‘</pre>’;
        ?>
    When I navigate to localhost/testCurl.php I get an error: HTTP Error 500. Here's a snippet from the error log:
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/php_curl.dll' - /usr/lib/php5/20090626+lfs/php_curl.dll: cannot op$
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626+lfs/sqlite.so' - /usr/lib/php5/20090626+lfs/sqlite.so: cannot open sha$
        [Sun Dec 25 12:10:17 2011] [notice] Apache/2.2.20 (Ubuntu) PHP/5.3.6-13ubuntu3.3 with Suhosin-Patch configured -- resuming normal operations
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
        [Sun Dec 25 12:13:46 2011] [error] [client 127.0.0.1] File does not exist: /var/www/css, referer: http://localhost/
    Can anyone help me to get curl working?
    Update: The problem was with the original test code. I used a new test file containing the following, and curl is now working:
        <?php
        ## Test if cURL is working ##
        ## SCRIPT BY WWW.WEBUNE.COM (please do not remove) ##
        echo '<pre>';
        var_dump(curl_version());
        echo '</pre>';
        ?>
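    Worth noting as an aside (an observation, not from the post): the php_curl.dll naming in php.ini is Windows-specific. On Ubuntu the php5-curl package normally enables the extension through its own conf.d snippet, and a manual entry would use the .so form, along the lines of:
        ; Debian/Ubuntu-style php.ini / conf.d entry -- sketch, not taken from the question
        extension=curl.so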

    Read the article

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass virtual hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config:
        NameVirtualHost 10.10.0.205
        <VirtualHost 10.10.0.205>
            ServerName *.test
            VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>
        <VirtualHost 10.10.0.205>
            ServerName *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
            CustomLog /var/log/apache2/access.log vhost_combined
        </VirtualHost>
    A hostname such as www.domain.com.dev correctly resolves to 10.10.0.205, but always selects the top vhost instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible, and must I use another IP for each TLD?
    apachectl -S outputs (trimmed):
        10.10.0.205:*  is a NameVirtualHost
                default server *.test
                port * namevhost *.test
                port * namevhost *.dev
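    For reference, ServerName is matched as a literal hostname; wildcard matching is what ServerAlias is for. A sketch of that arrangement (the catch-all hostname is a placeholder, not taken from the question):
        <VirtualHost 10.10.0.205>
            ServerName dev.catchall.invalid
            ServerAlias *.dev
            VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/
        </VirtualHost>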

    Read the article

  • Ubuntu init.d script not being called on startup

    - by Mike
    I've got a script in init.d on Ubuntu 9.04 that I've set to run on startup with update-rc.d, using update-rc.d init_test defaults 99. All of the symlinks are there and the permissions appear to be correct:
        -rwxr-xr-x 1 root root 642 2010-10-28 16:44 init_test
        mike@xxxxxxxxxx:~$ find /etc -name S99* | grep init_test
        find: /etc/rc5.d/S99init_test
        find: /etc/rc4.d/S99init_test
        find: /etc/rc2.d/S99init_test
        find: /etc/rc3.d/S99init_test
    The script runs through source and ./ without issue and behaves correctly. Here is the source of the script:
        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          init test script
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO
        start() {
            echo "hi"
            echo "start called" >> /tmp/test.log
            return
        }
        stop() {
            echo "Stopping"
        }
        echo "Script called" >> /tmp/test.log
        case "$1" in
            start)
                start
                ;;
            stop)
                stop
                ;;
            *)
                echo "Usage: {start|stop|restart}"
                exit 1
                ;;
        esac
        exit $?
    When the machine starts, I don't see "Script called" or "start called" in /tmp/test.log at all. Is there anything obvious I'm messing up?
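    One detail worth illustrating (an observation, not something established in the post): in an LSB init header the Provides: field is expected to be a single facility token, conventionally the script's own name, rather than a free-text description. A sketch of that line for this script:
        ### BEGIN INIT INFO
        # Provides:          init_test
        ### END INIT INFO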

    Read the article
