Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.

Page 508/1048 | < Previous Page | 504 505 506 507 508 509 510 511 512 513 514 515  | Next Page >

  • Need to have access to my office PC from my laptop hopping through two VPN servers

    - by Andriy Yurchuk
    Here's an illustration of what I have (http://clip2net.com/s/2fvar):

    - My office PC, with its IP of 123.45.e.f.
    - The office VPN, which I will connect to from my VPS to get to my office PC.
    - My own VPS, which I use as: a client to connect to the office VPN (through vpnc, which creates a tun0 with the 123.45.c.d IP address); and a VPN server my laptop can connect to (OpenVPN, tun1, 10.8.0.1).
    - My own laptop, which I will use as a VPN client to connect to the VPS OpenVPN server (this will create a tun0 with the 10.8.0.2 IP address).

    Now what I have to do is allow my laptop to connect to at least my office PC, but preferably to the whole 123.45.x.x subnet. Please advise on how best to configure OpenVPN, routing, iptables or whatever else is needed on my VPS so that my laptop can gain access to my office PC.

    P.S. The reason I'm hopping through my VPS is that while connected to the office WiFi I cannot access my office PC and I cannot connect to the office VPN (which is another way to access my office PC). The only way I have to access my PC from the office WiFi is to hop through an outside network.
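
    For reference, a minimal sketch of the kind of VPS-side setup in question (hedged: the interface names follow the description above, and the /16 office mask is an assumption):

      # In the VPS OpenVPN server config, push the office route to connecting clients:
      #   push "route 123.45.0.0 255.255.0.0"

      # On the VPS, forward and masquerade laptop traffic out through the vpnc tunnel:
      sysctl -w net.ipv4.ip_forward=1
      iptables -A FORWARD -i tun1 -o tun0 -j ACCEPT
      iptables -A FORWARD -i tun0 -o tun1 -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE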

    Read the article

  • MySQL ODBC + SSL with only the SSL Cipher option?

    - by sdek
    Does anybody know how I can have an SSL-encrypted connection over MySQL ODBC without the cert options?

    I asked my web host to set up a MySQL+SSL connection so that we can access our website's database via ODBC or MySQL Query Browser (or the like). I am able to get an encrypted connection with the standard mysql client and MySQL Query Browser, but I can't get the ODBC connection to work. Looking for a little help...

    The way they set it up is a little different from the way I have read about on the interweb. The host didn't set up a cert, or at least I don't think so - I don't need to specify any cert options in my connection. I just need to specify the SSL cipher. Here is how I connect with the mysql client:

      mysql -h myhost.com -u myuser --ssl-cipher=3DES -p

    That works to get an encrypted connection. At least I am pretty sure it works, because when I run

      mysql> \s

    I get

      SSL: Cipher in use is EDH-RSA-DES-CBC3-SHA

    Also, when I put EDH-RSA-DES-CBC3-SHA into the SSL Cipher field of MySQL Query Browser (without specifying any other SSL options) it connects just fine. But when I try to do the same thing with the MySQL ODBC 3.51 and 5.1 drivers, I get a generic error. Here is the error from the 5.1 driver:

      Connection Failed: [HY000] [MySQL][ODBC 5.1 Driver]SSL connection error
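
    For reference, a hedged sketch of the ODBC side of this (SSLCIPHER is my understanding of the Connector/ODBC option name; a DSN-less string with host, database and credentials as placeholders):

      DRIVER={MySQL ODBC 5.1 Driver};SERVER=myhost.com;DATABASE=mydb;UID=myuser;PWD=secret;SSLCIPHER=EDH-RSA-DES-CBC3-SHA;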

    Read the article

  • Mysql Error 2002 (HY000) on Snow Leopard

    - by Ole Media
    My boss updated my computer to Snow Leopard. After the update we had a setback and deleted a few files/folders, and since then it has been one nightmare after another. I'm finally getting things back, but I'm still having problems with MySQL. This is what I did:

    - Deleted ALL of the MySQL files/folders
    - Downloaded and installed the packages from mysql-5.1.45-osx10.6-x86_64.dmg
    - Installed the startup item and the preferences panel

    After the above, I tried to start MySQL from the preferences panel without luck. Running the following command from Terminal

      /usr/local/mysql/bin/mysql

    I get the following result:

      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

    I looked at some other posts for possible solutions, but they don't exactly fit my problem, so I cannot find a solution. I'm new to all this and your help will be much appreciated.
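
    For reference, a hedged set of checks along these lines usually narrows this down (paths assume the standard layout of the mysql-5.1.45 OS X package):

      ps aux | grep mysqld                                    # is the server actually running?
      sudo /usr/local/mysql/support-files/mysql.server start  # try starting it from the command line
      ls -l /tmp/mysql.sock                                   # the socket should appear once mysqld is up
      /usr/local/mysql/bin/mysql --socket=/tmp/mysql.sock -u root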

    Read the article

  • VPN Setup: Mac OS X and SonicWall

    - by noloader
    I'm trying to get VPN access up and running. The company has a SonicWall firewall/concentrator and I'm working on a Mac. I'm not sure of the SonicWall's hardware or software level. My MacBook Pro is OS X 10.8, x64, fully patched.

    The Mac networking applet claims the remote server is not responding, and the connection attempt subsequently fails. This is utter bullshit, as a Wireshark trace shows the Protected Mode negotiation and then the fallback to Quick Mode.

    I have two questions: (1) does Mac OS X VPN work in real life? (2) Are there any trustworthy (non-Apple) tools to test and diagnose the connection problem (Wireshark is a cannon and I have to interpret the results)? And a third question (off topic): what is broken in Cupertino such that so much broken software gets past their QA department?

    EDIT (12/14/2012): The network guy sent me the "VPN Configuration Guide" (Equinox document SonicOS_Standard-6-EN). It seems an IPSec VPN now requires a Firewall Unique Identifier. Just to be sure, I revisited RFC 2409, where Main Mode, Aggressive Mode, and Quick Mode are discussed. I cannot find a reference to a Firewall Unique Identifier. I think I am screwed here: I am trying to connect to a broken (non-standard) firewall with a broken Mac OS X client. Fortunately, I can purchase VPN Tracker Personal (a {SonicWall|Equinox}-authored client) for $129 US from Equinox. So much for standards....

    Read the article

  • Setting Up My Home Network

    - by Skizz
    I currently have five PCs at home, three running WinXP and two running Ubuntu. They are set up like this:

      ISP ----- Modem ---- Switch ----+-- Ubuntu1 -- B&W Printer
                                      |-- WinXP1
                                      |-- WinXP2
                                      +-- Wireless --+-- Colour Printer
                                                     |-- Ubuntu2
                                                     +-- WinXP3 (laptop)

    The Ubuntu1 machine is set up as a PDC using Samba and runs fetchmail, procmail and dovecot to get my e-mail and allow me to access it via IMAP, so I can read the e-mail on any PC. I'd like to set up the network like this:

      ISP ----- Modem ---- Ubuntu1 ---- Switch ----+-- WinXP1
                             |                     |-- WinXP2
                        B&W Printer                +-- Wireless --+-- Colour Printer
                                                                  |-- Ubuntu2
                                                                  +-- WinXP3 (laptop)

    My questions are:

    - How to configure Ubuntu1 to act as a firewall.
    - How to configure Ubuntu1 to provide consistent user authentication across the network. At the moment Samba provides roaming profiles for the XP machines, but the Ubuntu2 machine has its own user lists. I'd like to have a single authentication for both the XP machines and the Linux machines, so that users added to the server list will propagate to all PCs (i.e. new users can log on using any PC without modifying any of the client PCs).
    - How to configure a Linux client (Ubuntu2 above) to access files on the server (Ubuntu1), some of which are in user-specific folders, effectively sharing /home/{user} per user (read and write access) and stuff like /home/media/photos with read access for everyone and limited write access.
    - How to configure the XP machines (if it is different from the Samba method).
    - How to set up e-mail filtering. I'd like to have a whitelist/blacklist system for incoming e-mails for some of the e-mail accounts (mainly, my kids' accounts), with filtered e-mails being put into quarantine until a sysadmin either adds the sender to a blacklist or a whitelist.

    OK, that's a lot of stuff. For now, I don't want config files*; rather, what services / applications to use and how they interact. For example, LDAP could be used for authentication, but what else would be useful to make the administration of the LDAP easier? Once I have a general idea for the overall configuration, I can ask other questions about the specifics.

    Skizz

    * I have looked around for information, but most answers are usually in the form of abstract config files and lists of packages to install.

    Read the article

  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>

    When I access a non-existent URL at my website, this rewrite rule gets hit, redirects to index.php, and serves up my custom 404.php template file. The status code that gets sent back to the client is the correct 404, as shown in this Live HTTP Headers output example:

      http://www.borngeek.com/nothere/
      GET /nothere/ HTTP/1.1
      Host: www.borngeek.com
      {...}
      HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

      {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was redirected to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc). I'd love to have them show up on the Apache side as 404s so that they get filtered out from the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous redirect rules):

      ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance?

    Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e. I'm not manually calling header() anywhere).
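
    For reference, a hedged, quick way to confirm from the command line what status code the server actually returns for one of these URLs (the URL is the same example as above):

      curl -s -o /dev/null -w "%{http_code}\n" http://www.borngeek.com/nothere/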

    Read the article

  • 500 Internal Server Error when setting up Apache on localhost

    - by Martin Hoe
    I downloaded and installed XAMPP, and to keep my projects nicely separated I want to create a VirtualHost for each one based on its future domain name. For example, for my first project (we'll say it's project.com) I've put this in my Apache configuration:

      NameVirtualHost 127.0.0.1

      <VirtualHost 127.0.0.1:80>
          DocumentRoot C:/xampp/htdocs/
          ServerName localhost
          ServerAdmin admin@localhost
      </VirtualHost>

      <VirtualHost 127.0.0.1:80>
          DocumentRoot C:/xampp/htdocs/sub/
          ServerName sub.project.com
          ServerAdmin [email protected]
      </VirtualHost>

      <VirtualHost 127.0.0.1:80>
          DocumentRoot C:/xampp/htdocs/project/
          ServerName project.com
          ServerAdmin [email protected]
      </VirtualHost>

    And this in my hosts file:

      # development
      127.0.0.1       localhost
      127.0.0.1       project.org
      127.0.0.1       sub.project.org

    When I go to project.com in my browser, the project loads up successfully. Same if I go to sub.project.com. But if I navigate to http://project.com/register (one of my site pages) I get this error:

      Internal Server Error
      The server encountered an internal error or misconfiguration and was unable to complete your request.

    The error log shows this:

      [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/
      [Sun May 20 02:05:54 2012] [error] [client 127.0.0.1] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://project.com/

    Any idea what config items I got wrong, or how to get this working? It happens on any page that's not in the root directory of project.com. Thanks.
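
    For reference, the "limit of 10 internal redirects" message usually points at a rewrite loop in an .htaccess front controller rather than at the vhost blocks themselves. A hedged sketch of the shape that avoids the loop (assuming the app routes everything through index.php):

      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^ index.php [L]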

    Read the article

  • PEAP validating a secondary domain suffix

    - by sam
    The title is probably a little bit confusing, so let me explain the situation. Our company wants to implement a corporate wireless LAN with PEAP authentication. Unfortunately, someone made a big mistake in our AD design 10 years ago: the domain name we are using, "company.ch", is not owned by the company but by someone else, so it is not possible to issue a public SSL certificate for the RADIUS server. Our AD is too big to rename it.

    We already thought about using our private PKI and rolling out the CA certificate via GPO, but that would only cover our corporate-managed clients, not the BYOD devices (smartphones, tablets, laptops...).

    Is there a way to add a secondary domain name like "company2.ch", issue a public certificate for it, join the RADIUS server to that secondary domain as well, and configure that secondary DNS suffix via DHCP for all the client pools? Or is there another way, for example a new RADIUS server with its own domain company2.ch, connected by some kind of trust to the company.ch domain?

    Sorry, I'm not a client/server guy... hopefully you get my drift!?

    Read the article

  • can't ssh from mac to windows (running ssh server on cygwin)

    - by Denise
    I set up an SSH server on a fresh Windows 7 machine using the latest version of Cygwin, and disabled the firewall. I can ssh into it from itself, from a different Windows box (using winssh), and from a Linux VM. In spite of that, I tried to ssh in from two different Macs, and neither would let me! This is the debug output:

      OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to 3dbuild [172.18.4.219] port 22.
      debug1: Connection established.
      debug1: identity file /Users/Denise/.ssh/identity type -1
      debug1: identity file /Users/Denise/.ssh/id_rsa type 1
      debug1: identity file /Users/Denise/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.5
      debug1: match: OpenSSH_5.5 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-cbc hmac-md5 none
      debug1: kex: client->server aes128-cbc hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      debug1: Host '3dbuild' is known and matches the RSA host key.
      debug1: Found key in /Users/Denise/.ssh/known_hosts:43
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,password,keyboard-interactive
      debug1: Next authentication method: publickey
      debug1: Trying private key: /Users/Denise/.ssh/identity
      debug1: Offering public key: /Users/Denise/.ssh/id_rsa
      Connection closed by [ip]

    It shows the same output, and fails at the same place, whether I have put my public key on the SSH server or not. Any help would be appreciated -- hopefully someone has run into this before?
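
    For reference, a hedged way to see the server's side of the story is to run a one-off sshd in debug mode on the Cygwin box and connect to it from the Mac; the reason for a disconnect during pubkey auth usually shows up in that output:

      # on the Windows/Cygwin side:
      /usr/sbin/sshd -d -p 2222

      # from the Mac:
      ssh -v -p 2222 Denise@3dbuild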

    Read the article

  • memory tuning with rails/unicorn running on ubuntu

    - by user970193
    I am running Unicorn on Ubuntu 11, Rails 3.0, and Ruby 1.8.7. It is an 8-core EC2 box, and I am running 15 workers. CPU never seems to get pinned, and I seem to be handling requests pretty nicely. My question concerns memory usage, and what concerns I should have with what I am seeing (if any). Here is the scenario:

    Under constant load (about 15 reqs/sec coming in from nginx), over the course of an hour, each server in the 3-server cluster loses about 100MB/hour. This is a linear slope for about 6 hours, then it appears to level out, but still seems to lose about 10MB/hour. If I drop my page caches using the Linux command echo 1 > /proc/sys/vm/drop_caches, the available free memory shoots back up to what it was when I started the Unicorns, and the memory-loss pattern begins again over the hours.

    Before:

                     total       used       free     shared    buffers     cached
      Mem:         7130244    5005376    2124868          0     113628     422856
      -/+ buffers/cache:      4468892    2661352
      Swap:       33554428          0   33554428

    After:

                     total       used       free     shared    buffers     cached
      Mem:         7130244    4467144    2663100          0        228      11172
      -/+ buffers/cache:      4455744    2674500
      Swap:       33554428          0   33554428

    My Ruby code does use memoizations, and I'm assuming Ruby/Rails/Unicorn is keeping its own caches... what I'm wondering is, should I be worried about this behaviour? FWIW, my Unicorn config:

      worker_processes 15
      listen "#{CAPISTRANO_ROOT}/shared/pids/unicorn_socket", :backlog => 1024
      listen 8080, :tcp_nopush => true
      timeout 180
      pid "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid"
      GC.respond_to?(:copy_on_write_friendly=) and GC.copy_on_write_friendly = true

      before_fork do |server, worker|
        STDERR.puts "XXXXXXXXXXXXXXXXXXX BEFORE FORK"
        print_gemfile_location

        defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
        defined?(Resque) and Resque.redis.client.disconnect

        old_pid = "#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.oldbin"
        if File.exists?(old_pid) && server.pid != old_pid
          begin
            Process.kill("QUIT", File.read(old_pid).to_i)
          rescue Errno::ENOENT, Errno::ESRCH
            # already killed
          end
        end

        File.open("#{CAPISTRANO_ROOT}/shared/pids/unicorn.pid.ok", "w"){|f| f.print($$.to_s)}
      end

      after_fork do |server, worker|
        defined?(ActiveRecord::Base) and ActiveRecord::Base.establish_connection
        defined?(Resque) and Resque.redis.client.connect
      end

    Is there a need to experiment with enforcing more stringent garbage collection using OobGC (http://unicorn.bogomips.org/Unicorn/OobGC.html)? Or is this just normal behaviour, and when/as the system needs more memory, it will empty the caches by itself, without me manually running that cache command? Basically, is this normal, expected behaviour? tia
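
    For reference, a hedged sketch of what wiring in OobGC looked like on Unicorn of that era (the middleware goes in config.ru; the interval of 5 requests is an arbitrary example, not a recommendation):

      # config.ru
      require 'unicorn/oob_gc'
      use Unicorn::OobGC, 5    # run GC after every 5th response has been written
      run MyApp::Application   # hypothetical app constant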

    Read the article

  • Bad Mumble control channel performance in KVM guest

    - by aef
    I'm running a Mumble server (Murmur) on a Debian Wheezy Beta 4 KVM guest, which runs on a Debian Wheezy Beta 4 KVM hypervisor. The guest machines are attached to a bridge device on the hypervisor system through virtio network interfaces. The hypervisor is attached to a 100Mbit/s uplink and does IP routing between the guest machines and the rest of the Internet.

    In this setup we're experiencing a clearly recognizable lag between double-clicking a channel in the client and the channel-joining action happening. This happens with a lot of different clients, between versions 1.2.3 and 1.2.4, on Linux and Windows systems. Voice quality and latency seem to be completely unaffected by this. Most of the time the client's information dialog states a 16ms latency for both the voice and control channels. The deviation for the control channel is mostly a lot higher than that of the voice channel. In some situations the control channel is displayed with a 100ms ping and a deviation of about 1000. It seems TCP performance is the problem here.

    We had no problems on an earlier setup which was in principle quite like the new one. We used a Debian Lenny based Xen hypervisor and a soft-virtualised guest machine instead, and an earlier version of the Mumble 1.2.3 series. The current murmurd --version says: 1.2.3-349-g315b5f5-2.1

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output:

      $ echo -n | openssl s_client -connect www.google.com:443
      CONNECTED(00000003)
      depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
      verify error:num=20:unable to get local issuer certificate
      verify return:0
      ---
      Certificate chain
       0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
         i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
       1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
         i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM
      MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg
      THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x
      MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh
      MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw
      FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC
      gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN
      gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L
      05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM
      BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl
      LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF
      BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw
      Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0
      ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF
      AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5
      u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6
      z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw==
      -----END CERTIFICATE-----
      subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
      issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
      ---
      No client certificate CA names sent
      ---
      SSL handshake has read 1777 bytes and written 316 bytes
      ---
      New, TLSv1/SSLv3, Cipher is AES256-SHA
      Server public key is 1024 bit
      Compression: NONE
      Expansion: NONE
      SSL-Session:
          Protocol  : TLSv1
          Cipher    : AES256-SHA
          Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C
          Session-ID-ctx:
          Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5
          Key-Arg   : None
          Start Time: 1266259321
          Timeout   : 300 (sec)
          Verify return code: 20 (unable to get local issuer certificate)
      ---

    it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing.

    Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
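
    For reference, a hedged sketch of the "hack something together" approach hinted at in the update: loop over every cipher the local OpenSSL build knows about and see which ones the server will accept (the host is the same example as above):

      HOST=www.google.com:443
      for cipher in $(openssl ciphers 'ALL:eNULL' | tr ':' ' '); do
        result=$(echo -n | openssl s_client -connect "$HOST" -cipher "$cipher" 2>&1)
        if echo "$result" | grep -q ':error:'; then
          echo "NO   $cipher"
        else
          echo "YES  $cipher"
        fi
      done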

    Read the article

  • Too many concurrent connections Exchange 2010. What else is there to check?

    - by hydroparadise
    I thought that I had this under control before, but for some reason during our last email-marketing promo I started receiving the following from our mass-email client (built in house) again:

      The message could not be sent to the SMTP server. The transport error code is 0x800ccc67. The server response was 421 4.3.2 The maximum number of concurrent connections has exceeded a limit, closing transmission channel

    There are several places I've checked to make sure that wouldn't be an issue. First I checked that the receive connector was set to receive an adequate number of connections on our relay connector (1000 connections). Then I found out about throttling policies. I created one and set all the properties I knew to set to 1000: EWSMaxConcurrency, OWAMaxConcurrency, CPAMaxConcurrency, and CPAMaxConcurrency.

    Still, the email client starts receiving the error shortly after 100 messages have been sent, which takes about 15-30 seconds. The process is then repeatable, but the error still gets received at the same spot every time. Is there a rate setting that I am missing? Was there a Windows update that I missed looking at? Should the software have its own throttling feature?
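
    For reference, one thing that is easy to miss here is that the per-source limit on the receive connector is separate from its overall connection limit. A hedged check from the Exchange Management Shell (the connector name is a placeholder):

      Get-ReceiveConnector "Relay Connector" | fl MaxInboundConnection,MaxInboundConnectionPerSource,MaxInboundConnectionPercentagePerSource
      Set-ReceiveConnector "Relay Connector" -MaxInboundConnectionPerSource 1000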

    Read the article

  • What are the benefits of running an app server in user space, like Unicorn, as opposed to running it via sudo?

    - by dan
    I've been using Phusion Passenger + Rails/Sinatra for a lot of projects. Passenger runs under the main Nginx or Apache process. But I'm interested in Unicorn, partly because it runs in user space: you just set up Nginx to proxy_pass requests to a Unix socket that is connected to the Unicorn processes you fire up under a normal user account. Is there anything to be said as far as advantages and disadvantages of these two alternative approaches to running a web app? I mean in terms of ease of administration, stability, simplicity, etc.
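
    For reference, a hedged sketch of the Nginx side of the Unicorn arrangement described above (socket path and server_name are placeholders):

      upstream app_server {
          # the socket the user-level Unicorn master listens on
          server unix:/home/deploy/app/shared/unicorn.sock fail_timeout=0;
      }

      server {
          listen 80;
          server_name example.com;

          location / {
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              proxy_pass http://app_server;
          }
      }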

    Read the article

  • Why am I getting this error in the logs?

    - by Matt
    OK, so I just started a new Ubuntu Server 11.10 instance and added the vhost, and all seems OK... I also restarted Apache, but when I visit it in the browser I get a blank page. The server IP is http://23.21.197.126/, but when I tail the log

      tail -f /var/log/apache2/error.log

      [Wed Feb 01 02:19:20 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs
      [Wed Feb 01 02:19:24 2012] [error] [client 208.104.53.51] File does not exist: /etc/apache2/htdocs

    even though my only file in sites-enabled is this:

      <VirtualHost 23.21.197.126:80>
          ServerAdmin [email protected]
          ServerName logicxl.com
          # ServerAlias
          DocumentRoot /srv/crm/current/public
          ErrorLog /srv/crm/logs/error.log
          <Directory "/srv/crm/current/public">
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>

    Is there something I am missing? The document root should be /srv/crm/current/public and not /etc/apache2/htdocs as the error suggests. Any ideas on how to fix this?

    UPDATE

      sudo apache2ctl -S
      VirtualHost configuration:
      23.21.197.126:80       is a NameVirtualHost
               default server logicxl.com (/etc/apache2/sites-enabled/crm:1)
               port 80 namevhost logicxl.com (/etc/apache2/sites-enabled/crm:1)
      Syntax OK

    UPDATE

      <VirtualHost *:80>
          ServerAdmin [email protected]
          ServerName logicxl.com

          DocumentRoot /srv/crm/current/public
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /srv/crm/current/public/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>

          ErrorLog ${APACHE_LOG_DIR}/error.log

          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn

          CustomLog ${APACHE_LOG_DIR}/access.log combined
      </VirtualHost>

    Read the article

  • Connect linux server to VPN server via PPTP

    - by wowpatrick
    I'm trying to connect a Linux (Ubuntu 10.04 LTS) server to a VPN server via the PPTP client. I configured the PPTP client as described in the documentation. The connection is correctly added as an interface, but somehow the connection does not work: ping -I ppp0 google.com does not return anything, and traceroute -i ppp0 only shows the first hop and then displays nothing. Any ideas of what is going wrong? Incorrect routing configuration?

    ifconfig output for the configured interface:

      ppp0      Link encap:Point-to-Point Protocol
                inet addr:xx.x.xxx.xxx  P-t-P:10.0.0.1  Mask:255.255.255.255
                UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1496  Metric:1
                RX packets:415 errors:0 dropped:0 overruns:0 frame:0
                TX packets:468 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:3
                RX bytes:31428 (31.4 KB)  TX bytes:32394 (32.3 KB)

    route output:

      Kernel IP routing table
      Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
      xx.x.x.1        *               255.255.255.255 UH    0      0        0 ppp0
      xx.xxx.xxx.xx   sp.ip           255.255.255.255 UGH   0      0        0 eth1
      192.168.3.0     *               255.255.255.0   U     0      0        0 eth2
      192.168.2.0     *               255.255.255.0   U     0      0        0 eth1
      default         sp.ip           0.0.0.0         UG    100    0        0 eth1
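
    For reference, with the default route still pointing at eth1, only the tunnel's peer address is reachable over ppp0. A hedged sketch of the kind of routes that would send real traffic through the tunnel (the network range is a placeholder):

      # route one remote network through the tunnel:
      sudo ip route add 10.0.0.0/8 dev ppp0
      # or send all traffic through it:
      sudo ip route replace default dev ppp0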

    Read the article

  • How can I print from my Lion Mac mini to my Windows XP machine, with simple file sharing?

    - by Jules
    I have quite a complicated setup, perhaps, and a lot of history on this issue. I'm hoping that I don't have to buy a new printer.

    I've got an HP Wireless USB Print Server, which requires client software; I can't just use it as an IP printer. The HP software is pretty poor on the Mac and is no longer supported, and it often locks up the print server and takes considerable effort to actually print something - let alone if a Windows machine attaches to it first. My printer is an Epson Stylus R285. However, the Windows client software is fine and we can print from Windows 7 / XP without problems.

    We have simple file sharing set up, as this is the only way I could get Windows XP to talk to Windows 7. However, I can't seem to get my Mac mini to connect as anything other than a guest to my XP machine, to connect to the shared printer. I'm now considering some kind of Internet printing, as this seems the simplest solution, but I'm not sure what will work with my setup?

    Read the article

  • Writing scripts that work with my emails

    - by queueoverflow
    I currently use Thunderbird as my email client, and it has some filters, but that seems to be all I can program in it. On several occasions I have heard people talk about their automated email workflows. One example: when I do not get a reply to an email, the script will send a "nag" email asking why I did not get a response yet. Or another one: I get so much mail that I cannot read it all. After a week, unread email is put on hold and the sender gets an "if it was important, reply to this email and it will be un-held" email. The script then takes the answer and moves the message back into the important folder.

    I read about FiltaQuilla, which seems nice, but it does not seem to be the kind of programming that I am looking for. How can I write general-purpose scripts like those? Do I need to write my own Python IMAP/SMTP client (if that is even possible) to do this, or can I script it, say in JavaScript, in Thunderbird?
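
    For reference, a minimal sketch of the "own Python IMAP client" direction, using only the standard library (server, credentials and the decision logic are placeholders):

      import imaplib

      M = imaplib.IMAP4_SSL("imap.example.com")
      M.login("user", "password")
      M.select("INBOX")

      # find messages that are still unread
      typ, data = M.search(None, "UNSEEN")
      for num in data[0].split():
          typ, msg = M.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT DATE)])")
          headers = msg[0][1].decode(errors="replace")
          print(num, headers)
          # ...decide here whether to copy it to a "Hold" folder, send a nag via smtplib, etc.

      M.logout()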

    Read the article

  • What file transfer protocols can be used for PXE booting besides TFTP?

    - by Stefan Lasiewski
    According to ISC's dhcpd manpage:

      The filename statement

        filename "filename";

        The filename statement can be used to specify the name of the initial boot file which is to be loaded by a client. The filename should be a filename recognizable to whatever file transfer protocol the client can be expected to use to load the file.

    My questions are:

    - What file transfer protocols, besides TFTP, are available to load the file (i.e. what protocols "can be expected to" load the file)? How can I tell? Can I see a list of these protocols?
    - Does my choice of DHCP server influence which file transfer protocols are in use? Pretend I want to use dnsmasq instead of ISC's dhcpd.
    - Are these features dependent on the PXE ROM which is in use (e.g. my Intel NICs use an Intel ROM)?

    I know that some PXE variants, such as iPXE/gPXE/Etherboot, can also load files over HTTP. However, the PXE ROM needs to be replaced with the iPXE image, either by chainloading or by burning the iPXE ROM onto the NIC. For example, the iPXE howto "Using ISC dhcpd" says:

      ISC dhcpd is configured using the file /etc/dhcpd.conf. You can instruct iPXE to boot using the filename directive:

        filename "pxelinux.0";

      or

        filename "http://boot.ipxe.org/demo/boot.php";
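
    For reference, a hedged sketch of what the dnsmasq side looks like (dhcp-boot is dnsmasq's counterpart to dhcpd's filename statement; the server address is a placeholder):

      # plain PXE/TFTP boot:
      dhcp-boot=pxelinux.0,,192.168.0.1
      # an iPXE-capable client can instead be handed an HTTP URL:
      dhcp-boot=http://boot.ipxe.org/demo/boot.php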

    Read the article

  • What options to use for Accurate bacula backup ?

    - by Kiss Stefan
    It's actually two questions in one. The first is a bit more theoretical: when specifying Accurate options, how does Bacula figure out whether a file needs to be backed up? Is it a simple AND? As in, if the options are

      Accurate = sm5

    Bacula will not back up the file if ((size = old size) AND (modtime = old modtime) AND (md5 = old md5)). Is that correct? Do any of the options take precedence? As in, would a file be skipped if the modification time is different but it has the same md5sum? Are there any implied options that you cannot ignore?

    Practical case (Bacula 5.0.1): I have to back up an svn repo. In order to keep incremental backups as simple as possible, I am hotcopying it (client run before) to another location, which Bacula will back up (and then delete with client run after). Now in the fileset I have

      Accurate = spnd5

    This should tell Bacula to take into consideration size, permission bits, number of links, decreases in size, and md5sum. However, an incremental also includes a full copy of the svn repo. What am I doing wrong? It seems that it takes creation time into account even though I have not specified it.
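
    For reference, a hedged sketch of where the two knobs sit in a 5.x configuration (resource layout as I understand it; names and the option string are examples, not a recommendation):

      Job {
        Name = "svn-backup"
        Accurate = yes            # turn accurate mode on for the job
        FileSet = "svn-fileset"
      }
      FileSet {
        Name = "svn-fileset"
        Include {
          Options {
            accurate = spnd5      # which attributes decide "changed"
            signature = MD5       # needed so the md5 ("5") test has something to compare
          }
          File = /path/to/hotcopy
        }
      }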

    Read the article

  • Outlook Signature Broken in Entourage

    - by Eric J.
    Some of our company uses Windows with Outlook 2010, and the rest use Mac with Entourage. When our standard signature line is included in an email that goes to Entourage, the result does not display correctly. It appears that Entourage is mangling the HTML. My working theory is that Entourage encounters inline CSS styles it does not know about and stops processing styles, but I'm really not sure.

    Question: How can I enter a signature into Outlook 2010 that will render correctly in Entourage? For example, can I specify somehow the exact HTML to use?

    Here's an example of how the HTML is being changed. Original on Outlook, as received by another Outlook client:

      <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";
      color:#1785C5'>My Company<br>
      </span></b><span class=apple-style-span><span style='font-size:9.0pt;
      font-family:"Century Gothic","sans-serif";color:#666666'>123 Main St.</span></span><span
      class=apple-style-span><span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";
      color:#AFAFAF'>&nbsp;</span></span><span class=apple-style-span><span
      style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";color:#666666'>Suite 100</span></span>

    Note the use of spans, color #1785C5 and color #666666. Same original email, as displayed in an Entourage client:

      <span style='font-size:9.0pt;font-family:"Century Gothic","sans-serif";
      mso-fareast-font-family:"Times New Roman"'><br>
      <span style='color:#656565'>My Company<br>
      123 Main St Suite 100<br>
      </span>

    Note the use of br tags rather than spans, and the color #656565.

    Read the article

  • php5-fpm.sock file doesn't exist

    - by Caballero
    I've just compiled and installed PHP-FPM 5.5.5 following this tutorial. I have ignored the Apache setup section, because I'm running nginx. Everything seems to be fine:

      php -v
      PHP 5.5.5 (cli) (built: Oct 18 2013 21:56:02)
      Copyright (c) 1997-2013 The PHP Group
      Zend Engine v2.5.0, Copyright (c) 1998-2013 Zend Technologies

    Problem is, I need to link it to my nginx conf via a socket, but the /var/run/php5-fpm.sock file doesn't exist. How do I create it? The file /etc/php5/fpm/pool.d/www.conf does include the line

      listen = /var/run/php5-fpm.sock

    It is possible (though I'm not sure) that it's a leftover of an older PHP version 5.5.3, which was installed and removed via apt-get. I'm running Ubuntu 13.10 (Saucy Salamander).
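
    For reference, the socket file is only created while the php-fpm master process is actually running, so a hedged sequence of checks looks like this (the binary and config paths are assumptions based on a default compile-from-source layout):

      sudo /usr/local/sbin/php-fpm --fpm-config /etc/php5/fpm/php-fpm.conf
      ls -l /var/run/php5-fpm.sock

    and the matching nginx location would point at that socket, roughly:

      location ~ \.php$ {
          fastcgi_pass unix:/var/run/php5-fpm.sock;
          fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          include fastcgi_params;
      }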

    Read the article

  • Connect to Nonencrypted Wireless Network Using Ubuntu Commands

    - by Tim
    I failed to connect to an open i.e. nonencrypted wireless network using Ubuntu command lines. Here is what I did:

      $ sudo /etc/init.d/NetworkManager stop
       * Stopping network connection manager NetworkManager          [ OK ]
      $ sudo /sbin/ifconfig wlan0 up
      $ sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
      $ sudo dhclient wlan0
      There is already a pid file /var/run/dhclient.pid with pid 10812
      killed old client process, removed PID file
      Internet Systems Consortium DHCP Client V3.1.1
      Copyright 2004-2008 Internet Systems Consortium.
      All rights reserved.
      For info, please visit http://www.isc.org/sw/dhcp/

      wmaster0: unknown hardware address type 801
      wmaster0: unknown hardware address type 801
      Listening on LPF/wlan0/00:0e:9b:cd:4e:18
      Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
      Sending on   Socket/fallback
      DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
      DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 8
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 21
      DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
      No DHCPOFFERS received.
      Trying recorded lease 192.168.1.67
      PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.

      --- 192.168.1.1 ping statistics ---
      1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

      Trying recorded lease 192.168.1.45
      PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.

      --- 192.168.1.1 ping statistics ---
      1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms

      No working leases in persistent database - sleeping.
      $ sudo /sbin/iwconfig wlan0
      wlan0     IEEE 802.11bg  Mode:Managed  Frequency:2.422 GHz  Access Point: Not-Associated
                Tx-Power=27 dBm
                Retry min limit:7   RTS thr:off   Fragment thr=2352 B
                Encryption key:off
                Power Management:off
                Link Quality:0  Signal level:0  Noise level:0
                Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    I was wondering what the problem is and how I can do it right? Thanks and regards!
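
    For reference, the iwconfig output above shows "Access Point: Not-Associated", which suggests the association itself never happened before dhclient ran. A hedged set of checks along these lines (interface and ESSID taken from the commands above):

      sudo iwlist wlan0 scan | grep -i essid     # is the network visible, with exactly this ESSID?
      sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL" ap any
      sudo iwconfig wlan0                        # Access Point should now show the AP's MAC
      sudo dhclient wlan0                        # only then retry DHCP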

    Read the article

  • Home server - HP Proliant Microserver - Software and setup - OS on USB stick? [closed]

    - by Lloyd Watkin
    I've just purchased an HP ProLiant MicroServer for home use. I want to set it up with a web server, Samba shares, the usual stuff. My question is really about system setup. It has an internal USB socket, so I attempted to install a copy of Fedora 14 onto a USB stick plugged into it. I turned off X/Gnome, but it still ran like a pig. I've now put the OS on one of the internal disks (250GB, 7200rpm), but I was wondering if there was a way to utilise the internal USB to give me better power saving, allowing the hard drives to be shut down when not in use. How would you set this server up? I'd rather not go to the extra cost of an SSD right now, but if that's the best way then so be it.

    Read the article

  • IIS replication - Is it possible

    - by Ian
    Hi all,

    I have a requirement from a client for a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows Server 2008 R2, using an SQL backend. The client has now requested that each branch have a local server, so that in the event that the Internet connection is down, the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, and that using some mechanism the updates filter down to the individual sites.

    What are my options here? I see the following as possibilities:

    - Multiple redundant Internet connections controlled by load balancers
    - SQL replication for the DB (what is better: snapshot, merge or transactional?)
    - Rolling my own IIS sync service that periodically checks whether there is a new version of the web app and downloads it (I hope there are better options than this)
    - Something way better I don't yet know about (I hope this is the one I need)

    One of my client's concerns is that the branches are often in very remote areas, where everything from technicians to Internet access is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all.

    Read the article
