Search Results

Search found 26176 results on 1048 pages for 'stream socket client'.


  • nginx + php-fpm - where are my $_GET params?

    - by egis
    I have a strange problem here. I just moved from Apache + mod_php to nginx + php-fpm. Everything went fine except for this one problem. I have a site, let's say example.com. When I access it as example.com?test=get_param, $_SERVER['REQUEST_URI'] is /?test=get_param and $_GET['test'] is set. But when I access example.com/ajax/search/?search=get_param, $_SERVER['REQUEST_URI'] is /ajax/search/?search=get_param, yet there is no $_GET['search'] (there is no $_GET array at all). I'm using the Kohana framework, which routes /ajax/search to a controller, but I've put phpinfo() in index.php, so I'm checking for $_GET variables before the framework does anything (meaning the disappearing GET params aren't the framework's fault). My nginx.conf is like this: worker_processes 4; pid logs/nginx.pid; events { worker_connections 1024; } http { index index.html index.php; autoindex on; autoindex_exact_size off; include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 128; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; error_log logs/error.log debug; sendfile on; tcp_nopush on; tcp_nodelay off; keepalive_timeout 2; gzip on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; include sites-enabled/*; } and example.conf is like this: server { listen 80; server_name www.example.com; rewrite ^ $scheme://example.com$request_uri? permanent; } server { listen 80; server_name example.com; root /var/www/example/; location ~ /\. { return 404; } location / { try_files $uri $uri/ /index.php; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /usr/local/nginx/conf/fastcgi_params; } location ~* ^/(modules|application|system) { return 403; } # serve static files directly location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ { access_log off; expires 30d; } } fastcgi_params is like this: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; fastcgi_param QUERY_STRING $query_string; fastcgi_param PATH_INFO $fastcgi_path_info; What is the problem here? By the way, there are a few more sites on the same server, both Kohana-based and plain PHP, that are working perfectly.
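
    One hedged thing to try (a common nginx idiom; whether it is the culprit here is a guess): make the try_files fallback carry the query string explicitly, then confirm with a throwaway probe script — probe.php below is hypothetical, and the docroot is taken from example.conf.

        # in example.conf, location / :
        #     try_files $uri $uri/ /index.php?$query_string;
        echo '<?php var_export($_GET); echo "\nQS: ", $_SERVER["QUERY_STRING"];' \
            | sudo tee /var/www/example/probe.php
        curl 'http://example.com/probe.php?search=get_param'     # sanity check on a direct .php hit
        curl 'http://example.com/ajax/search/?search=get_param'  # retest the failing route after the change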

    Read the article

  • How do I get yum to see updates to a local repo without cleaning cache?

    - by Matt
    I have set up a local yum repository which I use to install test builds. For testing purposes, my packages are versioned by <svn version number>.<date>.<time> (e.g. 12345.20110908.150404). The trouble is, once I make a new RPM, copy it to the repository directory, and run createrepo $REPO_DIR, yum does not see the new RPM as being available. $ cd $REPO_DIR $ ls -1 repodata package-12345.20110908.150404-1.x86_64.rpm package-12345.20110908.174329-1.x86_64.rpm $ createrepo . # ...snip... $ rpm -q package package-12345.20110908.150404-1.x86_64 $ yum list --showduplicates package Installed Packages package.x86_64 12345.20110908.150404-1 @repo Available Packages package.x86_64 12345.20110908.150404-1 repo I can see the updates and grab them if I run yum clean all and then re-fetch the metadata, but I suspect this just means I need to be doing something else on the repo side, as I don't have to do that for other yum repos. How do I need to set up my local repository so that I only need to run yum update from the client, without having to clean the yum cache?
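
    One way to get this behaviour, assuming the client's repo file is /etc/yum.repos.d/local.repo with repo id "local" (both are assumptions): tell yum this repo's metadata expires immediately, and regenerate metadata with --update after each copy.

        # client side: never trust cached metadata for this repo
        cat > /etc/yum.repos.d/local.repo <<'EOF'
        [local]
        name=Local test builds
        baseurl=file:///path/to/repo
        enabled=1
        gpgcheck=0
        metadata_expire=0
        EOF

        # repo side: refresh metadata after copying in new RPMs
        createrepo --update "$REPO_DIR"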

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or similar shortcuts to .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and a minimal rsync tool on it as well. I'm getting tired of having to log in as root on the phone and symlink both tools into the standard places the Android OS looks for executables, since the ssh server is bare bones and has a typical BusyBox-style multi-call binary for the basic unix commands (which does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to rsync any files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
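
    rsync has no per-site config file of its own, but a hedged workaround for the setup described is an ssh Host alias plus a small wrapper function carrying the per-site flags; the IP, user, and rsync path below are assumptions.

        # ~/.ssh/config
        #   Host android
        #       HostName 192.168.1.50
        #       User root

        # shell wrapper carrying the per-site rsync flags
        rsync_android() {
            rsync --rsync-path=/data/local/bin/rsync --no-g --no-p --no-t "$@"
        }
        rsync_android -av android:/mnt/sdcard/ ~/phone-backup/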

    Read the article

  • Possible for linux bridge to intercept traffic?

    - by A G
    I have a linux machine set up as a bridge between a client and a server: brctl addbr br0 brctl addif br0 eth1 brctl addif br0 eth2 ifconfig eth1 0.0.0.0 ifconfig eth2 0.0.0.0 ip link set br0 up I also have an application listening on port 8080 of this machine. Is it possible to have traffic destined for port 80 passed to my application? I have done some research and it looks like it could be done using ebtables and iptables. Here is the rest of my setup: //set the ebtables to pass this traffic up to ip for processing; DROP on the broute table should do this ebtables -t broute -A BROUTING -p ipv4 --ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP //set iptables to forward this traffic to my app listening on port 8080 iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 1/1 iptables -t mangle -A PREROUTING -p tcp -j MARK --set-mark 1/1 //once the flows are marked, have them delivered locally via loopback interface ip rule add fwmark 1/1 table 1 ip route add local 0.0.0.0/0 dev lo table 1 //enable ip packet forwarding echo 1 > /proc/sys/net/ipv4/ip_forward However nothing is coming into my application. Am I missing anything? My understanding is that the DROP target on the broute BROUTING chain will push the frame up to be processed by iptables. Secondly, are there any other alternatives I should investigate? Edit: iptables sees it at nat PREROUTING, but it looks like it drops after that; the INPUT chain (in either mangle or filter) doesn't see the packet.
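
    A few hedged checks that often matter for TPROXY on a bridge; the sysctl in particular is a common missing piece, since bridged IPv4 frames bypass iptables unless it is set.

        sysctl -w net.bridge.bridge-nf-call-iptables=1   # let iptables see bridged IPv4 frames
        ebtables -t broute -L --Lc                       # per-rule packet counters: is BROUTING matching?
        iptables -t mangle -L PREROUTING -v -n           # did the TPROXY rule count any packets?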

    Read the article

  • Linux Distro - GUI similar to Windows

    - by DeaconDesperado
    I am in the process of refurbing several older laptop machines for use by a couple of college guys we have in training to learn basic web development in Python. These are students who intern at my company and are hoping to do some work when the summer comes building simple client-oriented webapps (learning the basics of OOP, MVC webapp design in Flask, etc.). We're trying to function as the "practical" side of their education. I would like to get them set up on these machines we have sitting about, but I'd like to use a Linux distro with a GUI that closely approximates what they are being compelled to use at school (Windows). I don't really have much of a preference as far as the GUI goes, since much of what we'll be learning together is accomplished on the command line. I just see this as an easier adjustment for them while they are still reliant on a graphical environment. In the past I'd go straight for Ubuntu, but since they started using the Unity GUI the responsiveness can be pretty clunky on older machines, especially since these machines (there are four of them) run the gamut on specs (though all are at least 1.0 GHz and none have anything better than basic integrated video). Has anyone had to set up a similar working environment in Mint, bare Debian or Zorin? Thanks.

    Read the article

  • IIS 7 much slower than IIS 6

    - by JoeJoe
    I have an ASP.NET 3.5 web application running fine on Windows 2003/IIS 6. I published the same exact application to IIS 7.5 (Win2008R2) on a faster box (i5, 8 GB RAM) and it is significantly slower: 5-6 sec per page vs. 1-2 sec per page. During that time the Task Manager CPU is always under 10%. Both attach to the same database on another box. The benchmark is consistent from any client browser or machine. I have connection pooling on both, compression on both, the same network subnet, and forms authentication (no SSL yet). Can you give me steps on how to troubleshoot where the delays are being introduced, or settings in IIS 7 that I may have overlooked? I'm just using defaults. There is only one web site on each box. I understand the role of an Application as defined in IIS has changed; there is no special Application defined in IIS.

    Read the article

  • iptables drops some packets on port 80 and i don't know the cause.

    - by Janning
    Hi, we are running a firewall with iptables on our Debian Lenny system. I'll show only the relevant entries of our firewall. Chain INPUT (policy DROP 0 packets, 0 bytes) target prot opt in out source destination ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 state NEW Chain OUTPUT (policy DROP 0 packets, 0 bytes) target prot opt in out source destination ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED LOGDROP all -- * * 0.0.0.0/0 0.0.0.0/0 Some packets get dropped each day with log messages like this: Feb 5 15:11:02 host1 kernel: [104332.409003] dropped IN= OUT=eth0 SRC= DST= LEN=1420 TOS=0x00 PREC=0x00 TTL=64 ID=18576 DF PROTO=TCP SPT=80 DPT=59327 WINDOW=54 RES=0x00 ACK URGP=0 (for privacy reasons I replaced the IP addresses with placeholders). This is no reason for concern, but I just want to understand what's happening. The web server tries to send a packet to the client, but the firewall somehow comes to the conclusion that this packet is "UNRELATED" to any prior traffic. I have set the kernel parameter ip_conntrack_max to a high enough value to be sure all connections get tracked by the iptables state module: sysctl -w net.ipv4.netfilter.ip_conntrack_max=524288 What's funny about it is that I get one connection drop every 20 minutes: 06:34:54 droppedIN= 06:52:10 droppedIN= 07:10:48 droppedIN= 07:30:55 droppedIN= 07:51:29 droppedIN= 08:10:47 droppedIN= 08:31:00 droppedIN= 08:50:52 droppedIN= 09:10:50 droppedIN= 09:30:52 droppedIN= 09:50:49 droppedIN= 10:11:00 droppedIN= 10:30:50 droppedIN= 10:50:56 droppedIN= 11:10:53 droppedIN= 11:31:00 droppedIN= 11:50:49 droppedIN= 12:10:49 droppedIN= 12:30:50 droppedIN= 12:50:51 droppedIN= 13:10:49 droppedIN= 13:30:57 droppedIN= 13:51:01 droppedIN= 14:11:12 droppedIN= 14:31:32 droppedIN= 14:50:59 droppedIN= 15:11:02 droppedIN= That's from today, but on other days it looks like this too (sometimes the rate varies). What might be the reason? Any help is greatly appreciated. Kind regards, Janning
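
    Two hedged things to check, given that the drops are sporadic ACKs from the server side: conntrack table pressure, and strict TCP window tracking (out-of-window ACKs are marked INVALID, fail the ESTABLISHED match, and fall through to LOGDROP).

        cat /proc/sys/net/ipv4/netfilter/ip_conntrack_count   # compare against ip_conntrack_max
        # experiment: accept out-of-window packets instead of treating them as INVALID
        sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_be_liberal=1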

    Read the article

  • Router that allows custom Dynamic DNS server [closed]

    - by Thuy
    I've made my own DDNS service and it works fine using an application running on clients to update the IP. But when I can't use my software and instead need a router to update the IP, it becomes troublesome. For example, I needed to set up IPsec from a customer to me, and the customer's router/firewall (a Netgear SRX5308) has a dynamic IP from an ISP which can't offer static IPs, so dynamic DNS is required for it to work. In this case there really isn't a client machine to run the software on, since it's a router/firewall. Unfortunately, most routers are rather unfriendly towards custom DDNS solutions and offer only dyndns.com or similar templates, which was the case with this router too, leaving me with no way to point it at my own dynamic DNS server. I have the option of switching out the customer's router, and I've been looking around for alternatives, so I was wondering if anyone on this great site has been in a similar situation or might know about some router/firewall that is friendlier towards custom DDNS solutions. Thanks in advance for any help or guidance!
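
    For routers that can run user scripts (DD-WRT, OpenWrt and the like), a hedged fallback is a cron'd one-liner against the custom DDNS server; the update URL, hostname, and token here are entirely hypothetical, and nvram get wan_ipaddr is the DD-WRT way to read the WAN address.

        curl -s "https://ddns.example.net/update?host=customer1&token=SECRET&myip=$(nvram get wan_ipaddr)"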

    Read the article

  • Open an X application going through many hoops (SSH, vpn etc)

    - by ??O?????
    The players: my home computer, running Linux with an X server running (call it HOME); a remote site, to which I can connect over the internet using a VPN (SITE); a Linux computer at the remote site, to which I can connect with ssh -X and nicely have X clients displaying on my local server (MIDDLE); a very old Irix machine (an Onyx) at the remote site, which has no SSH server (therefore I can't ssh -X to it), only an ssh client (ONYX). Purpose: I need to run an X11 application on the ONYX machine and see the GUI on HOME. I think I'm stumbling over xauth issues. So far, the current situation is: - HOME connects to SITE - A vncserver starts on MIDDLE:7 - vncviewer on HOME connects to the vncserver on MIDDLE - ONYX starts a forwarding ssh session to MIDDLE: ssh -TfN -L 6007:127.0.0.1:6007 MIDDLE - DISPLAY=localhost:7 xclient on ONYX fails with Xlib: connection to "127.0.0.1:7.0" refused by server I do know that the forwarding (6007:127.0.0.1:6007) succeeds. A previous attempt was: - HOME connects to SITE - HOME connects to MIDDLE: ssh -X MIDDLE (xclock displays on HOME, DISPLAY is 127.0.0.1:10) - ONYX starts an SSH tunnel to MIDDLE: ssh -TfN -L 6010:127.0.0.1:6010 MIDDLE - DISPLAY=127.0.0.1:10 xclient fails with X connection to 127.0.0.1:10.0 broken (explicit kill or server shutdown). while an error pops up in the MIDDLE session: X11 connection rejected because of wrong authentication. Despair: how can I achieve my purpose?
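
    The "wrong authentication" error suggests the missing piece is the xauth cookie: ONYX tunnels to MIDDLE's port 6010 but never presents the MIT-MAGIC-COOKIE that MIDDLE's sshd generated for that forwarded display. A hedged sketch of copying it by hand:

        # on MIDDLE, inside the ssh -X session (DISPLAY is 127.0.0.1:10):
        xauth list "$DISPLAY"              # prints: <display>  MIT-MAGIC-COOKIE-1  <hex cookie>

        # on ONYX, after the ssh -L 6010:127.0.0.1:6010 tunnel is up:
        xauth add 127.0.0.1:10 MIT-MAGIC-COOKIE-1 <hex-cookie-from-middle>
        DISPLAY=127.0.0.1:10 xclock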

    Read the article

  • can't register a soft phone to asterisk11

    - by Tom
    I have a VM (on Oracle VirtualBox) running Fedora 17. I've installed Asterisk 11 on it from source, following the installation wiki (https://wiki.asterisk.org/wiki/display/AST/Creating+SIP+Accounts) to the letter. The IP of the VM running Fedora is 192.168.1.7 and I can ping it from the host machine (Ubuntu 12.04), which is at 192.168.1.2. I've tried registering with ekiga with the following settings: user: [email protected]. Password: verysecretpassword. Registrar: 192.168.1.7. But I'm getting a "transport fail" error. Also, while trying to register I'm logged in to the Asterisk CLI with verbose level 3 and debug level 4, and nothing appears. Some more relevant data: I've added the following code to the end of my sip.conf.sample file: [demo-alice] type=friend host=dynamic secret=verysecretpassword context=users deny=0.0.0.0/0 permit=192.168.1.0/255.255.255.0 [demo-bob] type=friend host=dynamic secret=othersecretpassword context=users deny=0.0.0.0/0 permit=192.168.1.0/255.255.255.0 After I changed the sip.conf.sample file, I created a copy of it named sip.conf, then logged in to the Asterisk CLI and typed sip reload. Then I try to register an ekiga client from my host machine at 192.168.1.2, but it doesn't work and nothing appears on the Asterisk CLI while in verbose mode level 3. BTW, if there is information missing from my question, please don't close it; comment about what you need to know and I'll edit it into the question. Thanks.
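
    Some hedged first checks from the VM's shell, to confirm chan_sip actually loaded the new peers and is listening before debugging the client side:

        asterisk -rx "sip show peers"      # demo-alice and demo-bob should be listed
        asterisk -rx "sip set debug on"    # then retry the ekiga registration
        netstat -lnu | grep 5060           # is anything bound to 5060/udp at all?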

    Read the article

  • Help configuring Mercury mail or similiar with XAMPP to send e-mail outside of localhost

    - by user291040
    I'm building a PHP/MySQL driven website for my department at work (installed via XAMPP). I need to be able to send mail to outside e-mail addresses (e.g., Yahoo, Hotmail, etc.) using the PHP mail() function. As I see it I have two solutions: configure the SMTP directive in php.ini to point at the server running at my work, or configure/run a mail server that can send e-mails outside of localhost (I'm trying Mercury because it comes installed with XAMPP). Here are the problems I've come up against: I took a guess at our SMTP server name, and when calling PHP mail(), I get the error SMTP server response: 530 5.7.1 Client was not authenticated. I can't be sure, however, that the SMTP name is correct (I can't get help from our IT guys because of politics). I have tried to use Mercury mail; Mercury seems to be picking up the request, but it doesn't want to forward the e-mail to the outside. I keep getting a Temporary error 240 (temporary MX resolution error). I've searched high and low but still can't find a definitive answer on how to send e-mails outside of localhost. Any help is greatly appreciated.
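
    A hedged way to confirm the 530 diagnosis from the command line (the server name below is a stand-in for the guessed one): if a plain session is rejected at MAIL FROM, the relay demands authentication, which PHP's bare SMTP directive cannot provide.

        telnet smtp.mycompany.example 25
        # EHLO xampp-box
        # MAIL FROM:<me@mycompany.example>   <- expect "530 5.7.1 Client was not authenticated"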

    Read the article

  • 2012 R2 services will not start after promotion to Domain Controller

    - by Cybersylum
    Having a peculiar issue promoting a Windows 2012 R2 server in a domain at the 2003 domain/forest functional level. Built a new 2012 R2 server and added the following software (LabTech, AppAssure, ESET A/V, and TeamViewer). It activated and appeared to be working fine. I added the Active Directory Domain Services role and completed the configuration (domain/forest prep and DC promotion). All appeared to go well. I rebooted the server, and that's where the peculiar stuff began. I noticed the server indicated it needed to be activated again, but it would not accept the key. I verified the key was good. That's when I noticed the Software Protection service (as well as many other core services - Base Filtering Engine, DHCP Client, firewall, etc.) would not start. The error message for all of them was "Access Denied". I called MS, and they wanted to troubleshoot at a service level. Their fix was to use procmon, identify the resource that needed permissions (registry key, file or folder), and add "everyone" with full control. That got the services to start, but the problem re-appeared after a reboot. Thinking the issue might have been with the anti-virus package during the promotion process, I rebuilt the DCs from scratch and removed the metadata from AD (as I could not demote the machines: "RPC server unavailable"). I tried to promote the newly built machines again, the only changes to the brand-new machines being critical updates. Again the promotion appeared to work fine, but upon reboot (and a long wait to allow replication to occur) similar problems began to re-appear. I have verified that the schema updates are correct (schema version is 69 - for Windows 2012 R2). I am not finding much about this issue through my own searches, so I thought I would post this to see if anyone else has seen anything similar...

    Read the article

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE Linux 10.1 (i586) in the office. The problem, in short: I can successfully ssh to it from machines in the same LAN (192.168.1.0) but not from others (which are in 10.23.0.0). The SuSE has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say: on the client: $ ssh -vvv 192.168.1.5 OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22. debug1: Connection established. debug1: identity file /home/nbuild/.ssh/identity type -1 debug1: identity file /home/nbuild/.ssh/identity-cert type -1 debug1: identity file /home/nbuild/.ssh/id_rsa type -1 debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1 debug1: identity file /home/nbuild/.ssh/id_dsa type -1 debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1 on the server: Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739. Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403 Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0 Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7 Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3 Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340 The above log on the server is with DEBUG3 log level enabled. However, with the default log level (INFO), the only thing the server logs is this: Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11 Any hints? I feel I've tried everything already.
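
    Some hedged network-level checks from a failing 10.23.x.x client. "Did not receive identification string" means the TCP connection opened but the banner exchange never completed, which tends to point at MTU/fragmentation or stateful filtering on the routed path rather than sshd configuration.

        nc -vz 192.168.1.5 22            # does the TCP handshake succeed?
        nc 192.168.1.5 22                # a healthy server prints "SSH-2.0-..." within seconds
        traceroute -T -p 22 192.168.1.5  # TCP traceroute: spot a filtering/NAT hop between the subnets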

    Read the article

  • SBS DC DNS entries going missing?

    - by Chris W
    I've been looking at a problem on a friend's SBS (2003) server where the client PCs aren't able to connect to the server, with a variety of errors reported. Checking the server itself, the only indicator of an issue is an error 5782: Dynamic registration or deregistration of one or more DNS records failed with the following error: No DNS servers configured for the local system. Running dcdiag reports that there are no DNS records registered for the DC, so I fixed the problem by doing a netdiag /fix, after which dcdiag comes back clean and clients are OK again. It happened a few weeks ago as well and the same fix solved it. What are the possible causes of the DC DNS entries going missing? Is this a config option that needs tweaking, or could it be solved by something simple like scheduling the SBS server to reboot periodically? The only change they can think of that was made near the time of the first occurrence is that RRAS was started up to allow a VPN connection from a home user. NB - the server is set up with a pair of NICs in a team, so the server has a single virtual NIC providing both LAN/WAN connections to it. An external hardware firewall is in use rather than the Windows firewall.

    Read the article

  • Noisily rendered text in Firefox

    - by Notinlist
    Over the past week or so I've noticed that certain pages (Facebook, StackOverflow, some news sites) have text rendering errors in Firefox. As a workaround, if I refresh the page or simply select and deselect the buggy text, the unpleasant effect disappears. I don't have this effect in Internet Explorer or in any of my desktop applications. Windows 7 Pro 64-bit (fresh), Firefox 19.0.2 (fresh), ATI Radeon HD 4600 Series (fresh drivers). Thanks for the help in advance! Update 1/2: I have only three addons: Forecastfox, a Hungarian spell-checking dictionary, and Quick Locale Switcher. The latter two were installed after the effect appeared. I disabled the first individually and it did not help. But if I start Firefox with addons disabled, I cannot reproduce the error. As far as I know this mode does not disable plugins, which I do have (Adobe Acrobat, Citrix ICA Client, Google Earth plugin, Google Update, Java Deployment Toolkit 6, MS Office 2010, MS Windows Media Player Firefox, Shockwave Flash, Silverlight, VLC Web). Update 2/2: If I disable all plugins and extensions I still have the problem. If I start Firefox with disabled addons then I cannot reproduce the problem.

    Read the article

  • Remote offscreen rendering

    - by redmoskito
    My research lab recently added a server with a beefy NVIDIA graphics card, which we would like to use for scientific computation. Since it isn't a workstation, we'll have to run our jobs remotely over an ssh connection. Most of our applications require OpenGL rendering to an offscreen buffer, followed by image analysis on the result in CUDA. My initial investigation suggests that X11 forwarding is a bad idea, because OpenGL rendering would occur on the client machine (or rather the X11 server - what a confusing naming convention!) and would suffer network bottlenecks when sending our massive textures. We will never need to display the output, so it seems X11 forwarding shouldn't be necessary, but OpenGL needs $DISPLAY set to something valid or our applications won't run. I'm sure render farms exist that do this, but how is it accomplished? I think this is probably a simple X11 configuration issue, but I'm too unfamiliar with it to know where to start. We're running Ubuntu Server 10.04, with no gdm, gnome, etc. installed. However, the xserver-xorg package is installed.
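
    A minimal sketch of keeping rendering on the server, assuming the NVIDIA driver and xserver-xorg are set up there; the display numbers and the application name are placeholders:

        sudo X :0 &                    # bare GPU-attached X server, no desktop needed
        DISPLAY=:0 ./render_job        # hypothetical application binary

        # fallback when no GPU-attached X is possible: a software framebuffer
        Xvfb :1 -screen 0 1024x768x24 &
        DISPLAY=:1 ./render_job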

    Read the article

  • wireless to wired internet

    - by Mark T
    My wife uses an old computer which doesn't really like having an internet connection via a USB wireless adapter. I've tried several, and none have been satisfactory. The connection gets dropped often and it is frustrating for her. A wireless card also had the same trouble. Repositioning the computer made no difference. (Two laptops, nearby, connect just fine and stay up, so it isn't the router.) However, the computer always worked well when I had an Ethernet cable connected to it. I know there is a box which will connect to my wireless network and provide an Ethernet cable connection. But the terminology used is so complex, I can't tell what it is I really need. In case that wasn't clear, here it is in different words: What I need is just the opposite of a wireless router. My wireless router takes my cable modem's Ethernet connection and makes it available to wireless clients. What I want is a box which is a wireless client to my wireless router and provides an Ethernet cable connection that I can connect to any device. I need to know the right name for a box with such capabilities. If you know of some inexpensive examples, that would also be helpful. I'm running a wireless G network with a Linksys Router.

    Read the article

  • Have to run auto-negotiate between clients and switch - "old" switch works fine - "new" switch results in "port flapping"?

    - by ConfusedAboutSwitching
    I need some help understanding a problem we're having at work. We run Altiris/Deployment Solution and have to use auto-negotiation between client systems and our switches (Altiris apparently requires this for imaging, PXE boot and other functions). We have several areas with old wiring (Cat 3 and Cat 5) that have old 10/100 Cisco switches in them, and we can set these systems up to "auto/auto" (auto-negotiate on both the NIC and the switch port), and everything has been working fine. But our networking crew changed out a couple of the old switches for 10/100/1000 Cisco switches, and now they are claiming that "auto/auto" won't work because the switches can't auto-negotiate the way the old 10/100 switches did, and that if we try to set the new gig switches to auto-negotiate, the switch port starts "port flapping" and shuts the port down. But if we put the old switch back in, it works using "auto/auto" just fine - no port flapping. The networking crew is telling me that the problem is that we're putting "new switches" on "old wire", and that the old cabling can't/won't support auto-negotiation with these new switches...??? There's something about this that doesn't make sense to me - can someone explain it? Or is our networking crew just doing something wrong in the configuration of these new switches? Why do the old switches work with "auto/auto" while the new switches won't? HELP!! ...and thanks! M

    Read the article

  • PXELinux and compressed kernels/images

    - by Yvan JANSSENS
    Is it possible to boot compressed kernels with a compressed initrd with PXELinux? First, a little background: we created a custom Linux distro for diskless OpenCL computing nodes, and we want those nodes to fetch their OS from the network. Our distro is composed of a kernel (duh) and a large initrd which is loaded into RAM, with everything executed from there. We chose to run everything off the initrd for two reasons: NFS was not an option to serve the filesystem's extra contents, and file access from RAM is fast. No persistent storage is needed; data and config are pulled dynamically through a SOAP service. Now our initrd is about 450M in size. At our network speeds, it takes about two to three minutes to load a single client. Will compression speed up the downloading, and if yes, which format should be used? Is LZMA supported by PXELinux, or do we need to stick to bzip2 or gzip? Because of the 2-3 minute loading time, booting 15 nodes over the same network link takes quite a lot of time. We decided not to use hard drives or CD/DVD drives for financial reasons (the cheapest HDD @ €30, times 15, is a lot of money saved ;-) ). So, our question is: what compression options are available for this setup, and how do we do this? Thank you for your time! Yvan Janssens
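
    A hedged note on the compression question: PXELINUX just transfers the files over TFTP; it is the kernel that unpacks the initramfs, so the format must match the kernel's CONFIG_RD_* support. A sketch:

        # check what the target kernel can decompress
        grep -E 'CONFIG_RD_(GZIP|BZIP2|LZMA|XZ)' /boot/config-$(uname -r)

        xz -9 --check=crc32 initrd.img   # kernels require the crc32 check for xz initramfs images
        # or, for kernels without xz support:
        gzip -9 initrd.img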

    Read the article

  • How to access Virtual machine using powershell script

    - by Sheetal
    I want to access a virtual machine using a PowerShell script. For that I used the command below: Enter-PSSession -computername sheetal-VDD -credential compose04.com\abc.xyz1 where sheetal-VDD is the hostname of the virtual machine, compose04.com is its domain name, and abc.xyz1 is the username. After entering the command it asks for a password, and when the password is entered I get the error below: Enter-PSSession : Connecting to remote server failed with the following error message : WinRM cannot process the request. The following error occurred while using Kerberos authentication: There are currently no logon servers available to service the logon request. Possible causes are: -The user name or password specified are invalid. -Kerberos is used when no authentication method and no user name are specified. -Kerberos accepts domain user names, but not local user names. -The Service Principal Name (SPN) for the remote computer name and port does not exist. -The client and remote computers are in different domains and there is no trust between the two domains. After checking for the above issues, try the following: -Check the Event Viewer for events related to authentication. -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport. Note that computers in the TrustedHosts list might not be authenticated. -For more information about WinRM configuration, run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic. At line:1 char:16 + Enter-PSSession <<<< -computername sheetal-VDD -credential compose04.com\Sheetal.Varpe + CategoryInfo : InvalidArgument: (sheetal-VDD:String) [Enter-PSSession], PSRemotingTransportException + FullyQualifiedErrorId : CreateRemoteRunspaceFailed Can someone help me out with this?

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on a NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything. So it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options: rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52 How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
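
    A hedged experiment: relax close-to-open consistency and stretch the attribute cache on a test mount, which lets the client keep cached pages for a file that isn't changing; the export path and mount point are assumptions.

        mount -t nfs -o nocto,actimeo=600,rsize=131072,wsize=131072 \
            192.168.11.52:/export/home /mnt/home-test
        # run the app twice and compare: a warm cache should show far fewer NFS reads
        nfsstat -c ; free -m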

    Read the article

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here? EDIT: I conducted some experiments. Write performance on the laptop: the laptop has an xfs filesystem with full disk encryption, using aes-cbc-essiv:sha256 cipher mode with a 256-bit key. Disk write performance is 58.8 MB/s. iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024 1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s Read performance on the workstation: the files I copied are on a software RAID-5 over 5 HDDs, with LVM on top of the RAID. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold). iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M 10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s Network performance: I ran iperf between the two machines. Network performance is 939 Mbit/s. iblue@raven $ iperf -c 94.135.XXX ------------------------------------------------------------ Client connecting to 94.135.XXX, TCP port 5001 TCP window size: 23.2 KByte (default) ------------------------------------------------------------ [ 3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001 [ ID] Interval Transfer Bandwidth [ 3] 0.0-10.0 sec 1.09 GBytes 939 Mbits/sec
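
    With disk and network both testing fast, the usual suspects are the ssh cipher and rsync's delta algorithm. Some hedged comparisons (host and paths are placeholders; daemon mode assumes rsyncd is running on the receiver):

        rsync -av --progress -e "ssh -c aes128-ctr" src/ host:dst/   # cheaper cipher
        rsync -avW --progress src/ host:dst/                         # -W: whole files, no delta scan
        rsync -av --progress src/ rsync://host/module/               # daemon mode, no ssh at all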

    Read the article

  • Centos VM serving multiple public IP: how to configure network interface?

    - by Glasnhost
    I have a Centos 5.6 VM (Vsphere client) already responding to two different public IPs on eth0 and eth0:1 and I'm trying to add eth0:2. I copied the eth0 config file and restarted the network service. I don't understand which other steps are needed... ifconfig eth0 Link encap:Ethernet HWaddr 00:40:46:B9:00:41 inet addr:10.1.12.10 Bcast:10.1.12.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:163371837 errors:77 dropped:0 overruns:0 frame:0 TX packets:168210961 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:1891221045 (1.7 GiB) TX bytes:855899500 (816.2 MiB) Interrupt:59 Base address:0x2000 eth0:1 Link encap:Ethernet HWaddr 00:40:46:B9:00:41 inet addr:10.1.12.11 Bcast:10.1.12.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:59 Base address:0x2000 eth0:2 Link encap:Ethernet HWaddr 00:40:46:B9:00:41 inet addr:10.1.12.12 Bcast:10.1.12.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:59 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:188976973 errors:0 dropped:0 overruns:0 frame:0 TX packets:188976973 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2015642664 (1.8 GiB) TX bytes:2015642664 (1.8 GiB) more /etc/resolv.conf nameserver 10.1.12.1 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 10.1.12.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 0.0.0.0 10.1.12.1 0.0.0.0 UG 0 0 0 eth0
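
    Since ifconfig already shows eth0:2 up with the right address, a hedged next step is to check the path outside the box; the gateway address is taken from the routing table above.

        arping -c 3 -I eth0 10.1.12.1    # is the gateway reachable from this interface?
        tcpdump -ni eth0 host 10.1.12.12 # does traffic for the new IP arrive at all?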

    Read the article

  • Host is missing hostname and/or domain

    - by anlawang
    I use Puppet 0.25.4 on Ubuntu 10.04. When Puppet was installed, I got the messages below: Nov 29 10:30:30 puppet puppetmasterd[4422]: Host is missing hostname and/or domain: pclient.example.com Nov 29 10:30:30 puppet puppetmasterd[4422]: Compiled catalog for pclient.example.com in 0.02 seconds I don't know how to fix it; can anyone help? Thank you! My configuration: I used apt-get to install Puppet, so some configuration was already set. puppet.conf on the client: [main] server=puppet.example.com logdir=/var/log/puppet vardir=/var/lib/puppet ssldir=/var/lib/puppet/ssl rundir=/var/run/puppet factpath=$vardir/lib/facter pluginsync=false templatedir=$confdir/templates prerun_command=/etc/puppet/etckeeper-commit-pre postrun_command=/etc/puppet/etckeeper-commit-post certname=pclient.example.com node_name=cert [puppetd] runinterval=30 puppet.conf on the server: [main] logdir=/var/log/puppet vardir=/var/lib/puppet ssldir=/var/lib/puppet/ssl rundir=/var/run/puppet factpath=$vardir/lib/facter pluginsync=true templatedir=$confdir/templates prerun_command=/etc/puppet/etckeeper-commit-pre postrun_command=/etc/puppet/etckeeper-commit-post I use the default node in site.pp. I am new to Puppet, so I don't know the reason for these problems. Thank you again!
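
    That warning usually means facter on the client cannot work out a fully qualified domain name. A hedged check and fix on the client (the IP below is an assumption):

        facter fqdn hostname domain        # an empty domain fact reproduces the warning
        echo "192.168.1.10 pclient.example.com pclient" >> /etc/hosts
        hostname -f                        # should now print pclient.example.com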

    Read the article
