Search Results

Search found 16899 results on 676 pages for 'local'.


  • How to set up a Git repo on a server with a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machines, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository. The idea is that the working dir of the repository will be the root folder for the website, so at the end of each day all changes are visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files: git push origin master [email protected]'s password: Counting objects: 3, done. Writing objects: 100% (3/3), 227 bytes, done. Total 3 (delta 0), reused 0 (delta 0) remote: error: refusing to update checked out branch: refs/heads/master remote: error: By default, updating the current branch in a non-bare repository remote: error: is denied, because it will make the index and work tree inconsistent remote: error: with what you pushed, and will require 'git reset --hard' to match remote: error: the work tree to HEAD. remote: error: remote: error: You can set 'receive.denyCurrentBranch' configuration variable to remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into remote: error: its current branch; however, this is not recommended unless you remote: error: arranged to update its work tree to match what you pushed in some remote: error: other way. remote: error: remote: error: To squelch this message and still keep the default behaviour, set remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'. To ssh://[email protected]/home/orangetux/www/ ! [remote rejected] master -> master (branch is currently checked out) error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/' So I'm wondering: is this the right way to set up a Git repo for a website? If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website? EDIT: you can't push to a non-bare repository. OK, that's clear, but what's the way to solve my problem? Create a bare repository on the server and have a clone of it on the same server in the htdocs folder? That looks a bit clumsy to me; to see the result of a commit I'd have to clone the repository each time.
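
    A common way to get this behaviour is a bare repository plus a post-receive hook that checks the pushed branch out into the web root. A minimal sketch, assuming the web root is /home/orangetux/www; the repo name site.git is illustrative:
      git init --bare /home/orangetux/site.git
      # hook that checks the pushed master branch out into the live web root
      printf '#!/bin/sh\nGIT_WORK_TREE=/home/orangetux/www git checkout -f master\n' \
          > /home/orangetux/site.git/hooks/post-receive
      chmod +x /home/orangetux/site.git/hooks/post-receive
    Clients then point their origin at the bare repo path instead of the www directory, and every push updates the site without any manual pull or re-clone.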

    Read the article

  • Primary/secondary ethernet interfaces via NetworkManager in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces, eth0, eth1 and eth2. eth2 is connected to a private network. eth0 and eth2 are connected to two different LANs. Either one will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's the default settings (And Gnome), when I boot up all the interfaces are active and my system gets three IP addresses. However any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1). I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface so all trafic goes out by default on that interface, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local... EDIT: Here's my routing table: Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 172.16.151.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2 172.16.30.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 10.1.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 0.0.0.0 172.16.30.2 0.0.0.0 UG 0 0 0 eth0 0.0.0.0 10.1.0.1 0.0.0.0 UG 0 0 0 eth1 I want the 172.16.30.2 network to be the primary one and the 10.1.0.0 network to be the secondary one. EDIT2: My nameservers are also incorrect. It seems like Ubuntu is bringing the networks up in order, eth0, then 1, then 2, and the DHCP information from eth1 is overriding eth0, and eth2 is overriding eth1. How can I reverse this so the DHCP information from eth0 is the "master"? EDIT3: This seems to be an issue with Gnome's NetworkManager.
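
    Until the DHCP ordering is sorted out, the immediate symptom can be cleared from a terminal by swapping the default route; a sketch using the gateways from the routing table above:
      sudo route del default gw 10.1.0.1 dev eth1
      sudo route add default gw 172.16.30.2 dev eth0
    A more permanent fix is to stop eth1's DHCP lease from installing a default route and DNS servers at all, for example by trimming the "request" line in /etc/dhcp3/dhclient.conf, though where NetworkManager is in charge that file may be bypassed.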

    Read the article

  • Nginx Reverse Proxy Node.js and Wordpress + Static Files Issue

    - by joemccann
    I have had quite a time trying to get nginx to serve static assets from my wordpress blog. Have a look at the config and let me know if you can help. ( https://gist.github.com/1130332 - to see the entire thing) server { listen 80; server_name subprint.com; access_log /var/www/subprint/logs/access.log; error_log /var/www/subprint/logs/error.log; root /var/www/subprint/server/public; # express serves static resources for subprint.com out of here location / { proxy_pass http://127.0.0.1:8124; root /var/www/subprint/server; access_log on; } #serve static assets location ~* ^(?!\/).+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$ { expires max; access_log off; } # the route for the wordpress blog # unfortunately the static assets (css, img, etc.) are not being pathed/served properly location /blog { root /var/www/localhost/public; index index.php; access_log /var/www/localhost/logs/access.log; error_log /var/www/localhost/logs/error.log; if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; } if (!-f $request_filename) { rewrite /blog$ /blog/index.php last; break; } } # actually serves the wordpress and subsequently phpmyadmin location ~* (?!\/blog).+\.php$ { fastcgi_pass localhost:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/localhost/public$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include /usr/local/nginx/conf/fastcgi_params; } # This works fine, but ONLY with a symlink inside the /var/www/localhost/public directory pointing to /usr/share/phpmyadmin location /phpmyadmin { index index.php; access_log /var/www/phpmyadmin/logs/access.log; error_log /var/www/phpmyadmin/logs/error.log; alias /usr/share/phpmyadmin/; if (!-f $request_filename) { rewrite /phpmyadmin$ /phpmyadmin/index.php permanent; break; } } # opt-in to the future add_header "X-UA-Compatible" "IE=Edge,chrome=1"; }
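
    Assuming WordPress actually lives in a blog/ subfolder under /var/www/localhost/public, the if() rewrites can be replaced with try_files so that existing static files are served directly and only real misses fall through to index.php. A hedged sketch of that location block (paths taken from the config above):
      location /blog {
          root /var/www/localhost/public;
          index index.php;
          try_files $uri $uri/ /blog/index.php?$args;
      }
    If WordPress is not in a blog/ subfolder, alias rather than root is the directive to reach for, since root appends the full /blog/... URI to the path.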

    Read the article

  • How to use iptables to block the IP of a device connected to an OpenWrt router

    - by scola
    I have two routers, A and B. A connects to the internet and has IP 192.168.1.1. The OpenWrt router B is bridged to A's LAN with the static IP 192.168.1.111. I am learning to use iptables to control the devices connected to B's WLAN. I connect my phone to B's Wi-Fi; the phone's IP is 192.168.1.100 and it can reach the internet normally. I want to block the phone's IP so that the phone cannot reach the internet. Following http://bredsaal.dk/some-small-iptables-on-openwrt-tips I tried: iptables -A input_wan -s 192.168.1.100 --jump REJECT iptables -A forwarding_rule -d 192.168.1.100 --jump REJECT but it does not work; the phone still reaches the internet normally. I also tried other chains (INPUT, OUTPUT, FORWARD), though the number of chains confuses me, for example: iptables -I OUTPUT -o br-lan -s 192.168.1.100 -j DROP and that does not work either. I'm fairly sure iptables itself is working; here is the current state: root@OpenWrt:/etc# iptables -L|grep Chain Chain INPUT (policy ACCEPT) Chain FORWARD (policy DROP) Chain OUTPUT (policy ACCEPT) Chain forward (1 references) Chain forwarding_lan (1 references) Chain forwarding_rule (1 references) Chain forwarding_wan (1 references) Chain input (1 references) Chain input_lan (1 references) Chain input_rule (1 references) Chain input_wan (1 references) Chain output (1 references) root@OpenWrt:/etc# ifconfig br-lan Link encap:Ethernet HWaddr 0C:82:68:97:57:BA inet addr:192.168.1.111 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::e82:68ff:fe97:57ba/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:14976 errors:0 dropped:0 overruns:0 frame:0 TX packets:7656 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:2851980 (2.7 MiB) TX bytes:1902785 (1.8 MiB) eth0 Link encap:Ethernet HWaddr 0C:82:68:97:57:BA UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:58201 errors:0 dropped:11 overruns:0 frame:0 TX packets:45012 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:54591348 (52.0 MiB) TX bytes:5711142 (5.4 MiB) Interrupt:4 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:312 errors:0 dropped:0 overruns:0 frame:0 TX packets:312 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:39961 (39.0 KiB) TX bytes:39961 (39.0 KiB) mon.wlan0 Link encap:UNSPEC HWaddr 0C-82-68-97-57-BA-00-48-00-00-00-00-00-00-00-00 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4900 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:1223807 (1.1 MiB) TX bytes:0 (0.0 B) wlan0 Link encap:Ethernet HWaddr 0C:82:68:97:57:BA UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:37346 errors:0 dropped:0 overruns:0 frame:0 TX packets:49662 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:32 RX bytes:3808021 (3.6 MiB) TX bytes:54486310 (51.9 MiB) root@OpenWrt:/etc/config# cat network config 'interface' 'loopback' option 'ifname' 'lo' option 'proto' 'static' option 'ipaddr' '127.0.0.1' option 'netmask' '255.0.0.0' config 'interface' 'lan' option 'ifname' 'eth0' option 'type' 'bridge' option 'proto' 'static' option 'ipaddr' '192.168.1.111' option 'netmask' '255.255.255.0' option 'gateway' '192.168.1.1' option dns 192.168.1.1 How should iptables be used to control the WLAN traffic? Thanks in advance, and sorry for my poor English.
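
    One thing to check first: wlan0 and eth0 are joined in the br-lan bridge, so the phone's traffic is switched rather than routed and may never traverse the IP FORWARD chain unless bridge netfilter is enabled. A sketch of that approach (the sysctl only exists when the kernel's bridge-netfilter support, a separate kmod on OpenWrt, is loaded):
      sysctl -w net.bridge.bridge-nf-call-iptables=1
      iptables -I FORWARD -s 192.168.1.100 -j DROP
      iptables -I FORWARD -d 192.168.1.100 -j DROP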

    Read the article

  • IPv6 routing to another interface

    - by Robert
    I'm trying to get an IPv6 enabled router to forward data from one interface to the other and I'm having issues. When following this example (http://www.cisco.com/en/US/tech/tk872/technologies_configuration_example09186a0080ba6106.shtml) I am able to get full connectivity between all 3 routers in my simulator. However when I try to use only 1 router; I can't get connectivity to the other interfacs on the same router. My PC is directly attached to FA 0/1 and it can ping the router's interface. However it can not ping any other interface on the router(which unless I'm missing something it should be able to do). The router on the other hand can ping everything. I thought static routes might help; but the router already has routes for everything. I'm thinking the packet should come in; router looks up the destination in it's ipv6 routing table and then realizes it's for itself, and should respond. I thought maybe it couldn't respond directly; so I tried pinging a device like 2001:0000:0000:1000::2, but i don't get a response. I'm running on IOS 12.4. I'm missing something(hopefully simple), but I just can't see what it is. With only 1 router; how do I enable my PC to talk to the other subnets? Thank you in advance, Robert Topology: R1 FA 0/0: 2001:0000:0000:0000::1/52 FA 0/1: 2001:0000:0000:1000::1/52 FA 1/0: 2001:0000:0000:2000::1/52 Loopback 0: 2001:0000:0000:3000::1/52 PC: 2001:0000:0000:2000::2/52 PC plugs directly into FA 1/0 on the router. --- Configuration --- ipv6 cef ipv6 unicast routing interface Loopback0 no ip address ipv6 address 2001:0000:0000:3000::1/52 ipv6 enable ! interface FastEthernet0/0 no ip address duplex auto speed auto ipv6 address 2001:0000:0000::1/52 ipv6 enable ! interface FastEthernet0/1 no ip address duplex auto speed auto ipv6 address 2001:0000:0000:1000::1/52 ipv6 enable ! interface FastEthernet1/0 no ip address duplex auto speed auto ipv6 address 2001:0000:0000:2000::1/52 ipv6 enable --- end of config --- --- routing table --- IPV6Lab#show ipv6 route IPv6 Routing Table - 10 entries Codes: C - Connected, L - Local, S - Static, R - RIP, B - BGP U - Per-user Static route I1 - ISIS L1, I2 - ISIS L2, IA - ISIS interarea, IS - ISIS summary O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2 ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2 C 2001:0000:0000::/52 [0/0] via ::, FastEthernet0/0 L 2001:0000:0000::1/128 [0/0] via ::, FastEthernet0/0 C 2001:0000:0000:1000::/52 [0/0] via ::, FastEthernet0/1 L 2001:0000:0000:1000::1/128 [0/0] via ::, FastEthernet0/1 C 2001:0000:0000:2000::/52 [0/0] via ::, FastEthernet1/0 L 2001:0000:0000:2000::1/128 [0/0] via ::, FastEthernet1/0 C 2001:0000:0000:3000::/52 [0/0] via ::, Loopback0 L 2001:0000:0000:3000::1/128 [0/0] via ::, Loopback0 L FE80::/10 [0/0] via ::, Null0 L FF00::/8 [0/0] via ::, Null0 --- end of routing table ---
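
    One simple thing worth ruling out is the PC lacking an IPv6 default route (or any route beyond its own /52): the directly attached interface answers because it is on-link, while every other subnet needs a next hop. A sketch for a Linux PC on the FA 1/0 segment; on Windows the equivalent is netsh interface ipv6 add route:
      ip -6 route add default via 2001:0000:0000:2000::1 dev eth0
      ping6 2001:0000:0000:1000::1    # an address on another router interface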

    Read the article

  • Creating a really public Windows network share

    - by Timur Aydin
    I want to create a shared folder under Windows (actually, Windows XP, Vista, and Win 7) which can be mounted from a linux system without prompting for a username/password. But before attempting this, I first wanted to establish that this works between two Windows 7 machines. So, on machine A (The server that will hold the public share), I created a folder and set its permissions such that Everyone has read/write access. Then I visited Control Panel - Network and Sharing Center - Advanced Sharing Settings and then selected "Turn off password protected sharing". Then, on machine B (The client that wants to access the public share with no username/password prompt), I tried to "map network driver" and I was immediately prompted by a password prompt. Some search on google suggested changing "Acconts: Limit local account use of blank passwords to console logon only" to "Disabled". Tried that, no luck, still getting username/password prompt. If I enter the username/password, I am not prompted for it again and can use the share as long as the session is active. But still, I really need to access the share without any username/password transaction whatsoever and this is not just a convenience related thing. Here is the actual reason: The device that will access this windows network share is an embedded system running uclinux. It will mount this share locally and then play media files. Its only user interface is a javascript based web page. So, if there is going to be any username/password transaction, I would have to ask the user to enter them over the web page, which will be ridiculously insecure and completely exposed to packet sniffing. After hours of doing experiments, I have found one way to make this happen, but I am not really very fond of it... I first create a new user (shareuser) and give it a password (sharepass). Then I open Group Policy Editor and set "Deny log on locally" to "A\shareuser". Then, I create a folder on A and share it so that shareuser has Read access to it. This way, shareuser cannot login to A, but can access the shared folder. And, if someone discovers the shareuser/sharepass through network sniffing, they can just access the shared folder, but can't logon to A. The same thing can be achieved by enabling the Guest user and then going to Group Policy Editor and deleting the "Guest" from the "Deny access to this computer from the network" setting. Again, Guest can mount the public share, but logging in to A as Guest won't be possible, because Guest is already not allowed to log in by default. So my question would be, how can I create a network share that is truly public, so that it can be mounted from a linux machine without requiring a password? Sorry for the long question, but I wanted to explain the reason for really needing this...
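
    For the uclinux side, once password-protected sharing is off the share can be attempted as a guest with no credentials at all, provided the kernel has CIFS support; a sketch with a made-up server address and share name:
      mount -t cifs //192.168.1.10/Media /mnt/media -o guest,ro,iocharset=utf8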

    Read the article

  • A proper way to create non-interactive accounts?

    - by AndreyT
    In order to use password-protected file sharing in a basic home network I want to create a number of non-interactive user accounts on a Windows 8 Pro machine in addition to the existing set of interactive accounts. The users that corresponds to those extra accounts will not use this machine interactively, so I don't want their accounts to be available for logon and I don't want their names to appear on welcome screen. In older versions of Windows Pro (up to Windows 7) I did this by first creating the accounts as members of "Users" group, and then including them into "Deny logon locally" list in Local Security Policy settings. This always had the desired effect. However, my question is whether this is the right/best way to do it. The reason I'm asking is that even though this method works in Windows 8 Pro as well, it has one little quirk: interactive users from "User" group are still able to see these extra user names when they go to the Metro screen and hit their own user name in the top-right corner (i.e. open "Sign out/Lock" menu). The command list that drops out contains "Sign out" and "Lock" commands as well as the names of other users (for "switch user" functionality). For some reason that list includes the extra users from "Deny logon locally" list. It is interesting to note that this happens when the current user belongs to "Users" group, but it does not happen when the current user is from "Administrators". For example, let's say I have three accounts on the machine: "Administrator" (from "Administrators", can logon locally), "A" (from "Users", can logon locally), "B" (from "Users", denied logon locally). When "Administrator" is logged in, he can only see user "A" listed in his Metro "Sign out/Lock" menu, i.e. all works as it should. But when user "A" is logged in, he can see both "Administrator" and user "B" in his "Sign out/Lock" menu. Expectedly, in the above example trying to switch from user "A" to user "B" by hitting "B" in the menu does not work: Windows jumps to welcome screen that lists only "Administrator" and "A". Anyway, on the surface this appears to be an interface-level bug in Windows 8. However, I'm wondering if going through "Deny logon locally" setting is the right way to do it in Windows 8. Is there any other way to create a hidden non-interactive user account?
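
    On the question of other ways to hide the account, one commonly used route is the SpecialAccounts registry key, which removes a user from the logon UI; a sketch from an elevated command prompt, with "B" standing in for the real account name:
      net user B SomeStrongPassword /add
      reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v B /t REG_DWORD /d 0 /f
    Whether this also removes the account from the Metro switch-user flyout is worth testing, since that list has its own quirks.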

    Read the article

  • Port forwarding not working properly

    - by sudo work
    I'm trying to host a small web server from my home network; however, I have not been able to successfully port forward ports to the local server. My current network topology looks like this: Cable Modem/Router - Secondary Wireless Router - Many computers (including server) The modem/router I'm using is a Cisco (Scientific Atlantic) DPC2100, provided by my ISP. The wireless router that I'm using as the central hub to my home network is a Linksys E3000. The computer being used as a server is running Ubuntu 10.04 Server Edition. The main issue is that I can't access the server remotely, using my WAN IP address. I have port forwarded my wireless router; however, I believe that I need to somehow set my modem to bridge mode. As far as I can tell though, this isn't possible. Here are the various IP address settings: DPC2100 WAN: 69.xxx.xxx.xxx Internal IP: 192.168.100.1 Internal Network: 192.168.7.0 E3000 IP Address: 192.168.7.2 Gateway: 192.168.7.1 Internal IP: 192.168.1.1 Internal Network: 192.168.1.0 Server IP Address: 192.168.1.123 Gateway: 192.168.1.1 Now I can do an nmap at various nodes, and here are the results (from the server): nmap localhost: 22,25,53,80,110,139,143,445,631,993,995,3306,5432,8080 open nmap 192.168.7.2: 22,25,80 (filtered),110,139,445 open (ports I have forwarded in the E3000)* nmap 69.xxx.xxx.xxx: 1720 open *For some reason, I can SSH into the server at 192.168.7.2, but not view the website. Here are also some other settings: /etc/hosts/ 127.0.0.1 localhost 127.0.1.1 servername ::1 localhost ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters /etc/apache2/sites-available/default snippet <VirtualHost *:80> DocumentRoot /srv/www/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> ... </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> ... </Directory> ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> ... </Directory> </VirtualHost> Let me know if you need any other information; some stuff probably slipped my mind.
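
    With two NAT layers it helps to prove each hop separately before changing more settings; a sketch (run the first from the inner LAN, the second from something on the modem's 192.168.7.x network if possible, the last from outside):
      curl -I http://192.168.1.123/      # is the server reachable on its own LAN?
      curl -I http://192.168.7.2/        # does the E3000's port 80 forward work?
      curl -I http://69.xxx.xxx.xxx/     # does the DPC2100 pass port 80 at all?
    If the middle step works but the last one does not, the modem itself still needs a forward (or a DMZ-host entry) pointing at 192.168.7.2, since each NAT layer needs its own rule.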

    Read the article

  • Localhost has just stopped working (using xampp)

    - by Joe Taylor
    I installed Xampp to use for local development of a Drupal site. Its been working fine out of the box until now. The main Xampp localhost welcome menu loads, however my subdirectory (localhost/drupal) doesn't. It just spins in the browser for ages and nothing happens. Just a blank screen. I've tried the edit people suggest in the hosts file but that hasn't work and I'm getting no errors so not sure what to do. Anyone have any ideas what might be wrong? PS I'm running Windows 7 edit: Log files: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41 Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41 [Tue Nov 05 20:52:07.242454 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:07.331459 2013] [core:warn] [pid 8432:tid 260] AH00098: pid file C:/xampp/apache/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run? [Tue Nov 05 20:52:07.820487 2013] [ssl:warn] [pid 8432:tid 260] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00455: Apache/2.4.4 (Win32) OpenSSL/0.9.8y PHP/5.4.16 configured -- resuming normal operations [Tue Nov 05 20:52:07.898492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00456: Server built: Feb 23 2013 13:07:34 [Tue Nov 05 20:52:07.898492 2013] [core:notice] [pid 8432:tid 260] AH00094: Command line: 'c:\xampp\apache\bin\httpd.exe -d C:/xampp/apache' [Tue Nov 05 20:52:07.905492 2013] [mpm_winnt:notice] [pid 8432:tid 260] AH00418: Parent: Created child process 7588 [Tue Nov 05 20:52:08.882548 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:09.467582 2013] [ssl:warn] [pid 7588:tid 272] AH01909: RSA certificate configured for www.example.com:443 does NOT include an ID which matches the server name [Tue Nov 05 20:52:09.534585 2013] [mpm_winnt:notice] [pid 7588:tid 272] AH00354: Child: Starting 150 worker threads. Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41 Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 123731968 bytes) in C:\xampp\apps\drupal\htdocs\sites\all\themes\directory\node--job.tpl.php on line 41
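
    The log excerpts point at PHP running out of memory rather than anything network related: the 128M memory_limit is exhausted while rendering node--job.tpl.php. A sketch of the usual remedy, using the stock XAMPP-on-Windows paths:
      # in C:\xampp\php\php.ini raise the limit, e.g.
      #     memory_limit = 256M
      # then restart Apache from the XAMPP control panel and verify:
      php -r "echo ini_get('memory_limit'), PHP_EOL;"
    If a larger limit is exhausted just as quickly, the template code around line 41 is probably looping or building a huge string and is the real thing to fix.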

    Read the article

  • Virtualbox - routing subnet to bridge adapters

    - by user42384
    Hello, I have set up a Debian Lenny box with 3 vbox Lenny machines running eth0 of the host in bridged mode (on virtualbox 3.1.6). When testing in my local LAN, this all worked perfectly well and traffic flowed to and from the IPs of the virtual machines as it should. However, now that it's in its co-lo home, the networking setup is a bit different, and I'm unable to get traffic to flow to the vboxes properly. Specifically, the host has its own Primary IP, and I have a separate subnet of 8 (6 usable) IPs routed to the box for use by the vboxes. So, eth0 on host is: Machine IP: 2x.x.x.137 Gateway IP: 2x.x.x.138 Subnet Msk: 255.255.255.252 Subnet for vboxes is Subnet: 2x.x.x.240/29 Netmask: 255.255.255.248 vbox1 is configured to 2x.x.x.241 on eth0 as follows: auto eth0 iface eth0 inet static address 2x.x.x.241 netmask 255.255.255.248 Setting up a virtual interface (eth0:0) on the host with one of these subnet IPs allows me to ping to that address only from vbox1, and it allows me to ping vbox1 from the host. I can also ping that virtual interface perfectly well from outside, so the IPs are definitely landing at my machine. It seems I'm missing some sort of routing instruction either on the host or vbox1 to get traffic moving between the subnet and the default gateway, but I can't seem to figure out what it should be, or what glaringly obvious thing i'm missing. Most of my obvious attempts (the gw of eth0, the ip of eth0) were rejected by route command with SIOCADDRT: No such device (eg - i can't find it). I tried setting vbox1 to bridge on eth0:0, but this was not an acceptable device name and VBoxHeadless refused to start. The physical machine does have an unused physical NIC at eth1 that can be used if necessary for something or other. Host machine is running iptables configured by ferm, have experimented with it allowing forwarding for that subnet, but I wouldn't have thought this was necessary given the nature of the virtualbox devices (nor did it actually work). Clearing out all of these rules for a blank iptables set does not resolve the issue. (you can see ferm generated iptables at http://codedumper.com/ojaze) Thanks for any help you can give... Patrick
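
    A common shape for this kind of setup is to let the host route the /29 itself: keep one unused address from the subnet on the host (as the eth0:0 alias already does), enable forwarding, and point the guests' default route at that alias. A sketch with placeholder host numbers, since the real prefix is masked above:
      # on the host: own one spare address from the routed /29 and forward for the rest
      sysctl -w net.ipv4.ip_forward=1
      ip addr add 2x.x.x.246/29 dev eth0 label eth0:0
      # on each guest (vbox1 etc.): use that alias as the default gateway
      ip route add default via 2x.x.x.246
    The host then forwards guest traffic out via its own gateway 2x.x.x.138, and any iptables FORWARD policy has to permit it.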

    Read the article

  • Can't send with Postfix, but I can with one user

    - by CvR_XX
    I have a postfix and dovecot server but when i try to send an email i get an time -out. Im trying to send with the email [email protected]. A telnet session isn't helping much ether. I get a blank screen. Local it's working fine. My smtp service is running on treadity.com:25. The strange thing is that the logs are completely empty with any info regarding sending emails. Receiving is working alright. Another strange thing is that i've send some message's and that it worked. But that is only with one email. I can still send from that account but other emails are failing any idea's? config file: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key #smtpd_use_tls=yes #smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache #smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_tls_cert_file=/etc/ssl/certs/dovecot.pem smtpd_tls_key_file=/etc/ssl/private/dovecot.pem smtpd_use_tls=yes # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key #smtpd_use_tls=yes #smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache #smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_tls_cert_file=/etc/ssl/certs/dovecot.pem smtpd_tls_key_file=/etc/ssl/private/dovecot.pem smtpd_use_tls=yes # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters #smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem #smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key #smtpd_use_tls=yes #smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache #smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_tls_cert_file=/etc/ssl/certs/dovecot.pem smtpd_tls_key_file=/etc/ssl/private/dovecot.pem smtpd_use_tls=yes smtpd_tls_auth_only = yes #Enabling SMTP for authenticated users, and handing off authentication to Dovecot smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth smtpd_sasl_auth_enable = yes 1,1 Top
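
    A timeout on outbound mail with nothing at all in the logs usually means the connection never leaves the box; many providers block outbound port 25 except to their own relay. A quick sketch to test from the server itself:
      # can the server reach a remote MX on port 25 at all?
      telnet gmail-smtp-in.l.google.com 25
      # what is stuck in the queue, and what does the log say while a message is deferred?
      postqueue -p
      tail -f /var/log/mail.log
    If the telnet hangs, the address that does work may simply be delivering locally or to a domain hosted on the same machine, and the fix for the rest is to relay outbound mail through the provider's smarthost (relayhost in main.cf).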

    Read the article

  • SSH Connection Refused - Debug using Recovery Console

    - by olrehm
    Hey everyone, I have found a ton of questions answered about debugging why one cannot connect via SSH, but they all seem to require that you can still access the system - or say that without that nothing can be done. In my case, I cannot access the system directly, but I do have access to the filesystem using a recovery console. So this is the situation: My provider made some kernel update today and in the process also rebooted my server. For some reason, I cannot connect via SSH anymore, but instead get a ssh: connect to host mydomain.de port 22: Connection refused I do not know whether sshd is just not running, or whether something (e.g. iptables) blocks my ssh connection attempts. I looked at the logfiles, none of the files in /var/log contain any mentioning on ssh, and /var/log/auth.log is empty. Before the kernel update, I could log in just fine and used certificates so that I would not need a password everytime I connect from my local machine. What I tried so far: I looked in /etc/rc*.d/ for a link to the /etc/init.d/ssh script and found none. So I am expecting that sshd is not started properly on boot. Since I cannot run any programs in my system, I cannot use update-rc to change this. I tried to make a link manually using ln -s /etc/init.d/ssh /etc/rc6.d/K09sshd and restarted the server - this did not fix the problem. I do not know wether it is at all possible to do it like this and whether it is correct to create it in rc6.d and whether the K09 is correct. I just copied that from apache. I also tried to change my /etc/iptables.rules file to allow everything: # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *mangle :PREROUTING ACCEPT [7468813:1758703692] :INPUT ACCEPT [7468810:1758703548] :FORWARD ACCEPT [3:144] :OUTPUT ACCEPT [7935930:3682829426] :POSTROUTING ACCEPT [7935933:3682829570] COMMIT # Completed on Thu Dec 10 18:05:32 2009 # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *filter :INPUT ACCEPT [7339662:1665166559] :FORWARD ACCEPT [3:144] :OUTPUT ACCEPT [7935930:3682829426] -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7 -A INPUT -j ACCEPT -A FORWARD -j ACCEPT -A OUTPUT -j ACCEPT COMMIT # Completed on Thu Dec 10 18:05:32 2009 # Generated by iptables-save v1.4.0 on Thu Dec 10 18:05:32 2009 *nat :PREROUTING ACCEPT [101662:5379853] :POSTROUTING ACCEPT [393275:25394346] :OUTPUT ACCEPT [393273:25394250] COMMIT # Completed on Thu Dec 10 18:05:32 2009 I am not sure this is done correctly or has any effect at all. I also did not find any mentioning of iptables in any file in /var/log. So what else can I do? Thank you for your help.
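
    Two details matter for the symlink approach: Debian's default runlevel is 2, not 6 (rc6.d only runs at reboot), and links that start a service use an S prefix while K links stop it. A sketch of what to create from the recovery console:
      rm -f /etc/rc6.d/K09sshd
      ln -s ../init.d/ssh /etc/rc2.d/S20ssh
      # from a normally booted system, "update-rc.d ssh defaults" creates the same links
    The iptables rules quoted above already accept port 22, so getting sshd started is the more likely missing piece.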

    Read the article

  • Random Computer Crashes

    - by Josh W.
    Ok, here's a wierd one for you all. Occasionally my PC here at work will crash in a very peculiar way. My dual monitors will suddenly go blank as if there is no longer a video signal, the USB mouse light will go dark and mouse stays unresponsive, the keyboard lights will not change status when the appropriate keys are pressed (Num/Caps/Scoll Lock). The CD Tray WILL open and close. But the computer will not respond to a ping request. For all intents & purposes it's as if the computer is off, except it wasn't intentional by me. The power light and internal fans are still on and I've now lost any unsaved work. Now here's where it gets wierd. This PC is part of a batch of PC's we got from a local vendor who does our initial system builds. Mine, and 6 other co-workers PCs all have the same issue. Originally we thought it was a bad combination of hardware, but through trial and error the only thing we haven't eliminated are the OS, Mobo & CPU. The problem was so bad for some of them that they ended up going back to their 5 year old dinosaurs in order to get some work done, for me the problem isn't as bad, maybe once every other day or so, but still enough to bite me in the ass if I've forgotten my ritualistic pressing of CTRL-S every 1-5 minutes. In this case we've tried two different video cards, two different power supplies, two different memory configurations, running on a UPS/not on UPS, updating/rolling back video drivers, three different bios revisions. The only things we haven't swapped are the mobo & cpu, mainly because a new mobo means a new Hardware Abstraction Layer, ie re-install of windows and there's alot of other software on this PC that takes forever to reload by hand. There was a base image that our systems team created with all the drivers installed and the basic setup of software our company uses, but they then must customize the setup for us programmers so it takes a while to get a new configuration up and going. I'm a programmer by day and am usually pretty good at diagnosing computer problems whether through trial and error or not. We've pretty much exhausted all the ideas we can think of here, short of a new mobo/cpu. Was hoping someone out there might have anything else we can try.. Relevant Parts: OS: XP Pro 32-bit Motherboard: Intel DG41RQ CPU: Intel Core-2 Quad Q9400 @ 2.66GHz Current BIOS Version/Date Intel Corp. RQG4110H.86A.0014.2010.0306.1151, 3/6/2010 Dual LCD's, Viewsonic VG930m & Samsung SyncMaster 910v (other people have different models, but listed in case there's some very wierd problem with the signals being sent/received) PS/2 Keyboard USB Microsoft Intellimouse BIOS Versions Tried: R 0013 12/23/2009 R 0014 3/6/2010 Video cards Tried GeForce 8400 GS Radeon HD 4350 - ASUS EAH4350 Two Different Power Supplies a 380W & 550W Ram Configurations 2GB - 1 x 2GB 4GB - 2 x 2GB

    Read the article

  • Too many Tunnel Adapter Interfaces

    - by Tomas Lycken
    If I open a command prompt on my machine and type ipconfig /all, I see lots of entries like Tunnel adapter Local Area Connection* 9: Media state . . . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . . . : Description . . . . . . . . . . . . . : Microsoft 6to4 Adapter #5 Physical address. . . . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP Enabled. . . . . . . . . . . . . : No Autoconfiguration Enabled . . . . . . : Yes In fact, there are so many of them that my "real" adapters are pushed out of the stack and can't be seen anymore. Is there any flag I can use on ipconfig to hide all virtual interfaces? Or is there some other way around this problem? Since they always say "Media disconnected" I suppose disabling could be an option, but if possible I'd rather not turn any functionality off. I just want to control what output I get from ipconfig. Also, I know these are related to IPv6 stuff. However, most of what I find on Google merely states what these are, and that they're harmless - nothing about hiding/removing them.
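
    There is no ipconfig flag that filters them out, but if disabling the IPv6 transition interfaces turns out to be acceptable, the usual commands from an elevated prompt are:
      netsh interface 6to4 set state disabled
      netsh interface teredo set state disabled
      netsh interface isatap set state disabled
    Reverting each to "set state default" restores the out-of-the-box behaviour, so the change is easy to undo.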

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't that much of a problem security-wise because no application can be reached behind the web server) but generates thousands of 404 per day, sometimes per hour. I am looking into ways of blocking those requests but it's tricky (you want to make sure you don't block legitimate traffic and these bots are becoming more and more clever at looking like they're legit) and is going to take me a while to find an acceptable solution. In the meantime I would like to reduce the performance impact of serving those 404 pages. Indeed we're using nginx which by default is configured to serve it's 404 page from the disk (This can be changed using the error_page directive but in the end the 404 will either have to be served from disk or from another external source (e.g. upstream application which would be worst)) which isn't ideal. I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all, in the other case I hit a missing page and nginx serves its 404 from disk. server { # [...] the default nginx stuff location / { } location /this_page_exists { echo "this page was found"; } } Here are the test results (my laptop has Intel(R) Core(TM) i7-2670QM + SSD in case you're wondering why they are so high): $ ab -n 500000 -c 1000 http://localhost/this_page_exists Requests per second: 25609.16 [#/sec] (mean) $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists Requests per second: 22905.72 [#/sec] (mean) As you can see, returning a value with echo is 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly I would like to echo a simple 404 Page not Found string from nginx. I tried many things so far but they all failed, essentially the idea was this: location / { try_files $uri @not_found; } location @not_found { echo "404 - Page not found"; } The problem is that as soon as the echo directive is used, the http response code is set to 200. I tried changing that by doing error_page 200 = 400 but that breaks the configuration. How can I serve a 404 page directly from nginx? (without hacking the source which may be might next step)
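
    nginx can emit both the body and the 404 status in one step with the stock return directive, which touches neither the disk nor an upstream and sidesteps the 200-status problem of echo; a sketch:
      location / {
          try_files $uri @not_found;
      }
      location @not_found {
          default_type text/plain;
          return 404 "404 - Page not found";
      }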

    Read the article

  • Exchange-Server Query

    - by Rudi Kershaw
    First, a little background. I've recently been taken on as a web and software developer for a small company, who has no other in-house IT support. They've been asking my opinion on lots of IT subjects that are quite far out of my comfort zone. I'm definitely not a network admin. Their IT consultancy contractor is pushing them to upgrade their dedicated exchange server, even though it seems like the one they currently have has a lot of life left in it and is running problem free. They say it's "coming to the natural end of it's life". They want to install a monster with a Xeon E5-2420, 32GB RAM, 2x 1TB HDDs, Windows Server 2012 and Microsoft Exchange 2010. They want to charge a small fortune for it. Basically, this system seems massively over the top seeing as it won't be doing anything else other than running as an exchange server for a company with less than 25 email accounts. My employers also have a file server system in-house that hosts three web apps, an SQL server, their local domain, print server and shared folders. That machine is using the same specs as the proposed new one, and it is barely using any of it's potential. I asked if Microsoft Exchange 2010 could be installed on their file server, but they said that MS Exchange can't run on the same system as an SQL server because for some reason they will eat up each others resources (even though the SQL server isn't touching 1% of the current system's CPU or RAM). My question is really, are they trying to rip my employers off? Could MS Exchange be installed on their other server (on a virtual instance or not), or does the old one even need replacing at all? Going with their current suggestion will cost the company in excess of £6k, and it seems entirely unnecessary. I apologies, because I know this is probably a little thin on details, but if I carry on I could end up writing a massive essay that no-one will want to read. I've been doing my research, but I'm not knowledgeable enough make any hard decisions. Let me know if you need any more details. Thank you for any help you can offer. Further Details: The new exchange would need to support Outlook Web App, 25 users, a few public mailboxes, and email exchange with Blackberries.

    Read the article

  • I can't connect to mysql on a remote server

    - by eisaacson
    I'm trying to connect from an Ubuntu server to a RHEL6 server using mysql. I've tried telneting into the server as well as trying to connect with mysql. I've tried commenting out the bind-address but didn't have any success with that either. I don't get an error code or anything with telnet. It just fails after a minute or so. With mysql, I get this error code ERROR 2003 (HY000): Can't connect to MySQL server on 'SERVER_IP' (111). "SERVER_IP" is of course a placeholder where actual error gives that actual IP. I've included my my.cnf as well as well as my iptables from the destination server. On Destination Server... my.cnf: [mysqld] bind-address=0.0.0.0 tmp_table_size=512M max_heap_table_size=512M sort_buffer_size=32M read_buffer_size=128K read_rnd_buffer_size=256K table_cache=2048 key_buffer_size=512M thread_cache_size=50 query_cache_type=1 query_cache_size=256M query_cache_limit=24M #query_alloc_block_size=128 #query_cache_min_res_unit=128 innodb_log_buffer_size=16M innodb_flush_log_at_trx_commit=2 innodb_file_per_table innodb_log_files_in_group=2 innodb_buffer_pool_size=32G innodb_log_file_size=512M innodb_additional_mem_pool_size=20M join_buffer_size=128K max_allowed_packet=100M max_connections=256 wait_timeout=28800 interactive_timeout=3600 # modify isolation method for faster inserting. # Do not uncomment the line below unless you understand what this does. # transaction-isolation = READ-COMMITTED # do not reverse lookup clients skip-name-resolve #long_query_time=6 #log_slow_queries=/var/log/mysqld-slow.log #log_queries_not_using_indexes=On #log_slow_admin_statements=On datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 #Added by Magento ECG long_query_time=1 slow_query_log [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid iptables: :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 3306 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 225 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp -i eth1 --dport 11211 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT sudo netstat -ntpl Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:2123 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:1581 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN - tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN - tcp 0 0 :::11211 :::* LISTEN - tcp 0 0 :::22 :::* LISTEN - tcp 0 0 :::225 :::* LISTEN -
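
    Since mysqld is bound to 0.0.0.0 and the iptables INPUT chain accepts 3306 before its REJECT rule, it is worth confirming where the refusal actually comes from, hop by hop; a sketch (placeholders in capitals):
      # from the Ubuntu client: does anything answer on 3306?
      nc -zv SERVER_IP 3306
      # on the RHEL6 server: watch whether the SYN even arrives
      tcpdump -ni eth0 tcp port 3306
      # then try the client itself
      mysql -h SERVER_IP -u USER -p
    If tcpdump never sees the SYN, something between the two hosts (a provider firewall or security group) is answering with the refusal, not this box.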

    Read the article

  • ASA5505 novice: setting up Outside, Inside, and a DMZ as a guest network

    - by GriffJ
    I need a little help in developing a config for our ASA5505. I'm an MCSA/MCITPAS but I don't have a lot of practical cisco experience. Here is what I need help with, we currently have a PIX as our boarder gateway and well it's antiquated and it only has a 50 user license which means I'm constantly clearing local-host throughout the day as people complain. I discovered that the last IT person bought at couple ASA5505s and they've been sitting in the back of a cupboard. So far I've duplicated the configuration from the pix to the asa but as I was going to be going this far I thought I'd go further and remove another old cisco router that was used only for the guest network, I know the asa can do both jobs. So I'm going to paste a scenario I wrote up with the actual IPs changed to protect the innocent. ... Outside Network: 1.2.3.10 255.255.255.248 (we have a /29) Inside Network: 10.10.36.0 255.255.252.0 DMZ Network: 192.168.15.0 255.255.255.0 Outside Network on e0/0 DMZ Network on e0/1 Inside Network on e0/2-7 DMZ Network has DHCPD Enabled. DMZ DHCPD Pool is 192.168.15.50-192.168.15.250 DMZ Network needs to be able to see DNS on Inside Network at 10.10.37.11 and 10.10.37.12 DMZ Network needs to be able to access webmail on inside network at 10.10.37.15 DMZ Network needs to be able to access business website on inside network at 10.10.37.17 DMZ Network needs to be able to access the outside network (access to the internet). Inside Network has NO DHCPD. (dhcp is handled by domain controller) Inside Network needs to be able to see anything on the DMZ network. Inside Network needs to be able to access the outside network (access to the internet). There is some access-list stuff already, some static mapping already. Maps external IPs from our ISP to our inside server IPs static (inside,outside) 1.2.3.11 10.10.37.15 netmask 255.255.255.255 static (inside,outside) 1.2.3.12 10.10.37.17 netmask 255.255.255.255 static (inside,outside) 1.2.3.13 10.10.37.20 netmask 255.255.255.255 Allows access to our Webserver/Mailserver/VPN from the Outside. access-list 108 permit tcp any host 1.2.3.11 eq https access-list 108 permit tcp any host 1.2.3.11 eq smtp access-list 108 permit tcp any host 1.2.3.11 eq 993 access-list 108 permit tcp any host 1.2.3.11 eq 465 access-list 108 permit tcp any host 1.2.3.12 eq www access-list 108 permit tcp any host 1.2.3.12 eq https access-list 108 permit tcp any host 1.2.3.13 eq pptp Here is all the NAT and route stuff I have so far. global (outside) 1 interface global (outside) 2 1.2.3.11-1.2.3.14 netmask 255.255.255.248 nat (inside) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 route outside 0.0.0.0 0.0.0.0 1.2.3.9 1
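
    For the DMZ-to-inside holes, a hedged sketch in ASA 8.2-style syntax (the ACL name is made up; addresses come from the scenario above) would look something like:
      access-list dmz_access_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.11 eq domain
      access-list dmz_access_in extended permit udp 192.168.15.0 255.255.255.0 host 10.10.37.12 eq domain
      access-list dmz_access_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.15 eq https
      access-list dmz_access_in extended permit tcp 192.168.15.0 255.255.255.0 host 10.10.37.17 eq www
      access-group dmz_access_in in interface dmz
    On 8.2-era code the DMZ-to-inside flows may also need static identity NAT or NAT exemption, and DNS sometimes needs TCP 53 as well as UDP.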

    Read the article

  • Installing and configuring Zend Framework 2 server-wide [Ubuntu] and test driving ZendSkeletonApplication

    - by kinologik
    I'm trying to have ZF2 installed for all my subdomains at once (Ubuntu 12.04). ZF2 just launched its first stable version, so I wanted to install it on my development server and finally get my hands dirty with it. I downloaded ZF2 and unzipped the files in /var/ZF2/ (which now contains Zend/[all components]). I then edited /etc/php5/apache2/php.ini and added the path to the ZF2 files: include_path = ".:/var/ZF2" I then downloaded the ZendSkeletonApplication and unzipped it in /var/www/skeleton. I know it is suggested to composer.phar to install ZF2 application, but: I don't want to make a local installation of ZF2... I want to make a server-wide installation be able to use my Zend components on all my domains/subdomains on my development server. Before using any automatic installation process, I'd really like to understand that process by doing it manually at first. Obviously, something goes wrong when I fire ZendSkeletonApplication, and I get the following when hit the following URL: http://www.myDevServer.com/skeleton/public/ Fatal error: Uncaught exception 'RuntimeException' with message 'Unable to load ZF2. Run `php composer.phar install` or define a ZF2_PATH environment variable.' in /var/www/skeleton/init_autoloader.php:48 Stack trace: #0 /var/www/skeleton/public/index.php(9): include() #1 {main} thrown in /var/www/skeleton/init_autoloader.php on line 48 I have skimmed through the docs, tutorials and the like, but there are no straight forward answer to this kind of configuration. In the official doc, in the (very short) installation chapter, I see a reference to adding an include path in PHP. But no example... http://zf2.readthedocs.org/en/latest/ref/installation.html Once you have a copy of Zend Framework available, your application needs to be able to access the framework classes found in the library folder. Though there are several ways to achieve this, your PHP include_path needs to contain the path to Zend Framework’s library. But then, when I get to the "Getting Started" chapter, it's all composer.phar and nothing else... http://zf2.readthedocs.org/en/latest/user-guide/skeleton-application.html I'm no sysAdmin, just a Zend enthusiast. I'm pretty sure this PEBKAC problem might be obvious for those who already got in ZF2 previous betas. Thanks for helping my out. EDIT: Problem was resolved, thanks to Daniel M. Just setting up ZF2_PATH in httpd.conf was all that was needed. SetEnv ZF2_PATH /var/ZF2 I also removed the include_path reference in php.ini and everything works just fine. So I have no idea why Zend suggested to include it there in their official docs.
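
    The fix from the edit, spelled out as commands (Ubuntu 12.04 Apache paths; the conf.d file name is arbitrary):
      echo 'SetEnv ZF2_PATH /var/ZF2' | sudo tee /etc/apache2/conf.d/zf2.conf
      sudo service apache2 restart
    Because ZF2_PATH is read by init_autoloader.php, this makes the framework visible to every vhost on the box without a per-project composer install.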

    Read the article

  • APC PHP cache size does not exceed 32MB, even though settings allow for more

    - by hardy101
    I am setting up APC (v 3.1.9) on a high-traffic WordPress installation on CentOS 6.0 64 bit. I have figured out many of the quirks with APC, but something is still not quite right. No matter what settings I change, APC never actually caches more than 32MB. I'm trying to bump it up to 256 MB. 32MB is a default amount for apc.shm_size, so I am wondering if it's stuck there somehow. I have run the following echo '2147483648' > /proc/sys/kernel/shmmax to increase my system's shared memory to 2G (half of my 4G box). Then ran ipcs -lm which returns ------ Shared Memory Limits -------- max number of segments = 4096 max seg size (kbytes) = 2097152 max total shared memory (kbytes) = 8388608 min seg size (bytes) = 1 Also made a change in /etc/sysctl.conf then ran sysctl -p to make the settings stick on the server. Rebooted, too, for good measure. In my APC settings, I have mmap enabled (which happens by default in recent versions of APC). php.ini looks like: apc.stat=0 apc.shm_size="256M" apc.max_file_size="10M" apc.mmap_file_mask="/tmp/apc.XXXXXX" apc.ttl="7200" I am aware that mmap mode will ignore references to apc.shm_segments, so I have left it out with default 1. phpinfo() indicates the following about APC: Version 3.1.9 APC Debugging Disabled MMAP Support Enabled MMAP File Mask /tmp/apc.bPS7rB Locking type pthread mutex Locks Serialization Support php Revision $Revision: 308812 $ Build Date Oct 11 2011 22:55:02 Directive Local Value apc.cache_by_default On apc.canonicalize O apc.coredump_unmap Off apc.enable_cli Off apc.enabled On On apc.file_md5 Off apc.file_update_protection 2 apc.filters no value apc.gc_ttl 3600 apc.include_once_override Off apc.lazy_classes Off apc.lazy_functions Off apc.max_file_size 10M apc.mmap_file_mask /tmp/apc.bPS7rB apc.num_files_hint 1000 apc.preload_path no value apc.report_autofilter Off apc.rfc1867 Off apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 256M apc.slam_defense On apc.stat Off apc.stat_ctime Off apc.ttl 7200 apc.use_request_time On apc.user_entries_hint 4096 apc.user_ttl 0 apc.write_lock On apc.php reveals the following graph, no matter how long the server runs (cache size fluctuates and hovers at just under 32MB. See image http://i.stack.imgur.com/2bwMa.png You can see that the cache is trying to allocate 256MB, but the brown piece of the pie keeps getting recycled at 32MB. This is confirmed as refreshing the apc.php page shows cached file counts that move up and down (implying that the cache is not holding onto all of its files). Does anyone have an idea of how to get APC to use more than 32 MB for its cache size?? **Note that the identical behavior occurs for eaccelerator, xcache, and APC. I read here: http://www.litespeedtech.com/support/forum/archive/index.php/t-5072.html that suEXEC could cause this problem.
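
    Before digging further it is worth confirming which php.ini the web SAPI actually reads, since under suEXEC/FastCGI each pool can load a different one and silently fall back to the 32M default; a sketch (probe path is arbitrary):
      # what the CLI sees
      php -i | grep -iE 'loaded configuration|apc.shm_size'
      # what the web SAPI sees
      echo '<?php echo php_ini_loaded_file(), " ", ini_get("apc.shm_size");' > /var/www/html/ini-probe.php
      curl http://localhost/ini-probe.php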

    Read the article

  • Is there a Distributed SAN/Storage System out there?

    - by Joel Coel
    Like many other places, we ask our users not to save files to their local machines. Instead, we encourage that they be put on a file server so that others (with appropriate permissions) can use them and that the files are backed up properly. The result of this is that most users have large hard drives that are sitting mainly empty. It's 2010 now. Surely there is a system out there that lets you turn that empty space into a virtual SAN or document library? What I envision is a client program that is pushed out to users' PCs that coordinates with a central server. The server looks to users just like a normal file server, but instead of keeping entire file contents it merely keeps a record of where those files can be found among various user PCs. It then coordinates with the right clients to serve up file requests. The client software would be able to respond to such requests directly, as well as be smart enough to cache recent files locally. For redundancy the server could make sure files are copied to multiple PCs, perhaps allowing you to define groups in different locations so that an instance of the entire repository lives in each group to protect against a disaster in one building taking down everything else. Obviously you wouldn't point your database server here, but for simpler things I see several advantages: Files can often be transferred from a nearer machine. Disk space grows automatically as your company does. Should ultimately be cheaper, as you don't need to keep a separate set of disks I can see a few downsides as well: Occasional degradation of user pc performance, if the machine has to serve or accept a large file transfer during a busy period. Writes have to be propogated around the network several times (though I suspect this isn't really much of a problem, as reading happens in most places more than writing) Still need a way to send a complete copy of the data offsite occasionally, and this would make it very hard to do differentials Think of this like a cloud storage system that lives entirely within your corporate LAN and makes use of your existing user equipment. Our old main file server is due for retirement in about 2 years, and I'm looking into replacing it with a small SAN. I'm thinking something like this would be a better fit. As a school, we have a couple computer labs I can leave running that would be perfect for adding a little extra redundancy to the system. Unfortunately, the closest thing I can find is Dienst, and it's just a paper that dates back to 1994. Am I just using the wrong buzzwords in my searches, or does this really not exist? If not, is there a big downside that I'm missing?

    Read the article

  • Can Remote Desktop Services be deployed and administered by PowerShell alone, without a Domain in WIndows Server 2012 and 2012 R2?

    - by Warren P
    Windows Server 2008 R2 allowed deployment of Terminal Server (Remote Desktop Services) without a domain, and without any insistence on domains. This was very useful, especially for standalone virtual or cloud deployments of a server that is managed remotely for a remote client who has no need or desire for any ActiveDirectory or Domain features. This has become steadily more and more difficult as Microsoft restricts its technologies further and further in each Windows release. With Windows Server 2012, configuring licensing for Remote Desktop Services, is more difficult when not on a domain, but possible still. With Windows Server 2012 R2 (at least in the preview) the barriers are now severe: The Add/Remove Roles and Features wizard in Windows Server 2012 R2 has a special RDS deployment mode that has a rule that says if you aren't on a domain you can't deploy. It tells you to create or join a domain first. This of course comes in direct conflict with the fact that an Active Directory domain controller should not be the same machine as a terminal server machine. So Microsoft's technology is not such much a Cloud Operating System as a Cluster of Unwanted Nodes, needed to support the one machine I actually WANT to deploy. This is gross, and so I am trying to find a workaround. However if you skip that wizard and just go check the checkboxes in the main Roles/Features wizard, you can deploy the features, but the UI is not there to configure them, and when you go back to the RDS configuration page on the roles wizard, you get a message saying you can not administer your Remote Desktop Services system when you are logged in as a Local-Computer Administrator, because although you have all admin priveleges you could have (in your workgroup based system), the RDS configuration UI will not accept those credentials and let you continue. My question in brief is, can I still somehow, obtain the following end result: I need to allow 10-20 users per system to have an RDS (TS) session. I do not need any of the fancy pants RDS options, unless Microsoft somehow depends on those features being present. I believe I need the "RDS Session Host" as this is the guts of "Terminal Server". Microsoft says it is "full Windows desktop for Remote Desktop Services client. I need to configure licensing so that the Grace Period does not expire leaving my RDS non functional, so this probably means I need a way to configure TS CALs. If all of the above could technically be done with the judicious use of the PowerShell, I am prepared to even consider developing all the PowerShell scripts I would need to do the above. I'm not asking someone to write that for me. What I'm asking is, does anyone know if there is a technical impediment to what I want to do above, other than the deliberate crippling of the 2012 R2 UI for Workgroup users? Would the underlying technologies all still work if I manipulate and control them from a PowerShell script? Obviously a 1 word Yes or No answer isn't that useful to anyone, so the question is really, yes or no, and why? In the case the answer is Yes, then how.

    Read the article

  • Network Services disabled (not starting) on Windows XP

    - by Rickesh John
    I am currently running Windows XP Service Pack 3 on my system. Today, when I failed to connect to the internet via a LAN cable, I realized that almost all of the vital network services had stopped functioning. Any attempt to start them through services.msc gives me the following message: "Could not start the DNS Client Service on Local Computer. Error 1068: The dependency service group failed to start." All my networking-related software and services have stopped functioning; for example, Windows Firewall is turned off permanently, and so are my Avast Anti-Virus Real-Time Shields and Web Shield. When I insert the LAN cable into my laptop, it registers, but C:\>ping localhost returns "Unable to contact IP driver, error code 2". Moreover, with ipconfig I get this: "Windows IP Configuration. An internal error occurred: The request is not supported. Please contact Microsoft Product Support Services for further help. Additional Information: Unable to query host name." On some further poking around, I saw that none of the "NETWORK SERVICE" processes in Task Manager, except svchost.exe, were running. Also, when I first opened Task Manager, I saw some 20 processes running with the username column empty for most of them. From some searching on Google, I found out that these services are important: DHCP Client, DNS Client, Net Logon, Network Connections, Network Location Awareness, TCP/IP NetBIOS Helper. None of them, except Network Connections, is working; they simply do not start. The Event Viewer on my system shows a bunch of 7000 and 7001 event errors. I have tried reinstalling the network driver, booting into safe mode with networking, and trying to enable the services mentioned above. I had disabled System Restore some time back, so I have no restore points for my system. I tried a lot of things from Google searches but none of them worked. Also, with such a long list of issues, I am a little confused as to what I should search for on the internet. :( One more thing I would like to mention: the previous morning, my anti-virus Avast detected a rootkit buried deep in my system folders. It was removed, but maybe this problem was caused by the rootkit. I did run a boot-time scan but no viruses were found. Please, please advise. Is formatting and reinstalling Windows my only option?
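    One thing I am considering trying next (purely a guess, on the assumption that the rootkit corrupted the Winsock catalog and TCP/IP configuration) is resetting the stack and then re-checking the services that Error 1068 depends on:

        rem Sketch only; these are built-in XP SP3 commands, but I don't know
        rem yet whether a Winsock/TCP-IP reset addresses this particular damage.
        netsh winsock reset catalog
        netsh int ip reset resetlog.txt
        rem After a reboot, check the dependency chain for DNS Client and friends:
        sc query afd
        sc query dhcp
        sc query dnscache
        sc query netbt

    Would that be a sensible step before resorting to a full reinstall?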

    Read the article

  • Two-Hop SSH connection with two separate public keys

    - by yigit
    We have the following ssh hop setup: localhost -> hub -> server hubuser@hub accepts the public key for localuser@localhost. serveruser@server accepts the public key for hubuser@hub. So we are issuing ssh -t hubuser@hub ssh serveruser@server for connecting to server. The problem with this setup is we can not scp directly to the server. I tried creating .ssh/config file like this: Host server user serveruser port 22 hostname server ProxyCommand ssh -q hubuser@hub 'nc %h %p' But I am not able to connect (yigit is localuser): $ ssh serveruser@server -v OpenSSH_6.1p1, OpenSSL 1.0.1c 10 May 2012 debug1: Reading configuration data /home/yigit/.ssh/config debug1: /home/yigit/.ssh/config line 19: Applying options for server debug1: Reading configuration data /etc/ssh/ssh_config debug1: Executing proxy command: exec ssh -q hubuser@hub 'nc server 22' debug1: permanently_drop_suid: 1000 debug1: identity file /home/yigit/.ssh/id_rsa type 1000 debug1: identity file /home/yigit/.ssh/id_rsa-cert type -1 debug1: identity file /home/yigit/.ssh/id_dsa type -1 debug1: identity file /home/yigit/.ssh/id_dsa-cert type -1 debug1: identity file /home/yigit/.ssh/id_ecdsa type -1 debug1: identity file /home/yigit/.ssh/id_ecdsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH_5* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.1 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: sending SSH2_MSG_KEX_ECDH_INIT debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: Server host key: ECDSA cb:ee:1f:78:82:1e:b4:39:c6:67:6f:4d:b4:01:f2:9f debug1: Host 'server' is known and matches the ECDSA host key. debug1: Found key in /home/yigit/.ssh/known_hosts:33 debug1: ssh_ecdsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/yigit/.ssh/id_rsa debug1: Authentications that can continue: publickey debug1: Trying private key: /home/yigit/.ssh/id_dsa debug1: Trying private key: /home/yigit/.ssh/id_ecdsa debug1: No more authentication methods to try. Permission denied (publickey). Notice that it is trying to use the public key localuser@localhost for authenticating on server and fails since it is not the right one. Is it possible to modify the ProxyCommand so that the key for hubuser@hub is used for authenticating on server?
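    For reference, this is the direction I have been experimenting with. It assumes the public key of localuser@localhost is also added to serveruser@server's authorized_keys, because as far as I can tell the ProxyCommand tunnel still authenticates end-to-end from my local machine, so the key that lives on hub cannot be offered from here:

        # ~/.ssh/config on localhost; a sketch under the assumption above
        Host hub
            HostName hub
            User hubuser

        Host server
            HostName server
            User serveruser
            Port 22
            IdentityFile ~/.ssh/id_rsa
            # -W replaces the 'nc %h %p' trick (OpenSSH 5.4 and later)
            ProxyCommand ssh -q -W %h:%p hub

    With that in place, scp somefile server:/tmp/ would hop through hub transparently, but only because server then trusts my local key rather than hub's.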

    Read the article

  • Be your own cloud [closed]

    - by Jedi
    I have reasonably many electronic gadgets that can go on the LAN or Wi-Fi. But how do you share and/or synchronize all your files among them? Between my laptops and my desktop I use Dropbox, which is a nice way to share files among computers. But what if the HDD in your laptop is not large enough to carry music, pictures and films? Normally you would buy an external USB HDD and store them there, but then you cannot reach the files from other computers that are not connected to the USB device. Many would say I should use something like a cloud setup with a disk station. But my needs are as follows: mass storage that can be reached from all devices (laptops, desktops, iPhone, Android phone, Xbox or PlayStation); low power requirements and silent operation; reachable inside the home, and ideally from outside the home as well; cheap. I have looked around and found a wireless router which can share a USB device: the D-Link Wireless N HD Media Router. I thought it would be an interesting basis for a simple local cloud. D-Link uses a little program called SharePort Plus which mounts the USB device on your computer. Unfortunately, the transfer rate to the USB storage device is rather disappointing: 5.8 Mbps, even though the distance between the laptop and the router was 2 meters. The same happens when I use a cable from the computer to the router. Another thing is that SharePort Plus only allows one computer to be connected to the device at a time. That last point is something I could live with. I have searched the Internet for other solutions and found a video from Synology, but I'm not sure their solution is the right one. I think a disk station connected to my home LAN could be the right fit. What have you done in your home to store and share files among your computers and game consoles?

    Read the article
