Search Results

Search found 17856 results on 715 pages for 'setup py'.


  • How can I connect my Xbox to my Mac on my network

    - by codecowboy
    I have a wireless router/modem (Router 1) in my living room. This is connected to the internet (cable). Wireless is disabled as the router has a terrible wireless range. My Xbox is connected via Ethernet to Router 1. Another LAN output from Router 1 connects to a powerline adapter. Router 1 acts as a DHCP server on 192.168.0.x and has the IP 192.168.0.1. In a second room I have Router 2. This has the powerline feed from Router 1 going into the WAN socket. This router runs the Tomato firmware and acts as a wireless router for the rest of the house using the IP range 192.168.1.x. Router 2's IP is 192.168.1.1. My Mac is connected to Router 2 using a LAN cable and has the IP 192.168.0.133. Several mobile devices need wireless access. I want an Ethernet connection to my Mac, not wireless. I should be able to use software like Connect360 to share media from my Mac to the Xbox, but the Xbox does not see my Mac. I can ping 192.168.0.1 from the Mac. Is this possible using my current setup? If so, how?

    Read the article

  • Wireless connection drops when wired computer starts a game.

    - by Skadlig
    Starting this week I have had a strange problem on my network. Some background on my setup: Internet is provided by an ADSL modem. A D-Link DIR-600 router is hooked up to the ADSL modem. My computer is hooked up to the router using a Cat-5 cable. My wife's computer is hooked up using a wireless USB dongle, a TP-Link TL-WN821N. Both computers use Windows 7 64-bit Home Premium. Up until this week everything was normal; we could, for instance, play Dungeons & Dragons Online together without any network issues. Now every time I start DDO or any other network game, for instance L4D, the whole wireless network drops. I have confirmed that it's not just her computer by using a Samsung Galaxy Spica Android phone. Shutting down the game on my end restores the wireless connection automagically. My wife can start DDO without the net dropping, but if I plug a wireless network card into my computer and start up the game, the connection drops. So it seems like something my computer, and my computer only, does when starting a game makes the wireless connection write a sad note and kill itself, but for the life of me I can't figure out what that might be. I could hook her computer up using Cat-5, but I would prefer not to do that. Does anyone have any suggestions as to what the problem might be, what I can do to fix it, or what I should do to get more data regarding what is happening?

    Read the article

  • Virtual Machine Network Architecture, Isolating Public and Private Networks

    - by Mark
    I'm looking for some insight into best practices for network traffic isolation within a virtual environment, specifically under VMware ESXi. Currently I have (in testing) 1 hardware server running ESXi, but I expect to expand this to multiple pieces of hardware. The current setup is as follows: 1 pfSense VM. This VM accepts all outside (WAN/internet) traffic and performs firewall/port forwarding/NAT functionality. I have multiple public IP addresses sent to this VM that are used for access to individual servers (via per-incoming-IP port forwarding rules). This VM is attached to the private (virtual) network that all other VMs are on. It also manages a VPN link into the private network with some access restrictions. This isn't the perimeter firewall but rather the firewall for this virtual pool only. I have 3 VMs that communicate with each other, as well as have some public access requirements:
      - 1 LAMP server running an eCommerce site, public internet accessible
      - 1 accounting server, access via Windows Server 2008 RDS services for remote access by users
      - 1 inventory/warehouse management server, VPN to client terminals in warehouses
    These servers constantly talk with each other for data synchronization. Currently all the servers are on the same subnet/virtual network and connected to the internet through the pfSense VM. The pfSense firewall uses port forwarding and NAT to allow outside access to the servers for services and for server access to the internet. My main question is this: Is there a security benefit to adding a second virtual network adapter to each server and controlling traffic such that all server-to-server communication is on one separate virtual network, while any access to the outside world is routed through the other network adapter, through the firewall, and on to the internet? This is the type of architecture I would use if these were all physical servers, but I'm unsure if the networks being virtual changes the way I should approach locking down this system. Thank you for any thoughts or direction to any appropriate literature.

    Read the article

  • SQL Server Offsite Backups

    - by Eric Maibach
    We have about 1TB of SQL Server databases, and these databases generate about 200GB of data changes each day. Up to this point we have been doing weekly full backups, daily diff backups, and hourly transaction log backups. The full and diff backups are backed up to tape and taken offsite each day. We have been trying to move away from tapes, and our IT department purchased a Barracuda Backup device that backs up data and then sends it offsite using our internet connection. I have been trying to get this to work for our SQL Server backups, and have run into a number of problems. I normally like to just use SQL Server to perform backups instead of trying to use an agent, so that is what I tried first. However, the Barracuda device was not able to dedupe these files very well, so it ended up being too much data to try to send offsite and to archive. I then tried installing the Barracuda agent and using it to back up the SQL Server databases. However, the problem I am having there is that on some of the database servers I also have files that need to be backed up, and I cannot find a way to create separate backup schedules for the file backups and the SQL Server backups. Barracuda only does full or transaction log backups. So if I want to do hourly transaction log backups I end up doing a file system backup every hour (which is not good), or if I only schedule the backups to run once a night I either have to do a full backup every night, or only do a transaction backup once a day. None of these scenarios are good options. My question is, how is everyone else getting their large SQL Server database backups offsite? Are you just using tape, or have you found an offsite backup device that works well? Is anybody else using Barracuda to back up their SQL Server databases? If you do, then how do you have it set up?

    Read the article

  • What is the difference between disabling hibernation and idling time for a NAS?

    - by Gary M. Mugford
    I have two D-Link DNS-323 NAS boxes with two Seagate drives in each. The first one is about a year old, the second one about three months. The first two drives, on Monster, are each 1.5T while the last two, on Origami, are 2T. I have never been overly happy with the Monster drives but, outside of poor throughput on small files, they have been consistently available to all programs after I put a batch file into my startup to do a directory listing of each. I added the two new drives when I added the Origami box. But, watching the DOS box that comes up, I rarely see both listed before the box disappears. Other programs, backups, Belarc, even my file browsers, seem to have a dickens of a time seeing O: and P:. Finally, I decided to go into setup and turn off hibernation. Performance HAS been better since, and Belarc, for instance, now sees both drives. At the time of poking around, I noticed an Idle Time feature too. What is the difference between the two settings? And for added points, how much trouble am I in for turning off hibernation? The super bonus round ... anything ELSE I should have done? Thanks in advance, GM

    Read the article

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a Squid proxy/caching server, and from there into my local wireless router. On the WAN side of the proxy server, I have eth0 with address 208.78.∗∗∗.∗∗∗. On the LAN side of the proxy server, I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the Squid server itself is also routed through the proxy/cache, and this is on purpose:
        # iptables forwarding
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE
        # iptables for squid transparent proxy
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried doing:
        iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
        iptables -A INPUT -i eth0 -j REJECT
    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
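
    A minimal sketch of the stateful INPUT-chain approach usually used for this kind of "allow replies, allow the LAN, reject new outside connections" policy; the interface names and the 192.168.2.0/24 subnet are taken from the question, the SSH rule is an optional assumption, and the rules assume an otherwise empty INPUT chain:
        # Allow loopback and replies to connections the server itself initiated
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow anything arriving on the LAN interface (note: -i eth1, not eth0,
        # since LAN traffic never enters the box on the WAN interface)
        iptables -A INPUT -i eth1 -s 192.168.2.0/24 -j ACCEPT
        # Optionally keep SSH reachable from outside
        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -j ACCEPT
        # Reject everything else that arrives on the WAN interface
        iptables -A INPUT -i eth0 -j REJECT
    One plausible reason the two rules in the question block everything is that they match -i eth0 with a LAN source address (which never matches), while replies to the connections Squid itself opens arrive on eth0 and hit the REJECT because there is no ESTABLISHED,RELATED accept in the INPUT chain.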

    Read the article

  • SVN Authentication with LDAP and Active Directory

    - by Alex Holsgrove
    I am having a few problems getting SVN authentication to work with LDAP / Active Directory. My SVN installation works fine, but after enabling LDAP in my apache vhost, I just can't get my users to authenticate. I can use a selection of LDAP browsers to successfully connect to Active Directory, but just can't seem to get this to work.
    SVN is setup in /var/local/svn
    Server is svn.domain.local
    For testing, my repository is /var/local/svn/test
    My vhost file is as follows:
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerAlias svn.domain.local
            ServerName svn.domain.local
            DocumentRoot /var/www/svn/
            <Location /test>
                DAV svn
                #SVNListParentPath On
                SVNPath /var/local/svn/test
                AuthzSVNAccessFile /var/local/svn/svnaccess
                AuthzLDAPAuthoritative off
                AuthType Basic
                AuthName "SVN Server"
                AuthBasicProvider ldap
                AuthLDAPBindDN "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local"
                AuthLDAPBindPassword "admin password"
                AuthLDAPURL "ldap://192.168.1.6:389/OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local?sAMAccountName?sub?(objectClass=*)"
                Require valid-user
            </Location>
            CustomLog /var/log/apache2/svn/access.log combined
            ErrorLog /var/log/apache2/svn/error.log
        </VirtualHost>
    In my error.log, I don't seem to get any bind errors (should I be looking elsewhere?), but just the following:
        [Thu Jun 21 09:51:38 2012] [error] [client 192.168.1.142] user alex: authentication failure for "/test/": Password Mismatch, referer: http://svn.domain.local/test/
    At the end of "AuthLDAPURL", I have seen people using TLS and NONE but neither seem to help in my case. I have the ldap modules loaded and have checked as much as I know, so any help would be most welcome. Thanks
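
    A hedged diagnostic sketch (not from the original thread): running the same bind DN, search base and filter through ldapsearch (OpenLDAP client tools) from the SVN host usually shows quickly whether the problem is the bind account, the search base, or the user's password. The host, DN and OU values below are copied from the vhost above; the username "alex" comes from the error log.
        # Bind as the service account and look the user up the same way mod_authnz_ldap would
        ldapsearch -x -H ldap://192.168.1.6:389 \
            -D "CN=adminuser,OU=SBSAdmin Users,OU=Users,OU=MyBusiness,DC=domain,DC=local" -W \
            -b "OU=SBSUsers,OU=Users,OU=MyBusiness,DC=domain,DC=local" \
            "(sAMAccountName=alex)" dn sAMAccountName

        # If that returns a DN, try a simple bind as that user (paste the DN from above)
        # to rule out a plain password mismatch
        ldapsearch -x -H ldap://192.168.1.6:389 -D "PASTE_USER_DN_HERE" -W -b "" -s base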

    Read the article

  • MySQL Config on Large Machine

    - by Jonathon
    We have a Windows 2003 Enterprise Edition server (64bit) running only MySQL 5.1.45 64-bit. It has 16G RAM and 10T of hard-drive space in RAID 10. We are having horrible performance from mysqld (85-100% CPU utilization). We were running a smaller machine with better performance, so I am assuming our my.ini file is not correct for our current machine. The my.ini file is as follows:
        [client]
        port=3306
        [mysql]
        default-character-set=latin1
        [mysqld]
        port=3306
        basedir="D:/MySQL/"
        datadir="D:/MySQL/data"
        default-character-set=latin1
        default-storage-engine=MYISAM
        sql-mode=""
        skip-innodb
        skip-locking
        max_allowed_packet = 1M
        max_connections=800
        myisam_max_sort_file_size=5G
        myisam_sort_buffer_size=500M
        table_open_cache = 512
        table_cache=8000
        tmp_table_size=30M
        query_cache_size=50M
        thread_cache_size=128
        key_buffer_size=3072M
        read_buffer_size=2M
        read_rnd_buffer_size=16M
        sort_buffer_size=2M
        #replication settings (this is the master)
        log-bin=log
        server-id = 1
    Does anyone see anything wrong with this setup? For a machine with this much RAM, why in the world would mysqld eat up so much CPU? I know we can optimize some queries, etc., but it did run okay on a smaller machine, so I am pretty sure it is the config. Thanks in advance for any help.
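
    A hedged diagnostic sketch (not from the original thread): with an all-MyISAM configuration like this, a common first step is to see whether the CPU is going into key-cache misses, on-disk temporary tables, full joins or table-lock contention. The statements below are standard MySQL status queries; only the login details are assumptions.
        # Counters most relevant to an all-MyISAM server (adjust credentials to taste)
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Key_read%';
                             SHOW GLOBAL STATUS LIKE 'Created_tmp%';
                             SHOW GLOBAL STATUS LIKE 'Select_full_join';
                             SHOW GLOBAL STATUS LIKE 'Table_locks_waited';"

        # See what the busy threads are actually doing while the CPU is pegged
        mysql -u root -p -e "SHOW FULL PROCESSLIST;"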

    Read the article

  • Why is my second monitor not working?

    - by StampedeXV
    Since I have my new computer, I have a very weird problem. Facts:
    New computer:
      - Motherboard: ASRock Z77 Pro 3
      - Graphics-card: Asus 1GB D5 X EN GTX560 DCII OC/2DI R
      - CPU: Intel i5-3570
      - Windows 7 64bit
      - 500W beQuiet special edition (92% efficiency)
      - 8GB 1333MHz DDR3 Corsair RAM (CL9)
      - Scythe Mugen 2
      - 2 magnetic HDDs + 1 SSD
      - 1 DVD-R
    Old computer:
      - Motherboard: Asus P55 something
      - Graphics-card: Asus 1GB D5 X EN GTX560 DCII OC/2DI R
      - CPU: Intel i7-870
      - Windows 7 64bit
      - 550W Corsair
      - 8GB 1333MHz DDR3 Corsair RAM (CL9)
      - Scythe Mugen 3
      - 2 magnetic HDDs + 1 SSD
      - 1 DVD-R
    On the old computer it worked fine with two monitors. Moving to the new one (I took the same graphics-card) it only works with one. The weird thing I mentioned is: no matter which one. But if I put both there, only one is available. There is no reaction at the start (where normally (at least if I remember correctly) the monitor shortly went from "standby" to "on"). Windows does not recognize a second monitor in the Device Manager. I have the latest drivers for the motherboard and graphics-card. I have the latest BIOS drivers. I am out of ideas. Edit: completed computer setup

    Read the article

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.
    Authoritative servers:
      - DNS1 - Windows 2003
      - DNS2 - Old Red Hat ----- replacing with a newer version
      - DNS3 - Windows 2008 (I installed)
    Caching and recursive resolver servers:
      - Server1 - Windows 2003
      - Server2 - CentOS 5.2 (I installed)
      - Server3 - CentOS 5.3 (I installed)
    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers, but not a Linux secondary authoritative server. I have a perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to be caching, but the only instructions I see are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track using these instructions, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it from being an issue.
        ps -aux | grep dns
        avahi    3493  0.0  0.2  2600 1272 ?      Ss   Apr24  0:05 avahi-daemon: running [newdns2.local]
        root     5254  0.0  0.1  3920  680 pts/0  R+   09:56  0:00 grep dns
        root     6451  0.0  0.0  1528  308 ?      S    Apr29  0:00 supervise tinydns
        dnslog   6454  0.0  0.0  1540  308 ?      S    Apr29  0:00 multilog t ./main
        tinydns  9269  0.0  0.0  1652  308 ?      S    Apr29  0:00 /usr/local/bin/tinydns
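
    A hedged sketch of the checks usually run against a new tinydns secondary (the paths, IPs and zone name are placeholders, and the linked instructions may differ): tinydns only answers on the single IP set in its env/IP file and only speaks UDP, so both are worth confirming before suspecting the zone transfer itself.
        # 1. Which IP is tinydns bound to? (service directory path assumed)
        cat /etc/tinydns/env/IP

        # 2. Query that IP directly, without recursion, for a zone it should be authoritative for
        dig @192.168.10.12 example.edu SOA +norecurse        # placeholder IP and zone

        # 3. Pull the zone from the master the djbdns way (axfr-get over tcpclient),
        #    then merge the result into the data file and rebuild data.cdb
        tcpclient 192.168.10.11 53 axfr-get example.edu data.example data.example.tmp   # placeholder master IP/zone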

    Read the article

  • Can I take my ReadyNAS drive in Raid1 and plug it straight into new different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (in a RAID 1 mirror) and plug it into another enclosure and have it work off the bat, but I'd like to make sure... Any ideas? Edit: My current setup is a Netgear ReadyNAS in (hardware) RAID 1. I'm hoping to replace this with a home theatre type PC (possibly running Ubuntu), and would like to migrate my data without having to do a bulk transfer over my network between the 2 machines. Can anyone confirm the case for the Netgear ReadyNAS? Edit: OK, after further reading it seems that the ReadyNAS Duo formats my drive as ext3 in 16k blocks. There are instructions for mounting a drive into a Linux box here: Mounting Sparc-based ReadyNAS Drives in x86-based Linux There is also talk about a Linux image here: ReadyNAS Data Recovery - VMware recovery tool I'm not sure whether this means the ReadyNAS actually implements software RAID under the hood, or what? So it appears like it IS do-able, but do any of you Linux gurus know whether this is viable and whether the fact that they are in RAID 1 affects matters?
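
    A hedged way to answer the "software RAID under the hood?" part without risking the data (device names are assumptions; the disk would be attached to a spare Linux box and only read from): Linux md RAID members carry a superblock that mdadm can report on, and the partition table plus filesystem signatures show how the volume is laid out.
        # List the partitions the NAS created on the disk (assuming it shows up as /dev/sdb)
        fdisk -l /dev/sdb

        # If these partitions carry Linux md (software RAID) superblocks, mdadm will say so
        mdadm --examine /dev/sdb1 /dev/sdb2 /dev/sdb3

        # Filesystem signature of the data partition (the question says ext3 with 16k blocks)
        file -s /dev/sdb3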

    Read the article

  • Puppet - Is it possible to use a global var to pull in a template with the same name?

    - by Mike Purcell
    I'm new to puppet. As such I am trying to work my way around the best way to set up my manifests so that they make sense. Following the DRY (don't repeat yourself) principle, I am trying to load common directives in one template, then load in environment-specific directives from a file matching the environment. Basically like this:
        # nodes.pp
        node base_dev {
            $service_env = 'dev'
        }
        node 'service1.ownij.lan' inherits base_dev {
            include global_env_specific
        }
        class global_env_specific {
            include shell::bash
        }

        # modules/shell/bash.pp
        class shell::bash inherits shell {
            notify{"Service env: ${service_env}": }
            file { '/etc/profile.d/custom_test.sh':
                content => template('_global/prefix.erb', 'shell/bash/global.erb', 'shell/bash/$service_env.erb'),
                mode    => 644
            }
        }
    But every time I run puppet agent --test, puppet complains that it can't find the shell/bash/$service_env.erb file, but I double checked that it exists. I know the var is accessible due to the notify statement outputting the expected value, so I suspect I am doing something which is not allowed. I know I could have a single template.erb and pass variables to the template, which would work in this case because the custom.sh file is small and there are not many changes across environments, but for more complex configs (httpd, solr, etc) I'd prefer to access environment-specific files. I am also aware that I can specify environment-specific module paths, but I'd prefer to just handle this behavior at the template level, instead of having several, closely named directories. Thanks.

    Read the article

  • When connecting to PPTP on CentOS via a Windows 7 VPN, I get error 2147943625

    - by Charlie Dyason
    The "remote computer refused the network connection" phrase has been my arch enemy for the past week now. I recently "bought" a VPS server. I gave up trying to configure it with OpenVPN, all the issues were making me lose my mind, so I tried the easier way with PPTP, but I figure both are leading to a dead end... I followed this post (many others too, but this is the unlucky one), http://blog.secaserver.com/2011/10/install-vpn-pptp-server-centos-6/ and it all goes well with the setup, however I run into this error when connecting to the VPN in Windows 7. Here is a pic of the error: [image] So I do not know what I have done wrong... When connecting, I check netstat -apn | grep -w 1723.
    Before connecting:
        netstat -apn | grep -w 1723
        tcp   0   0 0.0.0.0:1723        0.0.0.0:*            LISTEN      1137/pptpd
    After the error came I tried again:
        netstat -apn | grep -w 1723
        tcp   0   0 0.0.0.0:1723        0.0.0.0:*            LISTEN      1137/pptpd
        tcp   0   0 41.185.26.238:1723  41.13.212.47:49607   TIME_WAIT   -
    iptables:
        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [63:8868]
        -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i eth0 -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
        -A INPUT -i eth0 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A INPUT -i eth0 -p gre -j ACCEPT
        -A FORWARD -i ppp+ -o eth0 -j ACCEPT
        -A FORWARD -i eth0 -o ppp+ -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013
        # Generated by iptables-save v1.4.7 on Fri Nov 1 18:14:53 2013
        *nat
        :PREROUTING ACCEPT [96:12732]
        :POSTROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [31:2179]
        -A POSTROUTING -o eth0 -j MASQUERADE
        COMMIT
        # Completed on Fri Nov 1 18:14:53 2013
    options.pptpd (the only change was require-mppe):
        # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
        # {{{
        refuse-pap
        refuse-chap
        refuse-mschap
        # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
        # Challenge Handshake Authentication Protocol, Version 2] authentication.
        require-mschap-v2
        require-mppe
        # Require MPPE 128-bit encryption
        # (note that MPPE requires the use of MSCHAP-V2 during authentication)
        require-mppe-128
        # }}}
    I checked the iptables, everything is normal, all INPUTs etc. are before the rejects, and I also checked the username and password in the chap-secrets file. I am really puzzled...
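
    A hedged diagnostic sketch (not from the linked guide): with PPTP, both the TCP 1723 control connection and the GRE (IP protocol 47) data stream have to reach pptpd, and this style of Windows failure is often GRE being dropped somewhere along the path. Watching the WAN interface while the client connects shows which half goes missing; eth0 is taken from the iptables rules above.
        # Watch the PPTP control channel and GRE on the public interface while the
        # Windows client attempts to connect (Ctrl-C to stop)
        tcpdump -ni eth0 'tcp port 1723 or ip proto 47'

        # Confirm GRE / PPTP helper modules are loaded on the server
        lsmod | grep -E 'pptp|gre'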

    Read the article

  • Guest can't access host windows network share

    - by Asteroza
    Hi folks, I've recently run into a strange problem after upgrading to VMware Player 3. Certain virtual machines (currently an XP and a Vista VM) seem to have lost the ability to access the host (XP) network shared folders (SMB). Both VMs use bridged networking, firewall is up. Host firewall is up. Host and guests use DHCP. All OS are workgroup connected. The Vista VM I am not completely sure about, but the XP VM did have access to the host's network shared folders after the player upgrade. Then today it wouldn't work, network path can't be found. Now here's the weird part. The host's network shared folders can be accessed properly by other PCs on the network (and as far as I know, no settings have been changed). The host is pingable from the guests, and name resolution works. The guests can access network shares on other PCs in the network, and access the internet. My Network Places shows the host PC, but double clicking on it takes a long time before it finally times out with an error. Doing a Wireshark packet capture, the guest is sending out the protocol negotiation, and the host is sending a response, but after that the guest behaves like it didn't receive anything and keeps doing TCP retransmissions. Anybody have any idea what could be wrong? Yes, I know I can drag and drop files or set up the special VMware shared folders, but I want to access the host just like any other network accessible shared folder. It just seems really odd when any other computer works, just not between the guest and host.

    Read the article

  • Can Current Backflow from Powered Hub's Adapter & cause PC Damage?

    - by SuperUserMan
    Keeping this short: can current flow from a powered USB hub's power adapter (sitting 10 meters away) back into the computer via the USB port and damage computer components like the motherboard? What should my concerns be? Using a 2 A 5 V power adapter to power a 10 m long active repeater USB extension cable with a 4-port hub, plugged into the PC's front port, causes the PC chassis fan to keep running (though slower than regular speed), the front chassis HDD and power LEDs to turn on (though a bit dim), and maybe other things I can't detect/see at chip level on the motherboard. All this even after the PC is shut down (a bit scary). More detail (in case you still want to read): To run 4 high-power (450 mA) WiFi adapters far away from the PC, I bought an active repeater USB extension cable with a 4-port hub and a power port at the far end: http://www.ebay.com/itm/33FT-USB-2-0-Male-to-Female-Extension-Cable-Hub-Splitter-Adapter-with-4-USB-Port-/390846115254 I then added a locally bought 2 A 240 V AC to 5 V DC power adapter and plugged it into the USB hub, which is part of, and sits at, the far end of the 10-meter active repeater USB extension cable. All 4 WiFi adapters run fine (or appear to) using this setup, but the running chassis fan and dimly lit power and HDD LEDs even when the PC is switched off are a bit scary, and surely mean 5 V and some current are flowing all the way through that 10-meter extension cable into my USB port and powering things. Can this cause damage, and what should my concerns be? Of course I can't switch off the power adapter (lying 10 meters away from the PC) every time I switch off my PC to prevent this.

    Read the article

  • SSH Private Key Not Working in Some Directories

    - by uesp
    I have a strange issue where SSH won't properly connect with a private key if the key file is in certain directories. I've set up the keys on a set of servers, and the following command
        ssh -i /root/privatekey [email protected]
    works fine and I log in to the given host without getting prompted for a password, but this command:
        ssh -i /etc/keyfiles/privatekey [email protected]
    gives me a password prompt. I've narrowed it down that this behavior occurs in only some sub-directories of /etc/. For example /etc/httpd1/ gives me a password prompt but /etc/httpd/ does not. What I've checked so far:
      - All private key files used are identical (copied from the original file).
      - The private key file and directories used have identical permissions.
      - No relevant error messages in the server/client logs.
      - No interesting debug messages from ssh -v (it just seems to skip the key file).
      - It happens when connecting to different hosts.
    After more testing it is not the actual directory name. For example:
        mkdir /etc/test
        cp /root/privatekey /etc/test
        ssh -i /etc/test/privatekey [email protected]   # Results in password prompt
        cp /root/privatekey /etc/httpd                 # Existing directory
        ls -ald test httpd
        # drwxr-xr-x 4 root root 4096 Mar 5 18:25 httpd
        # drwxr-xr-x 2 root root 4096 Mar 5 18:43 test
        ssh -i /etc/httpd/privatekey [email protected]  # Results in *no* prompt
        rm -r test
        cp -R /etc/httpd /etc/test
        ssh -i /etc/test/privatekey [email protected]   # Results in *no* prompt
    I'm sure it's just something simple I've overlooked, but I'm at a loss.
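
    A hedged diagnostic sketch (an assumption about the cause, not a confirmed diagnosis): on distributions with SELinux enforcing, the labels files pick up can differ between a freshly created directory and one copied from an already-labelled tree, and that can change what the ssh client is allowed to read. Comparing contexts and the verbose key-offering output costs little; user@host below is a placeholder.
        # Compare SELinux labels on the two copies of the key (differences are suspicious)
        ls -lZ /etc/httpd/privatekey /etc/test/privatekey

        # Check for recent denials, if auditd is running
        ausearch -m avc -ts recent 2>/dev/null | tail

        # See whether ssh even tries the key: look for "Offering" / "Trying private key" lines
        ssh -vvv -i /etc/test/privatekey user@host 2>&1 | grep -i -E 'identity|offering|trying'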

    Read the article

  • How to make a Linux software RAID1 detect disc corruption?

    - by Paul
    This is one of the nightmare days: A virtualized server running on a Linux SW-RAID1 runs a VM that exhibits random segfaults in seemingly random code chunks. While debugging I find that a file gives different md5sums on each and every run. Digging deeper I find this: the raw disc partitions that make up the RAID1 mirror contain 2 bit-differences and ca. 9 sectors are completely empty on one disc and filled with data on the other disc. Obviously Linux gives back a sector from a nondeterministically chosen disc of the mirror set. So sometimes the same sector is returned OK, sometimes the corrupted one is given back. The docs say: "RAID cannot and is not supposed to guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted." Thanks. That will help me sleep. :-/ Is there a way to have Linux at least detect this corruption by using sector checksumming or something like that? Would this be detected in a RAID5 setup? Is this the moment I wish I used ZFS or btrfs (once it becomes usable without uber-admin capabilities)?
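
    A hedged sketch of the built-in consistency check that Linux md does offer: it reads and compares both halves of the mirror and reports mismatches, though it cannot tell which side holds the correct data. The array name md0 is an assumption; run as root.
        # Ask md to read and compare both mirror members (runs in the background)
        echo check > /sys/block/md0/md/sync_action

        # Watch progress
        cat /proc/mdstat

        # Mismatched sector count from the last check (0 means the mirror is consistent)
        cat /sys/block/md0/md/mismatch_cnt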

    Read the article

  • Permission problem with Git (over SSH) on FreeBSD

    - by vpetersson
    We're having a permission problem with Git on FreeBSD. The setup is fairly straightforward. We have a few different repos on the same server. For simplicity, let's say they reside in /git/repo1 and /git/repo2. Each repo is owned by the user 'git' and a self-titled group (eg. repo1). The repo is configured with g+rwX access. Every user who commits to the repository is also a member of the group for the repo (eg. repo1). The Git repositories all have 'sharedRepository = group' set. So far so good: all users can check out the code from the repositories, and the first user can commit without any problem. However, when the next user tries to commit to the repositories, he will receive a permission error. We've been banging our heads against this issue for some time now, and the only way we've managed to resolve it is by running the following script between commits (which is obviously very inconvenient):
        find /git/repo1 -type d -exec chmod g+s {} \;
        chmod -R g+rwX /git/repo1
        chown -R git:repo1 /git/repo1/
        cd /git/repo1
        git gc
    Anyone got a clue as to where the problem lies?
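
    A hedged sketch of the two settings that usually interact with sharedRepository = group (an assumption based on the symptoms, not a confirmed diagnosis): new objects are created with each committing user's umask unless the repository itself is marked shared, and new directories only inherit the repo group if the setgid bit is already set on every directory.
        # Make sure the repository itself (not just clones) is marked shared
        cd /git/repo1
        git config core.sharedRepository group

        # Set the setgid bit once so newly created directories inherit the 'repo1' group
        find /git/repo1 -type d -exec chmod g+s {} \;

        # Each committing user's umask must allow group write, e.g. in their shell profile:
        umask 002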

    Read the article

  • Outlook won't re-connect to exchange after network is re-connected

    - by stan503
    I have a setup at my desk where I connect my computer to an RJ45 switch that switches between two networks. One network is the corporate network, which is maintained by my company's IT, and the other is my own private network where I do testing (the two networks have to be separated). The corporate network hosts the Exchange server where I get e-mail. When I switch from the private network to the corporate network, I expect Outlook to re-connect to the Exchange server. However, I have found that sometimes when I come back, Outlook takes an extremely long time to re-connect. Send/Receive will give me back the error 'The server is not available' (0x8004011D). It will sit there for 10 minutes to a few hours before it finally re-connects. The only other option is to reboot my computer, which is a huge pain for me since I run multiple VMs on it. This usually happens when I'm connected to the private network for a significant amount of time, so I'm thinking it's because Outlook has cached the network status. Is there a way to force Outlook to do a 'hard' re-connect to the Exchange server? I'm using Windows XP SP 3 with Outlook 2007.

    Read the article

  • Setting up a network where packets are traced

    - by Marcus
    My situation is the following: I have an internet connection which is shared between people. More or less obviously, people are using it to download illegal stuff. Since I'm the owner of the connection, I want to avoid being sued. I don't want to prevent the people from doing the things they want, but I want to be legally safe. Now, I have relatively little competence in network administration, so I was wondering:
      - Is it possible to set up a network where the source and destination of the packets are logged? I would use this to prove, in case of a lawsuit, that the traffic was coming from a given machine.
      - If the idea is feasible, is there any wireless router on which I can install Linux, where I can install the packet sniffer?
      - How much space could the logs take (containing only the timestamp/source/destination), per GB of traffic? A very rough estimate would be very helpful - a back-of-the-envelope attempt is sketched below.
      - If a machine on my network is sending BitTorrent packets to a certain IP, would this log be able to reflect the time, source IP and destination IP? I assume that obviously the torrent data would be encrypted and un-decryptable.
    Am I missing something? Is there a better strategy? Any pointer to documentation would be helpful as well - in that case, I would use it as a starting point.
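
    A hedged, back-of-the-envelope sketch for the log-size question (the numbers are illustrative assumptions): at an average packet size of roughly 1000-1500 bytes, 1 GB of traffic is on the order of 700,000-1,000,000 packets; at roughly 60-80 bytes per logged line (timestamp, source, destination) that is very roughly 40-80 MB of raw text per GB of traffic if every packet is logged, and far less if only new connections are logged. On a Linux router the crude way to get such a log is the iptables LOG target (ulogd or NetFlow export would be the more scalable options):
        # Log only the first packet of each new forwarded connection (much smaller logs
        # than per-packet logging); lines end up in the kernel log / syslog
        iptables -A FORWARD -m state --state NEW -j LOG --log-prefix "CONN: "

        # Rough size check after a while (log file path varies by distribution)
        grep -c 'CONN: ' /var/log/syslog    # number of logged connections
        du -h /var/log/syslog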

    Read the article

  • Substituting a line through PHP in SSH

    - by Asad Moeen
    I've already set up SSH usage in PHP and most things work. Now what I want to do is edit a line in a file and write it back. It works directly on the server, but I can't seem to get it working from PHP. Here is what I'm trying:
        $new_line1 = 'Line $I want to add - The $I has to go into the file as it is';
        $new_line2 = 'Ending $text of the line - $text again goes into file';
        $query = "Addition to line";
        $exec1 = 'cd /root; perl -pe "s/.*/';
        $exec2 = '/ if $. == 37" Edit.sh > Edited.sh';
        $new = "$exec1$new_line1$query$new_line2$exec2";
        $edit = "cd /root/mp; cp Edited.sh Edit.sh";
        echo $ssh->exec($new);
        echo $ssh->exec($edit);
    Now the thing is that running the perl command directly over SSH works without any errors, but when I run this through PHP I get the error: Substitution replacement not terminated at -e line 1. I want to know why it would work one way and not the other.
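
    A hedged sketch for comparison (an assumption about the failure mode, not a confirmed diagnosis): the command that works when typed interactively relies on the shell's own quoting, and once PHP string interpolation and $ssh->exec() are layered on top, characters such as $, / or a stray newline inside the replacement text can break the s/.../.../ expression before perl ever sees it. Echoing the exact command string PHP builds and pasting it into a shell by hand usually narrows it down.
        # The form known to work when run directly on the server:
        # single quotes keep the shell away from $. and the replacement text
        cd /root
        perl -pe 's/.*/Line to write at line 37/ if $. == 37' Edit.sh > Edited.sh

        # If the command string printed by PHP fails when pasted into a shell too,
        # the quoting is being mangled before SSH is even involved.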

    Read the article

  • Unable to browse to Apache service, service is running

    - by Jeff
    Summary: I have a very peculiar problem. I am not able to open the "It Works!" page after installing a fresh server with apache. I am able to ssh to the box (from outside the network). Apache seems to be running on my Centos6.4x86_64 box just fine. Nothing useful in /var/logs/httpd/*. What am I missing?
    The setup: I am outside the network right now. The "server" is a VM on my home computer running bridged mode.
      - public ip: A.B.C.D
      - Host: 192.168.1.5
      - VM: 192.168.1.8
    I have a verizon fios router that is forwarding ports 22, 80, and 8888 to the VM. I am able to ssh over port 22, but I am not able to browse to the public URL over port 80. So A.B.C.D:22 is working, but http://A.B.C.D:80 is not.
    What I've tried: nmap to see if it is listening:
        nmap -sT -O localhost
        Starting Nmap 5.51 ( http://nmap.org ) at 2013-10-25 11:10 EDT
        Nmap scan report for localhost (127.0.0.1)
        Host is up (0.000040s latency).
        Other addresses for localhost (not scanned): 127.0.0.1
        Not shown: 996 closed ports
        PORT     STATE SERVICE
        22/tcp   open  ssh
        25/tcp   open  smtp
        80/tcp   open  http
        3306/tcp open  mysql
    I tried going to it locally (lynx) and it does work. So, is the problem in my ports?
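
    A hedged sketch of the usual next checks (tool and path assumptions for a stock CentOS 6 box): the nmap run above only proves Apache answers on localhost, so it is worth confirming that httpd listens on the LAN address, that the box's own firewall allows port 80, and that the router's port-forward actually delivers packets.
        # Is httpd bound to all interfaces (0.0.0.0 / :::) or only 127.0.0.1?
        netstat -tlnp | grep ':80'

        # Does the default CentOS 6 iptables policy allow inbound 80?
        # (a REJECT rule above the ACCEPTs is the usual culprit)
        iptables -L INPUT -n --line-numbers

        # From another machine on the LAN (e.g. the host at 192.168.1.5):
        curl -I http://192.168.1.8/

        # Watch whether forwarded packets from outside ever arrive on the VM
        tcpdump -ni eth0 tcp port 80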

    Read the article

  • Extremely high mysqld CPU usage with no active queries

    - by RadarNyan
    I have a VPS running Ubuntu 12.04 LTS with a LEMP stack, followed the guide from the Linode Library (since I'm using a Linode) to set it up, and everything worked fine until now. I don't know what's wrong, but my CPU usage has been going up since a week ago. Today things got really bad - I saw 74% CPU usage, so I went to check and found mysqld taking too much CPU (somewhere around 30% ~ 80%). So I did some Google searching, tried disabling InnoDB, restarting mysql, resetting ntp / the system clock (isn't this bug supposed to have happened more than a year ago?!) and rebooting my VPS; nothing helped. Even with the mysql processlist empty, I still get very high mysqld CPU usage. I don't know what I missed and have totally no idea; any advice would be appreciated. Thanks in advance.
    Update: I got these from running "strace mysqld":
        write(2, "InnoDB: Unable to lock ./ibdata1"..., 44) = 44
        write(2, "InnoDB: Check that you do not al"..., 115) = 115
        select(0, NULL, NULL, NULL, {1, 0}) = 0 (Timeout)
        fcntl64(3, F_SETLK64, {type=F_WRLCK, whence=SEEK_SET, start=0, len=0}, 0xbfa496f8) = -1 EAGAIN (Resource temporarily unavailable)
    hum... I did try to disable InnoDB and it didn't fix this problem. Any idea?
    Update2:
        # ps -e | grep mysqld
        13099 ?        00:00:20 mysqld
    then using "strace -p 13099", the following lines appear repeatedly:
        fcntl64(12, F_GETFL) = 0x2 (flags O_RDWR)
        fcntl64(12, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        accept(12, {sa_family=AF_FILE, NULL}, [2]) = 14
        fcntl64(12, F_SETFL, O_RDWR) = 0
        getsockname(14, {sa_family=AF_FILE, path="/var/run/mysqld/mysqld.sock"}, [30]) = 0
        fcntl64(14, F_SETFL, O_RDONLY) = 0
        fcntl64(14, F_GETFL) = 0x2 (flags O_RDWR)
        setsockopt(14, SOL_SOCKET, SO_RCVTIMEO, "\36\0\0\0\0\0\0\0", 8) = 0
        setsockopt(14, SOL_SOCKET, SO_SNDTIMEO, "<\0\0\0\0\0\0\0", 8) = 0
        fcntl64(14, F_SETFL, O_RDWR|O_NONBLOCK) = 0
        setsockopt(14, SOL_IP, IP_TOS, [8], 4) = -1 EOPNOTSUPP (Operation not supported)
        futex(0xb786a584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0xb786a580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
        futex(0xb7869998, FUTEX_WAKE_PRIVATE, 1) = 1
        poll([{fd=10, events=POLLIN}, {fd=12, events=POLLIN}], 2, -1) = 1 ([{fd=12, revents=POLLIN}])
    er... now I totally don't get it x_x help
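
    A hedged diagnostic sketch (not from the original thread): the "InnoDB: Unable to lock ./ibdata1" line in the first strace often means a second mysqld was started by hand while the packaged instance already held the lock, so it is worth confirming that only one instance is running and seeing which process owns the datadir. The datadir path below is the Ubuntu default and is an assumption.
        # How many mysqld processes are actually running?
        ps aux | grep '[m]ysqld'

        # Which process holds the InnoDB system tablespace open?
        lsof /var/lib/mysql/ibdata1

        # While the CPU is pegged, see what the server thinks it is doing
        mysqladmin -u root -p processlist extended-status | head -n 40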

    Read the article

  • Standalone WLST for both WebLogic 8.1 and 9.2?

    - by imiric
    Hi, I'm writing a simple script to facilitate changing JDBC connection URLs in several WL environments, among these both v8.1 and v9.2. I want to create a standalone script, outside of any WL installation, just including wlst.jar/jython.jar/weblogic.jar, that will work both on WL 8.1 and 9.2 (obviously by referencing different MBeans). Now, this works OK for WL 8.1. I copy weblogic.jar from the server, and have managed to get ahold of both wlst.jar and jython.jar (wasn't easy, Oracle doesn't host them anymore). Also I need to make sure to locally run under the same JRE as the server (WL8.1 runs on Java 1.4.2). But if I try to connect to WL 9.2 from this setup, I get a NullPointerException when trying to access any MBean (probably because I'm running on JRE 1.4.2 and WL 9.2 uses 1.5.0). Also, I am unable to create a standalone environment for WL 9.2. If I copy weblogic.jar from 9.2 and run WLST like so: java -cp "wlst.jar:jython.jar:weblogic-92.jar" weblogic.WLST I get a java.lang.NoClassDefFoundError: weblogic/management/configuration/RepositoryMBean error. I can't find this class in weblogic92/server/lib, but it IS inside weblogic.jar from WL 8.1. So I'm really losing my patience here... Is there any way to create a standalone WLST client that can connect to any version of WebLogic (8.1 & 9.2 in the meantime)? I really wouldn't want to have to ssh into the WL environment to run my WLST script... Any ideas/suggestions are welcome. Thanks, Ivan

    Read the article

  • virtual memory commited

    - by vinu
    After a server bounce happens, and after a period of around 40-45 days, we receive continuous "Committed Virtual Memory" alerts which indicate swap space usage in the magnitude of 4GB. This also causes the application to perform very slowly and experience a number of stalled transactions.
    Server setup: 4 Tomcat servers (version 7.0.22) that are load balanced (not clustered) by 2 Apache servers. The Apache servers themselves supply static content and routing to these 4 Tomcat servers.
    Java runtime version:
        java version "1.6.0_30"
        Java(TM) SE Runtime Environment (build 1.6.0_30-b12)
        Java HotSpot(TM) 64-Bit Server VM (build 20.5-b03, mixed mode)
    Memory startup parameters:
        MEMORY_OPTIONS="-Xms1024m -Xmx1024m -Xss192k -XX:MaxGCPauseMillis=500 -XX:+HeapDumpOnOutOfMemoryError -XX:MaxPermSize=256m -XX:+CMSClassUnloadingEnabled"
    Monitoring: Wily monitoring is available on all the production servers; it monitors key server parameters and sends out configurable alert emails based on predefined settings.
    Note: Each of the servers also has two other separate Tomcat domains that run different applications.
    Investigated areas:
      - There is no heap memory leak and the GC is running fine, without any issues, over any period of time.
      - The current busy thread count corresponds directly to the application usage - weekends and nights have a lower number of threads compared to business hours.
      - ThreadLocal uses a WeakReference internally. If the ThreadLocal is not strongly referenced, it will be garbage-collected, even though various threads have values stored via that ThreadLocal. Additionally, ThreadLocal values are actually stored in the Thread; if a thread dies, all of the values associated with that thread through a ThreadLocal are collected. If you have a ThreadLocal as a final class member, that's a strong reference, and it cannot be collected until the class is unloaded. But this is how any class member works, and isn't considered a memory leak. The cited problem only comes into play when the value stored in a ThreadLocal strongly references that ThreadLocal - sort of a circular reference. In this case, the value (a SimpleDateFormat) has no backwards reference to the ThreadLocal. There's no memory leak in this code.
    Can anyone please let me know what could be the cause of this and what should be monitored?

    Read the article
