Search Results

Search found 12077 results on 484 pages for 'node js'.


  • Can't connect to public WiFi with MacBook Pro at coffee shops and libraries

    - by Nathan Bowers
    The Problem: I can't connect to public, unencrypted WiFi at my local public library or Peet's Coffee. My Setup: Late 2006 MacBook Pro running 10.5.8. I have Parallels installed. It's supposed to work like this: 1) Connect to their unencrypted WiFi network 2) Open a browser, which redirects you to their "enter password/agree to terms" page. 3) Browse normally. I can connect to the WiFi network, but when I try to authenticate I always get stuck in a redirect loop. It's been like this for a while, even before I upgraded to 10.5.8. I never have trouble with encrypted networks or regular open WiFi. What I've tried: Disabling Parallels connections in Network Prefs. Superstition: somehow Parallels installed something in the network stack that's messing me up. Pinging the IP address of the WiFi node I'm connected to. I can ping it, it's there, but I still get stuck in this authentication redirect loop. Tried different browsers, tried different cookie and security settings. Even tried IE under Parallels. No dice. Tried flushing the DNS cache. Asked library and coffee shop employees for help. It didn't go well. My Question: Anybody else have this problem? What should I be looking for?

    Read the article

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS. My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would allow data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong in my understanding of it, though. Does anyone else use it, or care to share how feasible this is?
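
    A minimal sketch of the clustered-volume side, assuming two bricks on hosts named server1 and server2 and a volume called homes (all names are illustrative, not from the question); the CIFS/NTFS-ACL part would still be handled by re-exporting the mounted volume through Samba:

        gluster peer probe server2                # run from server1; joins the two nodes into one pool
        gluster volume create homes replica 2 server1:/export/homes server2:/export/homes
        gluster volume start homes
        mount -t glusterfs server1:/homes /mnt/homes   # native client mount; Samba would then share /mnt/homes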

    Read the article

  • Squid proxy on CentOS often disconnects with error: tunnelConnectTimeout(): tunnelState->servers is NULL

    - by Ela
    I am having frequent internet disconnection problems with the Squid proxy service. My server config: OS: CentOS release 6.3 (Final); model name: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz; cpu MHz: 1600.000. My local systems' IP range: 192.168.2.x. Server IP: 192.168.2.11. This server is also configured with LAMP for development, the Samba SMB file service manager, and no SVN currently. So I see the Squid proxy as the most likely culprit, since this is where connections stop, and I am sure of it because when I restart the server the internet starts working again, so something is wrong with the Squid service only. This server is connected to 14 other local Windows machines and basically serves as a central development node. I am able to resolve it by fully restarting the server, or sometimes by restarting just the Squid proxy, which is totally killing our development. I have attached my cache log file here for the error info. Sample error log: 2013/07/01 13:25:38| tunnelConnectTimeout(): tunnelState->servers is NULL 2013/07/01 13:25:41| tunnelConnectTimeout(): tunnelState->servers is NULL 2013/07/01 13:25:41| tunnelConnectTimeout(): tunnelState->servers is NULL 2013/07/01 13:25:50| clientProcessRequest: Invalid Request 2013/07/01 13:26:05| tunnelConnectTimeout(): tunnelState->servers is NULL Some help would make our lives easier. Thanks in advance.
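
    A few hedged first steps for narrowing this down; the debug section number in the comment is an assumption and may differ between Squid versions:

        squid -k parse                      # sanity-check squid.conf for syntax/config errors
        tail -f /var/log/squid/cache.log    # watch the tunnelConnectTimeout messages as they happen
        # temporarily raise verbosity in squid.conf, then reload:
        #   debug_options ALL,1 26,5        # section 26 covers CONNECT tunnelling in many builds (assumption)
        squid -k reconfigure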

    Read the article

  • DMG mounting warning message says "it may make computer less secure or cause other problems"

    - by Cawas
    When I try to open a DMG file I get this (I'll just transcribe the image): There may be a problem with this disk image. Are you sure you want to open it? Opening this disk image may make your computer less secure or cause other problems. What does that mean, in fact? What's really wrong with it, and what kind of problem can it cause just by mounting? Someone said: When you download a file in Leopard (and Snow Leopard), it's marked as a quarantined file. This occurs by the OS adding an attribute to the file, tagging where it came from (such as "downloaded by Safari"). This is what causes the user to see prompts when running files that were downloaded from the Internet; you may remember being asked to confirm you'd like to launch program XXX downloaded by Safari on XXX date. As a new part of Snow Leopard, files which are tagged with the quarantine attribute also have their integrity checked by fsck, and if that verification fails you will see the message you described, triggered by an unused node in the disc image. But really, I didn't get that. What's quarantine? I've just downloaded a file here on SL, tried to open it, and got that warning. Apple has something to say about quarantined files, and they seem to work the same on Leopard. Plus, I got that file using Google Chrome, while that feature seems to work only with Safari.
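
    You can inspect (and, if you trust the file, clear) the quarantine attribute from Terminal; Example.dmg is a placeholder for your download:

        xattr -l Example.dmg                        # lists extended attributes; look for com.apple.quarantine
        xattr -d com.apple.quarantine Example.dmg   # removes the quarantine flag (only for files you trust)
        hdiutil verify Example.dmg                  # separately checks the image's own checksum

    The quarantine flag is set by whichever app downloaded the file, which is also why a Chrome download can behave differently from a Safari one.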

    Read the article

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure. The last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high: one server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.
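
    For plain two-node redundancy (no load balancing), a floating IP managed by keepalived/VRRP is a much cheaper starting point than a load-balancing router. A minimal sketch, assuming Debian-style hosts and an illustrative address:

        apt-get install keepalived
        cat > /etc/keepalived/keepalived.conf <<'EOF'
        vrrp_instance VI_1 {
            state MASTER            # BACKUP on the second node
            interface eth0
            virtual_router_id 51
            priority 100            # use a lower value on the backup node
            virtual_ipaddress {
                192.0.2.10          # floating IP that the sites' DNS points at
            }
        }
        EOF
        service keepalived start

    If the master node dies, the backup takes over the floating IP in a few seconds; keeping the site content in sync between the two nodes is a separate problem (rsync, DRBD, or shared storage).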

    Read the article

  • Virtual machines interconnection inside Proxmox 2.1 Cluster

    - by Anton
    We have 3 physical servers (each with 1 NIC) in different datacentres, all of them interconnected by an OpenVPN bridged private network (10.x.x.x). Inside this network we have a fully functional 3-node Proxmox 2.1 cluster. So the actual question is: is there any "proper" way to make a "global" local network (172.16.x.x) for all VMs inside the cluster, so that even if we move a VM from one node to another we can still reach it by a static IP regardless of its physical location? BTW, we can't add a dedicated NIC to each server. Thanks in advance. EDIT: I have tried to make a separate OpenVPN bridge for 172.16.x.x; now I have two interfaces on each server: SRV1: openvpnbr1 - 172.16.13.1, vmbr0 - 172.16.1.1; SRV2: openvpnbr1 - 172.16.13.2, vmbr0 - 172.16.2.1. But now there is no connection between those ifaces: SRV1: ping 172.16.13.2 From 172.16.1.1 icmp_seq=2 Destination Host Unreachable; SRV2: ping 172.16.13.1 From 172.16.2.1 icmp_seq=2 Destination Host Unreachable. If I shut down the vmbr0 interfaces, there is a connection between the servers over OpenVPN, but vmbr0 is used by Proxmox... Where am I wrong?
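
    One hedged approach is to skip the second bridge entirely and instead enslave the tap interface of a dedicated tap-mode OpenVPN tunnel into the existing vmbr0 on each node, so the VMs on all three nodes sit on one layer-2 segment; the interface name tap1 is an assumption:

        # on each node, after a tap-mode tunnel (dev tap1) dedicated to VM traffic is up:
        brctl addif vmbr0 tap1      # add the tunnel interface to the Proxmox bridge
        brctl show vmbr0            # should now list the VM tap devices plus tap1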

    Read the article

  • Open Source Chef Server can't upload cookbook

    - by veilig
    I just set up the open source Chef server on an Ubuntu 12.04 EC2 instance. I've set up my web UI and am able to get responses from my knife commands, i.e. knife node list, knife client list, knife user list, etc. I'm able to update roles, data bags, environments, etc., but I cannot upload any cookbooks. I'm running my workstation on Mac OS X. I keep getting this output at the end of my command knife cookbook upload -VV curl. It doesn't matter what cookbook I upload, or if I upload them all - I keep getting the same response: DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::RemoteRequestID#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::Authenticator#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::Decompressor#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::CookieManager#handle_response DEBUG: Chef::HTTP calling Chef::HTTP::JSONToModelOutput#handle_response /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http/json_output.rb:51:in `handle_response': undefined method `chomp' for nil:NilClass (NoMethodError) from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:229:in `block in apply_response_middleware' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `each' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `inject' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:227:in `apply_response_middleware' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:144:in `request' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/http.rb:118:in `put' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/cookbook_uploader.rb:123:in `block in uploader_function_for' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `call' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:52:in `block (3 levels) in process' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `loop' from /usr/local/lib/ruby/gems/2.1.0/gems/chef-12.0.0.alpha.1/lib/chef/util/threaded_job_queue.rb:50:in `block (2 levels) in process' INFO: HTTP Request Returned 204 No Content:
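
    The trace shows a chef-12.0.0.alpha.1 gem on the workstation talking to an 11.x-era open source server, so one hedged workaround is pinning the workstation to a stable 11.x client; the exact version and cookbook name below are illustrative:

        gem uninstall chef --all
        gem install chef -v 11.16.4     # assumption: any stable 11.x release behaves the same here
        knife cookbook upload mycookbook -VV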

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal, either from the Install Disc or from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior was able to let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems ("speed reduced by disk malfunction" count was over 2000) when in the process of trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, command-line tools and DiskWarrior had reported all along that the SMART status of the drive was okay/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive in another machine or popping another hard drive inside the Mini.
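
    "Verified" only reflects the drive's overall SMART health flag; the per-attribute counters and a self-test from the Ubuntu CD say more (assuming the internal disk shows up as /dev/sda there):

        sudo smartctl -a /dev/sda            # full attribute table: Reallocated_Sector_Ct, Current_Pending_Sector, UDMA_CRC_Error_Count
        sudo smartctl -t short /dev/sda      # run the drive's built-in short self-test
        sudo smartctl -l selftest /dev/sda   # read the self-test result a few minutes later

    As a rough rule, a climbing CRC error count tends to implicate the cable/controller path, while reallocated or pending sectors point at the disk itself.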

    Read the article

  • Is it possible to stop the "Beep" device on all computers in a Windows network?

    - by sharptooth
    On Windows the default behavior is to make an annoying "beep" sound every time Windows thinks something notable happened. The result is that when someone sends a company-wide email via MS Exchange, all computers around my cubicle beep one by one. This is annoying and makes no sense. Luckily beeping can be shut off. Someone has to: open the "Device Manager", select "View - Show hidden devices", find the "Beep" device in the "Non-Plug and Play Devices" node, open its properties, go to the "Driver" tab, set "startup type" to "Disabled" and click "Stop". The "Beep" device will stop and no longer produce the useless sound. This solution however requires tracking down every computer and then talking to its user, which is not very convenient. Device Manager doesn't allow stopping a device on another computer. I'm looking for a solution that can be deployed by the administrators team. We have a domain, and the administrators even install programs company-wide automatically. Are there any means to stop the "Beep" device on all computers in the Windows network with some remote-administration features, automatically?
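
    The "Beep" device is backed by a kernel service simply named beep, so it can be stopped and disabled remotely with sc.exe; WORKSTATION01 is a placeholder for the target machine:

        sc \\WORKSTATION01 stop beep
        sc \\WORKSTATION01 config beep start= disabled

    Note the space after start=, which sc.exe requires. The same two lines (without the \\WORKSTATION01 part) can also be pushed out domain-wide as a Group Policy startup script.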

    Read the article

  • IIS 7.5 - WFF and ARR farm management

    - by smackaysmith
    We have two test web farms (IIS 7.5). The Florida web farm has two ARR servers and two content servers. The ARR servers have WFF and NLB installed. The ARR setup uses a shared config located on a file share. The content servers do not have WFF installed. There is one web farm, and it's managed on an ARR server. The Illinois web farm also has two ARR servers and two content servers. The ARR servers have WFF and NLB installed, and they use a shared config located on a share. One of the content servers has WFF installed, which makes it the controller; it's also the primary content server. Apparently, Illinois isn't properly configured. From what we've pieced together from various IIS.net articles and this post (http://ruslany.net/2010/07/web-farm-framework-2-0-overview/), the controller should be one of the ARR servers (like our Florida setup). The thing is, Florida's controller doesn't have a Primary server, nor can you set one of the content servers as Primary. It doesn't have the management piece showing the Trace messages when you click the Servers node (from the IIS console, Server Farms/FLFarm/Servers; http://ruslany.net/wp-content/uploads/2010/07/WebFarm8.png). That management piece does exist in the Illinois farm, but that's a bad configuration. What are we missing such that our Florida configuration doesn't have the Primary and Secondary content servers, and the management piece? I have looked for IIS role differences, but there are none.

    Read the article

  • OpenVPN, install a TAP adapter

    - by GolezTrol
    When I try to connect to my work VPN using OpenVPN, the connection fails with the message: All TAP-Win32 adapters on this system are currently in use. Many sources suggest looking in Control Panel\Network and Internet\Network Connections and enabling the TAP adapter, but when I look there, there is none. Now I've run addtap.bat, which is provided with OpenVPN, but I still don't get to see any TAP adapter, and logging in to the VPN still fails. The output of addtap.bat is: C:\Windows\system32>"C:\Program Files (x86)\OpenVPN\bin\tapinstall.exe" install "C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf" tap0801 Device node created. Install is complete when drivers are updated... Updating drivers for tap0801 from C:\Program Files (x86)\OpenVPN\driver\OemWin2k.inf. Drivers updated successfully. I've run both the OpenVPN setup and addtap.bat as administrator. I've run deltapall.bat to remove any (maybe hidden) adapters. It said it removed three of them, after which I ran addtap.bat again to try to create another one. I also run OpenVPN itself as administrator. What's wrong? Running Windows 7 Home Premium on an HP Pavilion dv7 4050ed. It has worked before, but I recently had to reinstall my laptop, for which I used the restore disks I created when I just got it. Everything else seems to work fine. == UPDATE == The TAP adapter is found in Device Manager, but apparently it is disabled because it is incompatible with Windows 7 64-bit. I've uninstalled OpenVPN GUI, downloaded a version that should be 64-bit compatible, and installed that. Still no cigar. Then I found a tip to install OpenVPN (version 9) after installing OpenVPN GUI, because that installs OpenVPN version 8. Now I have a v9 TAP driver in Device Manager, but it still doesn't work and shows up in Device Manager with an exclamation mark, and not at all in my network devices.

    Read the article

  • How does a computer know which IP address will route information to the internet? [closed]

    - by JohnMerlino
    Possible Duplicate: How does IPv4 Subnetting Work? For example, I have a computer with a Network Interface Card (NIC), which is an Ethernet card that is connected by an Ethernet cable to a router. There is also another computer with a cable that is connected to another port of the router. This is a Belkin router operating over Ethernet in the LAN. When I connect to serverfault.com, it maps to an IP address. My computer now has the task of connecting to that IP address. But my computer itself cannot connect to the serverfault IP address; only the router can. So the task of my computer is to find the IP address associated with the node that will do the routing to the public internet. How does my computer know that a particular IP address in the local network belongs to the router, and is not another computer connected to the network? Is this information configured manually in the operating system itself? Somehow my computer must know that it must send Ethernet frames to the router with the expectation that the router will then send the packet to a public IP. How does it know to send it to the router? Is the router's IP address stored in my computer like a key/value pair, e.g. "router"="192.168.2.6", so that when I enter a public IP address, my computer first knows to connect to 192.168.2.6?
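
    Short answer: it isn't guessed; DHCP (or a manual network setup) hands the computer a "default gateway" entry, which is stored in the routing table, and ARP is then used to learn that gateway's MAC address so the Ethernet frames can be addressed to it. You can inspect it yourself (on Windows the equivalent is ipconfig /all, under "Default Gateway"); addresses are whatever your router handed out:

        netstat -rn               # the "default" (or 0.0.0.0) line names the router's IP, e.g. 192.168.2.1
        route -n get default      # Mac OS X: shows the gateway plus the interface used to reach it
        arp -a                    # the gateway's IP mapped to the router's MAC address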

    Read the article

  • MacBook Pro (OSX Lion) - shuts down automatically before reaching login screen

    - by mkk
    When I try to launch my MacBook Pro I can see a progress bar on the loading screen. It goes to 1/15 or something like this and then it shuts down - I cannot even reach the login screen. It happened to me 2 months ago; I 'fixed' it by formatting my hard drive and installing OSX (Lion) again. This time I think the situation is a little bit different - I am able to enter single-user mode by pressing cmd + s. I then type /sbin/fsck -yf, and I get the error: ** Checking Journaled HFS Plus volume. The volume name is Macintosh HD ** Checking extents overflow file. ** Checking catalog file. Invalid node structure (4, 24704) ** The volume Macintosh HD could not be verified completely. /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8 but when I type exit, I can see the login screen and I can log in. I have tried a lot of things, booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found was to buy DiskWarrior, which might fix the issue. Any other suggestions? A secondary question is what causes this issue? I thought the reason was bad sectors, but Smart Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test - all tests passed. Right now I am using my workaround of entering single-user mode and then typing exit; however, I am not sure how long it will 'work'. Of course I have backed up what I considered important. Thanks for the help in advance! UPDATE: I guess Smart Utility was not very useful; I managed to get an input/output error, which I believe is equivalent to a bad sector.

    Read the article

  • ESX Scheduler and NUMA issue

    - by babyg_wc
    On our 24-core BL685 (4 sockets x 6 cores), we find that NUMA nodes 0 and 1 are pretty busy (unfortunately resulting in elevated CPU ready times on the VMs), whilst NUMA nodes 2 and 3 are almost unused. I thought this might just be an ESX4 U1 issue, so I had a colleague with a 32-core (DL785) farm investigate, and it seems that his last 3 or 4 NUMA nodes are also not really being utilised. ESX seems to have a weakness when it comes to balancing lightly loaded NUMA boxes. I'm going to enable node interleaving in the BIOS and see if the scheduler balances across all 24 cores, instead of just 12!... For those of you with large core counts, I would suggest you fire up your VI client and check physical CPU usage (or esxtop); I would be interested to hear what your results are. Please note that it's only the lightly loaded hosts (e.g. less than 30% CPU load on the ESX host) that seem to have the biggest issue with load imbalance. Thoughts/comments? PS: I've logged an SR with VMware to assist. Also, the other "problem" could be that we have 128GB of RAM in each host, and therefore the scheduler sees no good reason why it shouldn't try to cram all VMs into the first two NUMA nodes, as we only have around 50GB of RAM worth of VMs on each host...

    Read the article

  • DL380 G7: Not able to access iLO on DL380 via SSH from a client

    - by user117140
    I have a problem where I can't access my iLO (SSH to the iLO IP) through a client which is in a different network. I am able to ping the iLO IP from this client, but SSH access is not possible. Is it possible to SSH to the iLO IP from a client which is in a different network? FYI, from the same client I can SSH to the server's application IP, but SSH to this server's iLO IP is not possible. Kindly help. Some more info added: the iLO IP address is 10.247.172.70 and its VLAN is different from the client's VLAN. The client IP address is 10.247.167.80. Pinging the iLO IP from this client is possible, but not SSH. I can SSH to the iLO IP if I try it from the server (hostname: node1) that has the iLO port, or from the other node of this cluster itself, so SSH login is enabled. [root@node1 ~]$ssh -v 10.247.173.70 OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug1: Connecting to 10.247.173.70 [10.247.173.70] port 22. [root@node1 ~]$ping 10.247.173.70 PING 10.247.173.70 (10.247.173.70) 56(84) bytes of data. 64 bytes from 10.247.173.70: icmp_seq=1 ttl=254 time=0.283 ms 64 bytes from 10.247.173.70: icmp_seq=2 ttl=254 time=0.344 ms 64 bytes from 10.247.173.70: icmp_seq=3 ttl=254 time=0.324 ms 64 bytes from 10.247.173.70: icmp_seq=4 ttl=254 time=0.367 ms

    Read the article

  • Successful login with iscsiadm on target still doesn't create block device

    - by Halfgaar
    I've set up an experiment to test iscsitarget and the initiator, which at some point worked. Later, I turned the setup back on and, much to my dismay, the initiator machine stopped making block devices for its successful logins. As far as I know, I haven't changed anything on either machine. Some details: # iscsiadm -m node --login Logging in to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun1, portal: 10.0.0.1,3260] Logging in to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun2, portal: 10.0.0.1,3260] Login to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun1, portal: 10.0.0.1,3260]: successful Login to [iface: default, target: iqn.2010-12.nl.ytec.arbiter:arbiter.lun2, portal: 10.0.0.1,3260]: successful Sessions: # iscsiadm -m session tcp: [3] 10.0.0.1:3260,1 iqn.2010-12.nl.ytec.arbiter:arbiter.lun1 tcp: [4] 10.0.0.1:3260,1 iqn.2010-12.nl.ytec.arbiter:arbiter.lun2 Netstat: # netstat -n -p|grep 3260 tcp 0 0 10.0.0.2:48719 10.0.0.1:3260 ESTABLISHED 1078/iscsid tcp 0 0 10.0.0.2:48718 10.0.0.1:3260 ESTABLISHED 1078/iscsid /var/log/syslog doesn't give errors: Jan 27 11:41:49 vmnode001 kernel: [ 378.041749] scsi7 : iSCSI Initiator over TCP/IP Jan 27 11:41:49 vmnode001 kernel: [ 378.044180] scsi8 : iSCSI Initiator over TCP/IP lsscsi doesn't show my devices: [0:0:1:0] cd/dvd TSSTcorp DVD-ROM TS-L333A D100 /dev/sr0 [4:0:0:0] disk ATA Hitachi HUA72105 A74A - [4:0:1:0] disk ATA Hitachi HUA72105 A74A - [4:1:0:0] disk Dell VIRTUAL DISK 1028 /dev/sda And there are no block devices in /dev for it: # ls -1 /dev/sd* /dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda4 I tried loading all the SCSI kernel modules I could find, but that doesn't seem to be the problem. I really don't get this; it used to work. I found people with similar problems (here and here) but no solution. The initiator is Debian Squeeze (testing), the target is Debian Lenny (stable). iscsitarget is 0.4.16+svn162-3.1+lenny1, open-iscsi (initiator) is 2.0.871.3-2squeeze1. Target kernel: 2.6.26-2-amd64, initiator kernel: 2.6.32-5-amd64
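
    A couple of hedged checks from the initiator side: a successful login with no disk usually means the target answered but exposed no LUN to this initiator (e.g. an ACL or LUN mapping change on the target), which the session detail and a rescan will show:

        iscsiadm -m session -P 3       # per-session detail; look for "Attached scsi disk sdX" under each LUN
        iscsiadm -m session --rescan   # ask the kernel to rescan the logged-in sessions for LUNs
        dmesg | tail -n 30             # "Attached SCSI disk" vs. LUN/access errors after the rescan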

    Read the article

  • How to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and it eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I'm hoping something else is faster. The NAS drive is a DLink DNS-313 with a 1TB drive installed, connected to the router via an Ethernet cable. The USB drive is a Seagate 1TB. It can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during backup to speed up the process.
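
    robocopy ships with Vista and Windows 7, copes with long paths far better than xcopy, and /MIR keeps the USB drive as an exact mirror, so a restore is just a copy in the other direction. A hedged one-liner (the share name and drive letter are placeholders):

        robocopy \\DNS-313\Volume_1 F:\nas-backup /MIR /FFT /R:2 /W:5 /LOG:C:\nas-backup.log

    /FFT relaxes timestamp comparison to 2-second granularity, which avoids needless recopies from NAS boxes whose filesystems round file times; after the first full run, weekly passes only transfer what changed.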

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cronjob (because it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as documented. Everything seems to work fine, except that it doesn't, e.g. when manually executing the script: # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd> Committed revision 19817. Copied properties for revision 19817. No error, no complaints. But if checking for the revision properties it says: # svnlook info <path-to-mirror> 0 # svn info -r HEAD file://<path-to-mirror> 2>&1 Path: <root-of-mirror> URL: file://<path-to-mirror> Repository Root: file://<path-to-mirror> Repository UUID: <uid> Revision: 19817 Node Kind: directory Last Changed Rev: 19817 So somehow the author and timestamp information gets lost. But we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to manually recopy via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says Copied properties for revision 19885. But when I query them, it's just the same. Any ideas how I could approach this problem, or even better -- how to solve it? Any ideas appreciated.
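
    Worth double-checking that the hook on the mirror is the executable pre-revprop-change script (not the .tmpl template) and that it accepts every property for the account doing the sync; a minimal version, with the user name "svnsync" as an assumption, looks like this:

        #!/bin/sh
        # <path-to-mirror>/hooks/pre-revprop-change, called as: REPOS REV USER PROPNAME ACTION
        USER="$3"
        [ "$USER" = "svnsync" ] && exit 0      # allow the sync account to set svn:author, svn:date, svn:log
        echo "Revision property changes are only allowed for the svnsync user" >&2
        exit 1

    You can confirm whether the properties actually made it across with: svn proplist --revprop -r 19817 file://<path-to-mirror>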

    Read the article

  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command: # firewall-cmd --zone=public --add-port=3000/tcp --permanent However I can't access the server on port 3000. From an external box: telnet 178.62.16.244 3000 Trying 178.62.16.244... telnet: connect to address 178.62.16.244: Connection refused There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000 which works fine externally. My app is definitely listening on the port too: Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN 99 36797 18662/node firewall-cmd doesn't seem to show the port either - see how ports is empty. You can see the forward rule I mentioned earlier. # firewall-cmd --list-all public (default, active) interfaces: eth0 sources: services: dhcpv6-client ssh ports: masquerade: no forward-ports: port=80:proto=tcp:toport=8000:toaddr= icmp-blocks: rich rules: However I can see the rule in the XML config file: # cat /etc/firewalld/zones/public.xml <?xml version="1.0" encoding="utf-8"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description> <service name="dhcpv6-client"/> <service name="ssh"/> <port protocol="tcp" port="3000"/> <forward-port to-port="8000" protocol="tcp" port="80"/> </zone> What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
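
    --permanent only writes the XML file; the running firewall doesn't pick it up until a reload, which matches the empty "ports:" list you see. A hedged fix:

        firewall-cmd --reload                                  # load the permanent configuration into the running firewall
        firewall-cmd --zone=public --list-ports                # should now include 3000/tcp
        # or, next time, add the rule to both the runtime and the permanent config:
        firewall-cmd --zone=public --add-port=3000/tcp
        firewall-cmd --zone=public --add-port=3000/tcp --permanent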

    Read the article

  • Can't make nodejs with mingw32: pkg-config can't find gnutls

    - by valya
    I'm trying to compile nodejs using MSYS, mingw32 on Windows 7 64-bit. Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs $ ./configure Checking for program CL : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\x86_amd64\CL.exe Checking for program CL : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\CL.exe Checking for program CL : ok C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\BIN\amd64\CL.exe Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\x86_amd64\CL.exe Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\CL.exe Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe Checking for program CL : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\CL.exe Checking for program LINK : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LINK.exe Checking for program LIB : ok c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\amd64\LIB.exe Checking for program MT : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\MT.exe Checking for program RC : ok C:\Program Files\\Microsoft SDKs\Windows\v6.0A\bin\x64\RC.exe Checking for msvc : ok Checking for msvc : ok Checking for library dl : not found Checking for library execinfo : not found Checking for gnutls >= 2.5.0 : fail --- libeio --- Checking for library pthread : not found Checking for function pthread_create : not found error: the configuration failed (see 'C:\\msys\\1.0\\home\\Valentin_Golev\\nodejs\\build\\config.log') I have gnutls built and installed! I've checked the config.log, and there was a command: pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls I typed it in the console Valentin Golev@VALYASNOTEBOOK /home/Valentin_Golev/nodejs $ pkg-config --errors-to-stdout --print-errors --atleast-version=2.5.0 gnutls Package gnutls was not found in the pkg-config search path. Perhaps you should add the directory containing `gnutls.pc' to the PKG_CONFIG_PATH environment variable No package 'gnutls' found But, Valentin Golev@VALYASNOTEBOOK ~ $ $PKG_CONFIG_PATH sh: c:/msys/1.0/local/lib/pkgconfig: is a directory Valentin Golev@VALYASNOTEBOOK ~ $ cd $PKG_CONFIG_PATH Valentin Golev@VALYASNOTEBOOK /local/lib/pkgconfig $ ls gnutls-extra.pc gnutls.pc What am I doing wrong?
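
    One thing to rule out: PKG_CONFIG_PATH being set in your interactive shell but not exported into the environment configure runs in, or the pkg-config binary wanting a full path rather than the MSYS shorthand. A hedged check (the exact path is an assumption based on your output):

        export PKG_CONFIG_PATH=/c/msys/1.0/local/lib/pkgconfig   # or /usr/local/lib/pkgconfig inside MSYS
        pkg-config --modversion gnutls                           # should print the installed version once the .pc file is found
        ./configure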

    Read the article

  • What is the quickest reliable way to backup a NAS drive to a USB drive?

    - by Tim Murphy
    How would you back up 600+ GB of data on a NAS (Network-Attached Storage) drive to a USB external drive? The NAS drive does not contain mission-critical data; nonetheless I wish to make weekly copies of it just in case. The NAS drive is almost exclusively used as an archive dump and is rarely updated. However, the backup strategy used must have a simple restore procedure so I can confidently say the data now on the NAS drive is exactly how it was at the time of backup. I did try xcopy, but it seemed like it would take many, many hours and it eventually crashed with insufficient memory. http://www.ctunion.com/node/114 suggests I would need to use xxcopy instead due to folder/file name lengths. My concern with xcopy/xxcopy is the length of time it takes; I'm hoping something else is faster. The NAS drive is a DLink DNS-313 with a 1TB drive installed, connected to the router via an Ethernet cable. The USB drive is a Seagate 1TB. It can be connected to Windows Vista (preferred) or Windows 7 PCs. Both PCs are usually connected wirelessly; however, an Ethernet cable can be used during backup to speed up the process.

    Read the article

  • How to manage process-to-CPU-core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points: the GlusterFS processes, and the I/O for data access (on local disks, or remote disks). I have thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that: no grid job will impact the kernel and the GlusterFS instances, and researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs). But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
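
    A hedged sketch of the pieces involved, pinning GlusterFS to the first two logical CPUs as an example (the process name pattern, core numbers and IRQ number are illustrative):

        # pin the running GlusterFS daemons to CPUs 0-1
        for pid in $(pgrep -f gluster); do taskset -cp 0,1 "$pid"; done

        # keep the scheduler from placing grid jobs on those CPUs (kernel command line): isolcpus=0,1

        # steer the disk/NIC interrupts onto the same CPUs, e.g. for IRQ 30:
        echo 3 > /proc/irq/30/smp_affinity    # bitmask 0x3 = CPU0 + CPU1

    cpusets (via cgroups) are the more flexible alternative to isolcpus if you want to move cores between the system pool and the job pool without a reboot.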

    Read the article

  • Trouble cloning a MacBook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250GB hard drive in my MacBook Pro (early 2008 model) to a 750GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine, I can format it, etc. However, every time I try to clone the drive, I am getting Input/Output errors. Before the clone operation, I have verified both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors: I have tried two approaches for cloning the drive: Using Disk Utility, by starting from the OSX install DVD Using Carbon Copy Cloner The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors: rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5) rsync: stat "<... an external filename ...>" failed: Input/output error (5) The actual files affected seem to be different across different runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (had to take my MacBook Pro to work), but so far it didn't report any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250GB). As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?
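
    When the source disk itself throws I/O errors, GNU ddrescue (from MacPorts, or a Linux live CD) tends to be a better fit than rsync-based cloners, because it copies everything readable first and retries the bad areas later instead of aborting. Device names below are assumptions, so double-check them with diskutil list and run from a system that isn't booted off the source disk:

        sudo ddrescue -f -n /dev/rdisk0 /dev/rdisk2 ~/rescue.map   # first pass: copy readable data, skip slow error retries
        sudo ddrescue -f -r3 /dev/rdisk0 /dev/rdisk2 ~/rescue.map  # second pass: retry only the bad areas, up to 3 times

    The map file lets you stop and resume, and its error list tells you how much data was genuinely unreadable, which also helps decide whether the old drive or something else is to blame.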

    Read the article

  • Disabling Keyboard Wakeup for Ubuntu 10.04 on Acer 1810TZ

    - by sybreon
    My Acer Aspire 1810TZ laptop suspends fine but wakes up on any slight key-press. I would like to disable this behaviour. I read that it involves disabling something in the /proc/acpi/wakeup but SLPB does not seem to be listed at all. root@1810TZ:/etc# cat /proc/acpi/wakeup Device S-state Status Sysfs node UHC0 S3 disabled pci:0000:00:1d.0 UHC1 S3 disabled pci:0000:00:1d.1 UHC2 S3 disabled pci:0000:00:1d.2 UHCR S3 disabled EHC1 S3 disabled pci:0000:00:1d.7 UHC3 S3 disabled pci:0000:00:1a.0 UHC4 S3 disabled UHC5 S3 disabled EHC2 S3 disabled pci:0000:00:1a.7 EXP1 S4 disabled pci:0000:00:1c.0 PXSX S4 disabled pci:0000:01:00.0 EXP2 S4 disabled PXSX S4 disabled EXP3 S4 disabled PXSX S4 disabled EXP4 S4 disabled pci:0000:00:1c.3 PXSX S4 disabled pci:0000:02:00.0 EXP5 S4 disabled PXSX S4 disabled EXP6 S4 disabled PXSX S4 disabled However, the relevant bits seem to be detected from dmesg. [ 0.357628] ACPI: AC Adapter [ACAD] (on-line) [ 0.357749] input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 [ 0.357754] ACPI: Power Button [PWRB] [ 0.357817] input: Lid Switch as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0D:00/input/input1 [ 0.359319] ACPI: Lid Switch [LID0] [ 0.359390] input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 [ 0.359394] ACPI: Sleep Button [SLPB] [ 0.359475] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 [ 0.359479] ACPI: Power Button [PWRF] Not quite sure what to do next.
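
    A hedged place to look next: the internal keyboard usually hangs off the i8042 controller rather than an ACPI SLPB entry, and each input device has its own wakeup switch in sysfs (serio0 being the keyboard is an assumption; check which serio node identifies itself as the keyboard):

        grep . /sys/devices/platform/i8042/serio*/power/wakeup   # "enabled" means that device may wake the machine
        echo disabled > /sys/devices/platform/i8042/serio0/power/wakeup
        # for a USB keyboard the equivalent switch lives under the USB device:
        grep . /sys/bus/usb/devices/*/power/wakeup

    The echo is not persistent across reboots, so once it proves to work it would normally go into /etc/rc.local or a udev rule.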

    Read the article

  • Munin graphing by CGI

    - by Vaughn Hawk
    I have Munin working just fine, but any time I try to do CGI graphing it just stops graphing... no errors in the log, nothing. I've followed the instructions here: http://munin-monitoring.org/wiki/CgiHowto - and it should be working. Here's my munin.conf setup, at least the parts that matter: dbdir /var/lib/munin htmldir /var/www/munin logdir /var/log/munin rundir /var/run/munin tmpldir /etc/munin/templates graph_strategy cgi cgiurl /usr/lib/cgi-bin cgiurl_graph /cgi-bin/munin-cgi-graph And then the host info, yada yada. graph_strategy cgi and cgiurl are commented out in munin.conf - that's because if I uncomment them, graphing stops working. Again, I get no errors in the logs, just blank images where the graphs used to be. Comment out cgi? As soon as munin-html runs again, everything is back to normal. I'm running the latest version of munin and munin-node - I've tried FastCGI and regular CGI - permissions for all of the directories involved are munin:www-data - and my httpd.conf file looks like this: ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory /usr/lib/cgi-bin/> AllowOverride None SetHandler fastcgi-script Options ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> <Location /cgi-bin/munin-cgi-graph> SetHandler fastcgi-script </Location> Does anyone have any ideas? Without this working, at least from what I understand, Munin just graphs everything even if no one is looking at the graphs - you add 100 servers to graph, and this starts to become a problem. Hope someone has run into this and can help me out. Thanks!
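
    A hedged checklist, since blank images with an empty log often mean the CGI process can't write where it wants to; paths follow the Debian-style layout in the config above, and the test graph path is illustrative:

        ls -l /usr/lib/cgi-bin/munin-cgi-graph        # must be executable by the web server user
        mkdir -p /var/lib/munin/cgi-tmp
        chown -R munin:www-data /var/lib/munin/cgi-tmp
        chmod -R g+w /var/lib/munin/cgi-tmp
        # run the wrapper by hand as the web server user and read its complaint directly:
        sudo -u www-data env PATH_INFO=/localdomain/host.localdomain/cpu-day.png /usr/lib/cgi-bin/munin-cgi-graph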

    Read the article
