Search Results

Search found 21130 results on 846 pages for 'nfs client'.

Page 12/846 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • How to get more NFS packet details from Wireshark?

    - by Joe Swanson
    How can I get Wireshark to give me details about NFS packets at this level of granularity (as exemplified here)? Specifically, I am interested in looking at the "Stable" field toward the bottom. When I analyze captured packets (whether captured directly in Wireshark, imported from a tshark dump, or imported from a tcpdump dump), I do not see a "Network File System" section in the packet details; I only get general TCP information. Wireshark recognizes that a packet is destined for an NFS port, but I am not able to see these details. Any ideas?
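
    If the dissector is not picking the traffic up automatically, one thing worth trying is forcing RPC/NFS decoding on the port in question. A minimal sketch, assuming the server is on the standard NFS port 2049 and the capture is in capture.pcap (both assumptions):

      # Force traffic on TCP port 2049 to be decoded as ONC-RPC, which then
      # hands NFS payloads to the NFS dissector; -V prints full packet details.
      tshark -r capture.pcap -d tcp.port==2049,rpc -V

      # In the GUI the equivalent is: right-click a packet -> "Decode As..."
      # and map the TCP port to RPC.

    Note that the "Stable" field itself only appears inside NFS WRITE calls, so a capture that contains only reads will not show it.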

    Read the article

  • Regular Windows 7 BSOD with Shrew VPN client

    - by Junto
    The Shrew VPN client appears to be a good alternative to the Cisco VPN Client on x64 Windows 7. However, since installing it I've seen fairly regular BSODs. Minidump attached:

      Microsoft (R) Windows Debugger Version 6.11.0001.404 AMD64
      Copyright (c) Microsoft Corporation. All rights reserved.
      Loading Dump File [D:\Temp\020810-23431-01.dmp]
      Mini Kernel Dump File: Only registers and stack trace are available
      Symbol search path is: SRV*d:\symbols*http://msdl.microsoft.com/download/symbols
      Executable search path is:
      Windows 7 Kernel Version 7600 MP (8 procs) Free x64
      Product: WinNt, suite: TerminalServer SingleUserTS
      Built by: 7600.16385.amd64fre.win7_rtm.090713-1255
      Machine Name:
      Kernel base = 0xfffff800`0285f000 PsLoadedModuleList = 0xfffff800`02a9ce50
      Debug session time: Mon Feb 8 18:08:12.887 2010 (GMT+1)
      System Uptime: 0 days 7:52:06.120
      Loading Kernel Symbols ...
      Loading User Symbols
      Loading unloaded module list ...
      *******************************************************************************
      *                             Bugcheck Analysis                               *
      *******************************************************************************
      Use !analyze -v to get detailed debugging information.
      BugCheck A, {0, 2, 0, fffff800028d50b6}
      Unable to load image \SystemRoot\system32\DRIVERS\vfilter.sys, Win32 error 0n2
      *** WARNING: Unable to verify timestamp for vfilter.sys
      *** ERROR: Module load completed but symbols could not be loaded for vfilter.sys
      Probably caused by : vfilter.sys ( vfilter+29a6 )
      Followup: MachineOwner

    The machine is a brand spanking new Dell Precision T5500. Super User appears to have several recommendations for the Shrew VPN client as an alternative to the Cisco VPN client on 64-bit machines, so I wondered whether anyone here has seen this problem and possibly found a solution. For the moment I've decided to run the VPN client in compatibility mode (Vista SP2) with administrator privileges to see if it makes a difference. Oddly, the VPN doesn't necessarily need to be connected: I've noticed the crashes when browsing with Google Chrome and Internet Explorer, usually when I open a new tab. If it carries on I think I'll be forced to shell out the 120 EUR for the NCP client instead.
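
    A hedged aside: when a minidump points at a third-party driver like this, a couple of extra WinDbg commands can help confirm exactly which driver build is installed before reinstalling or upgrading it. !analyze -v prints the full bugcheck analysis, lmvm vfilter shows the module's path and timestamp (the name vfilter is taken from the dump above), and k dumps the stack of the crashing thread:

      !analyze -v
      lmvm vfilter
      k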

    Read the article

  • PPTP ping client to client error

    - by Linux Intel
    I installed a PPTP server on a CentOS 6 64-bit server.

      PPTP Server IP : 55.66.77.10
      PPTP Local IP  : 10.0.0.1
      Client1 IP     : 10.0.0.60 (CentOS 5 64-bit)
      Client2 IP     : 10.0.0.61 (CentOS 5 64-bit)

    The PPTP server can ping Client1 and Client1 can ping the PPTP server; the PPTP server can ping Client2 and Client2 can ping the PPTP server. The problem is that Client1 cannot ping Client2.

    route -n on the PPTP server:

      Destination   Gateway        Genmask          Flags Metric Ref Use Iface
      10.0.0.60     0.0.0.0        255.255.255.255  UH    0      0   0   ppp0
      10.0.0.61     0.0.0.0        255.255.255.255  UH    0      0   0   ppp1
      55.66.77.10   0.0.0.0        255.255.255.248  U     0      0   0   eth0
      10.0.0.0      0.0.0.0        255.0.0.0        U     0      0   0   eth0
      0.0.0.0       55.66.77.19    0.0.0.0          UG    0      0   0   eth0

    route -n on Client1:

      Destination   Gateway        Genmask          Flags Metric Ref Use Iface
      10.0.0.1      0.0.0.0        255.255.255.255  UH    0      0   0   ppp0
      55.66.77.10   70.14.13.19    255.255.255.255  UGH   0      0   0   eth0
      10.0.0.0      0.0.0.0        255.0.0.0        U     0      0   0   eth1
      0.0.0.0       70.14.13.19    0.0.0.0          UG    0      0   0   eth0

    route -n on Client2:

      Destination   Gateway        Genmask          Flags Metric Ref Use Iface
      10.0.0.1      0.0.0.0        255.255.255.255  UH    0      0   0   ppp0
      55.66.77.10   84.56.120.60   255.255.255.255  UGH   0      0   0   eth1
      10.0.0.0      0.0.0.0        255.0.0.0        U     0      0   0   eth0
      0.0.0.0       84.56.120.60   0.0.0.0          UG    0      0   0   eth1

    cat /etc/ppp/options.pptpd on the PPTP server:

      ###############################################################################
      # $Id: options.pptpd,v 1.11 2005/12/29 01:21:09 quozl Exp $
      #
      # Sample Poptop PPP options file /etc/ppp/options.pptpd
      # Options used by PPP when a connection arrives from a client.
      # This file is pointed to by /etc/pptpd.conf option keyword.
      # Changes are effective on the next connection. See "man pppd".
      #
      # You are expected to change this file to suit your system. As
      # packaged, it requires PPP 2.4.2 and the kernel MPPE module.
      ###############################################################################

      # Authentication

      # Name of the local system for authentication purposes
      # (must match the second field in /etc/ppp/chap-secrets entries)
      name pptpd

      # Strip the domain prefix from the username before authentication.
      # (applies if you use pppd with chapms-strip-domain patch)
      #chapms-strip-domain

      # Encryption
      # (There have been multiple versions of PPP with encryption support,
      # choose with of the following sections you will use.)

      # BSD licensed ppp-2.4.2 upstream with MPPE only, kernel module ppp_mppe.o
      # {{{
      refuse-pap
      refuse-chap
      refuse-mschap
      # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
      # Challenge Handshake Authentication Protocol, Version 2] authentication.
      require-mschap-v2
      # Require MPPE 128-bit encryption
      # (note that MPPE requires the use of MSCHAP-V2 during authentication)
      require-mppe-128
      # }}}

      # OpenSSL licensed ppp-2.4.1 fork with MPPE only, kernel module mppe.o
      # {{{
      #-chap
      #-chapms
      # Require the peer to authenticate itself using MS-CHAPv2 [Microsoft
      # Challenge Handshake Authentication Protocol, Version 2] authentication.
      #+chapms-v2
      # Require MPPE encryption
      # (note that MPPE requires the use of MSCHAP-V2 during authentication)
      #mppe-40     # enable either 40-bit or 128-bit, not both
      #mppe-128
      #mppe-stateless
      # }}}

      # Network and Routing

      # If pppd is acting as a server for Microsoft Windows clients, this
      # option allows pppd to supply one or two DNS (Domain Name Server)
      # addresses to the clients. The first instance of this option
      # specifies the primary DNS address; the second instance (if given)
      # specifies the secondary DNS address.
      #ms-dns 10.0.0.1
      #ms-dns 10.0.0.2

      # If pppd is acting as a server for Microsoft Windows or "Samba"
      # clients, this option allows pppd to supply one or two WINS (Windows
      # Internet Name Services) server addresses to the clients. The first
      # instance of this option specifies the primary WINS address; the
      # second instance (if given) specifies the secondary WINS address.
      #ms-wins 10.0.0.3
      #ms-wins 10.0.0.4

      # Add an entry to this system's ARP [Address Resolution Protocol]
      # table with the IP address of the peer and the Ethernet address of this
      # system. This will have the effect of making the peer appear to other
      # systems to be on the local ethernet.
      # (you do not need this if your PPTP server is responsible for routing
      # packets to the clients -- James Cameron)
      proxyarp

      # Normally pptpd passes the IP address to pppd, but if pptpd has been
      # given the delegate option in pptpd.conf or the --delegate command line
      # option, then pppd will use chap-secrets or radius to allocate the
      # client IP address. The default local IP address used at the server
      # end is often the same as the address of the server. To override this,
      # specify the local IP address here.
      # (you must not use this unless you have used the delegate option)
      #10.8.0.100

      # Logging

      # Enable connection debugging facilities.
      # (see your syslog configuration for where pppd sends to)
      debug

      # Print out all the option values which have been set.
      # (often requested by mailing list to verify options)
      #dump

      # Miscellaneous

      # Create a UUCP-style lock file for the pseudo-tty to ensure exclusive
      # access.
      lock

      # Disable BSD-Compress compression
      nobsdcomp

      # Disable Van Jacobson compression
      # (needed on some networks with Windows 9x/ME/XP clients, see posting to
      # poptop-server on 14th April 2005 by Pawel Pokrywka and followups,
      # http://marc.theaimsgroup.com/?t=111343175400006&r=1&w=2 )
      novj
      novjccomp

      # turn off logging to stderr, since this may be redirected to pptpd,
      # which may trigger a loopback
      nologfd

      # put plugins here
      # (putting them higher up may cause them to sent messages to the pty)

    cat /etc/ppp/options.pptp on Client1 and Client2:

      ###############################################################################
      # $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $
      #
      # Sample PPTP PPP options file /etc/ppp/options.pptp
      # Options used by PPP when a connection is made by a PPTP client.
      # This file can be referred to by an /etc/ppp/peers file for the tunnel.
      # Changes are effective on the next connection. See "man pppd".
      #
      # You are expected to change this file to suit your system. As
      # packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/
      # and the kernel MPPE module available from the CVS repository also on
      # http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.
      ###############################################################################

      # Lock the port
      lock

      # Authentication
      # We don't need the tunnel server to authenticate itself
      noauth

      # We won't do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2
      # (you may need to remove these refusals if the server is not using MPPE)
      refuse-pap
      refuse-eap
      refuse-chap
      refuse-mschap

      # Compression
      # Turn off compression protocols we know won't be used
      nobsdcomp
      nodeflate

      # Encryption
      # (There have been multiple versions of PPP with encryption support,
      # choose which of the following sections you will use. Note that MPPE
      # requires the use of MSCHAP-V2 during authentication)
      #
      # Note that using PPTP with MPPE and MSCHAP-V2 should be considered
      # insecure:
      # http://marc.info/?l=pptpclient-devel&m=134372640219039&w=2
      # https://github.com/moxie0/chapcrack/blob/master/README.md
      # http://technet.microsoft.com/en-us/security/advisory/2743314

      # http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras
      # ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o
      # If the kernel is booted in FIPS mode (fips=1), the ppp_mppe.ko module
      # is not allowed and PPTP-MPPE is not available.
      # {{{
      # Require MPPE 128-bit encryption
      #require-mppe-128
      # }}}

      # http://mppe-mppc.alphacron.de/ fork from PPP project by Jan Dubiec
      # ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o
      # {{{
      # Require MPPE 128-bit encryption
      #mppe required,stateless
      # }}}

    iptables is stopped on the clients and the server, and net.ipv4.ip_forward = 1 is enabled on the PPTP server. How can I solve this problem?
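
    Not a fix, just a hedged way to narrow the problem down: with forwarding already enabled, it can help to watch the server's ppp interfaces while Client1 pings Client2, to see whether the echo requests arrive on ppp0 and whether they are actually forwarded out of ppp1. A minimal sketch (interface names taken from the routing tables above):

      # On the PPTP server, in two terminals, while Client1 pings 10.0.0.61:
      tcpdump -ni ppp0 icmp     # do the echo requests from Client1 arrive here?
      tcpdump -ni ppp1 icmp     # are they forwarded out towards Client2?

      # If packets show up on ppp0 but never on ppp1, check per-interface
      # forwarding as well as the global switch:
      sysctl net.ipv4.conf.ppp0.forwarding net.ipv4.conf.ppp1.forwarding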

    Read the article

  • What's the best way to mitigate NFS and sudo?

    - by user225874
    Quick background: We have 40 workstations running Linux. NFS is used extensively for bulk data storage and home directories. This allows users to roam freely with relatively transparent file systems. This is an educational environment where postdocs and students have successfully pulled off a coup of sorts: all have gained root on their individual workstations by grooming a technophobic PI who thinks IT people are evil. If I so much as suggest chroot or sudo restrictions, I'll find myself working out of a broom closet. With that in mind, what's the best way to mitigate something like the sequence below?

      $ hostname
      workstation1
      $ whoami
      john
      $ sudo su jane
      $ whoami
      jane
      $ cp -R /home/nfs/jane /mnt/thumbdrive/
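
    One direction often suggested for exactly this situation is Kerberized NFS, since the default sec=sys trusts whatever UID a (root-controlled) client presents, while sec=krb5 requires a per-user ticket. A hedged sketch only, assuming a working KDC and keytabs already exist; the export path and hostnames are placeholders:

      # /etc/exports on the NFS server: require Kerberos authentication
      # (krb5p also encrypts the traffic; krb5 is authentication only)
      /export/home  *.lab.example.com(rw,sec=krb5p)

      # Matching client mount, e.g. in /etc/fstab:
      nfsserver:/export/home  /home  nfs  sec=krb5p,hard,intr  0 0

    With this in place, "sudo su jane" on a workstation no longer grants access to jane's NFS files unless the attacker also holds jane's Kerberos credentials.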

    Read the article

  • Rsync over NFS with QoS: How to view real transfer speed?

    - by Ian Mackinnon
    We have a bandwidth limit between a Linux server and a NAS, created using 'tc' with an IP filter. When writing to an NFS mount of the NAS, rsync claims a very high transfer speed for each file and then waits a long time before acknowledging that everything has finished. The total time taken is consistent with the QoS limit and the time taken by the same transfer over FTP. Why does the write to the NFS mount report higher transfer speeds than are actually happening over the network? How can I monitor the actual bandwidth of the transfer?
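
    rsync is essentially reporting how fast it can write into the client's page cache, not how fast the NFS client is pushing data onto the wire, which is why the per-file numbers look inflated until the final flush. A hedged way to watch the real rate is to sample the interface counters while the transfer runs; the interface name eth0 is an assumption:

      # Per-interface throughput sampled once per second (sysstat package)
      sar -n DEV 1 | grep eth0

      # Rough alternative without sysstat: byte counters straight from the kernel
      while true; do cat /sys/class/net/eth0/statistics/tx_bytes; sleep 1; done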

    Read the article

  • Access VirtualBox client (WinXP) from host (Linux) when client is connected to VPN

    - by hsz
    Hello ! I have a host (Ubuntu Linux) with VirtualBox on which is installed client (Windows XP). I set bridge connection for them. Host has IP 192.168.0.102 and client 192.168.0.103. On client I've installed WAMP server and on host I can access it by simply call 192.168.0.103. When I connect on client to the Cisco VPN (need access to database over VPN) I cannot access that server from host. What should I do to make it work ?

    Read the article

  • Can't `mount -o loop` an ISO from an NFS share (RHEL)

    - by warren
    I have a large NFS share with a variety of software ISOs on it. I've only tried this on Red Hat Enterprise Linux, but when trying to do the following, the mount comes back with an error indicating no permission to mount. Why would this be happening? The NFS share is mounted thusly:

      mediaserver:path/to/isos /media/nfs

    This is the mount call that fails:

      mount -o loop /media/nfs/product.iso /tmp/product

    If I copy the ISO locally first, there is no issue. The NFS share is mounted rw. How can I loop mount the ISO from the NFS share without copying it first?
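
    A hedged thing to check: loop mounting is done by root on the client, and if the export squashes root (the default root_squash), root on the client is mapped to nobody and may not be able to read the ISO even though ordinary users can. A sketch, assuming root squashing turns out to be the cause; the client hostname is a placeholder:

      # Quick test on the client: can root even read the file?
      sudo head -c 512 /media/nfs/product.iso > /dev/null

      # If not, either make the ISOs world-readable on the server, or relax
      # squashing for this one client in /etc/exports on the server:
      #   path/to/isos  rhelclient.example.com(ro,no_root_squash)
      # then re-export:
      exportfs -ra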

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor NFS performance between two Xen virtual machines (client and server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily used servers. What I find in my current configuration:

      RPCNFSDCOUNT=8 (default): 13.5-30 seconds to cat a 1GB file on the client, so 35-80MB/sec
      RPCNFSDCOUNT=16: 18s to cat the file, 60MB/s
      RPCNFSDCOUNT=1:  8-9 seconds to cat the file (!!?!), 125MB/s
      RPCNFSDCOUNT=2:  87s to cat the file, 12MB/s

    I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI passthrough; on the server I can cat the file in a matter of seconds (250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread, as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect:

      - increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K and 1M from the defaults of 192K and 256K
      - increasing the values of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K
      - mounting with the client options rsize=32768, wsize=32768

    From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes), but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices, /dev/sda and /dev/sdb; dmraid then picks up a fakeRAID-0 striped across them, which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.
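
    For anyone reproducing these numbers, a hedged sketch of the test loop (the mount point and file name are placeholders); on Ubuntu the thread count normally lives in /etc/default/nfs-kernel-server as RPCNFSDCOUNT, but it can also be changed at runtime:

      # On the server: change the number of nfsd kernel threads on the fly
      sudo rpc.nfsd 1          # or 2, 8, 16 ...

      # On the client: drop the page cache so the read really goes over NFS,
      # then time a sequential read of the 1GB test file
      sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
      time cat /mnt/nfs/testfile-1G > /dev/null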

    Read the article

  • nfs mountpoint named ``share'' breaks ls and man

    - by freddyb
    I mounted an NFS server to ~/share. This works fine as long as I'm at home, where the NFS share is in reach. Whenever I'm not, this seems to break access to all manpages: using man (or ls in my home directory) waits forever, and checking with strace reveals that they try to access the folder called share. Unmounting fails too, even with -l (lazy) and -f (force). I am asking for three things here:
    1. Is ``share'' a magic name? Does something like MANPATH exist that I should avoid?
    2. How do I unmount without rebooting? (I have already commented the share out in fstab.)
    3. What would you suggest I do to have network/position-based mounting of NFS shares?
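
    Two hedged sketches, one for getting unstuck and one for the position-based mounting part: a lazy force detach sometimes succeeds where a plain umount hangs, and an automounter mounts the share only on demand and expires it when idle, so nothing blocks when the server is unreachable. The map paths below are placeholders:

      # Detach the dead mount point even while processes still reference it
      sudo umount -f -l ~/share

      # autofs approach: mount on demand, expire after 60s of inactivity
      # /etc/auto.master:
      #   /-   /etc/auto.nfs   --timeout=60
      # /etc/auto.nfs:
      #   /home/user/share  -fstype=nfs,soft,intr  homeserver:/export/share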

    Read the article

  • How do I configure a C# web service client to send HTTP request header and body in parallel?

    - by Christopher
    Hi, I am using a traditional C# web service client generated in VS2008 / .NET 3.5, inheriting from SoapHttpClientProtocol. It connects to a remote web service written in Java. All configuration is done in code during client initialization, and can be seen below:

      ServicePointManager.Expect100Continue = false;
      ServicePointManager.DefaultConnectionLimit = 10;
      var client = new APIService
      {
          EnableDecompression = true,
          Url = _url + "?guid=" + Guid.NewGuid(),
          Credentials = new NetworkCredential(user, password, null),
          PreAuthenticate = true,
          Timeout = 5000 // 5 sec
      };

    It all works fine, but the time taken to execute the simplest method call is almost double the network ping time, whereas a Java test client takes roughly the same as the network ping time:

      C# client    ~ 550ms
      Java client  ~ 340ms
      Network ping ~ 300ms

    After analyzing the TCP traffic for a session, I discovered the following. The C# client sent TCP packets in this sequence:

      1. Client sends HTTP headers in one packet.
      2. Client waits for TCP ACK from the server.
      3. Client sends HTTP body in one packet.
      4. Client waits for TCP ACK from the server.

    The Java client sent TCP packets in this sequence:

      1. Client sends HTTP headers in one packet.
      2. Client sends HTTP body in one packet.
      3. Client receives ACK for the first packet.
      4. Client receives ACK for the second packet.

    Is there any way to configure the C# web service client to send the header and body in parallel, as the Java client appears to? Any help or pointers much appreciated.

    Read the article

  • Proxied calls not working as expected

    - by AndyH
    I have been modifying an application to have a cleaner client/server split to allow for load splitting and resource sharing, etc. Everything is written to an interface, so it was easy to add a remoting layer to the interface using a proxy, and everything worked fine. The next phase was to add a caching layer to the interface; again this worked fine and speed was improved, but not as much as I would have expected. On inspection it became very clear what was going on. I feel sure that this behavior has been seen many times before and there is probably a design pattern to solve the problem, but it eludes me and I'm not even sure how to describe it. It is easiest explained with an example. Let's imagine the interface is:

      interface IMyCode {
          List<IThing> getLots( List<String> );
          IThing getOne( String id );
      }

    The getLots() method calls getOne() and fills up the list before returning. The interface is implemented at the client, which is proxied to a remoting client, which then calls the remoting server, which in turn calls the implementation at the server. At the client and the server layers there is also a cache. So we have:

      Client interface | Client cache | Remote client | Remote server | Server cache | Server interface

    If we call getOne("A") at the client interface, the call is passed to the client cache, which faults. This then calls the remote client, which passes the call to the remote server. This then calls the server cache, which also faults, and so the call is eventually passed to the server interface, which actually gets the IThing. In turn the server cache is filled, and finally the client cache also. If getOne("A") is again called at the client interface, the client cache has the data and it gets returned immediately. If a second client called getOne("B"), it would fill the server cache with "B" as well as its own client cache. Then, when the first client calls getOne("B"), the client cache faults but the server cache has the data. This is all as one would expect and works well.

    Now let's call getLots( [ "C", "D" ] ). This works as you would expect by calling getOne() twice, but there is a subtlety here: the call to getLots() cannot directly make use of the cache. The sequence is to call the client interface, which in turn calls the remote client, then the remote server and eventually the server interface. This then calls getOne() to fill the list before returning. The problem is that the getOne() calls are being satisfied at the server when ideally they should be satisfied at the client. If you imagine that the client/server link is really slow, it becomes clear why the client call is more efficient than the server call once the client cache has the data. This example is contrived to illustrate the point. The more general problem is that you cannot just keep adding proxied layers to an interface and expect it to work as you would imagine: as soon as the call goes 'through' the proxy, any subsequent calls are on the proxied side rather than the 'self' side. Have I failed to learn or not learned something correctly? All this is implemented in Java and I haven't used EJBs.

    It seems that the example may be confusing. The problem is nothing to do with cache efficiencies; it is more to do with an illusion created by the use of proxies, or AOP techniques in general. When you have an object whose class implements an interface, there is an assumption that a call on that object might make further calls on that same object. For example:

      public String getInternalString() {
          return InetAddress.getLocalHost().toString();
      }
      public String getString() {
          return getInternalString();
      }

    If you get an object and call getString(), the result depends on where the code is running. If you add a remoting proxy to the class, the result could be different for calls to getString() and getInternalString() on the same object, because the initial call gets 'deproxied' before the actual method is called. I find this not only confusing, but I wonder how I can control this behavior, especially as the proxy may be applied by a third party. The concept is fine but the practice is certainly not what I expected. Have I missed the point somewhere?

    Read the article

  • Node.js Lockstep Multiplayer Architecture

    - by Wakaka
    Background I'm using the lockstep model for a multiplayer Node.js/Socket.IO game in a client-server architecture. User input (mouse or keypress) is parsed into commands like 'attack' and 'move' on the client, which are sent to the server and scheduled to be executed on a certain tick. This is in contrast to sending state data to clients, which I don't wish to use due to bandwidth issues. Each tick, the server will send the list of commands on that tick (possibly empty) to each client. The server and all clients will then process the commands and simulate that tick in exactly the same way. With Node.js this is actually quite simple due to possibility of code sharing between server and client. I'll just put the deterministic simulator in the /shared folder which can be run by both server and client. The server simulation is required so that there is an authoritative version of the simulation which clients cannot alter. Problem Now, the game has many entity classes, like Unit, Item, Tree etc. Entities are created in the simulator. However, for each class, it has some methods that are shared and some that are client-specific. For instance, the Unit class has addHp method which is shared. It also has methods like getSprite (gets the image of the entity), isVisible (checks if unit can be seen by the client), onDeathInClient (does a bunch of stuff when it dies only on the client like adding announcements) and isMyUnit (quick function to check if the client owns the unit). Up till now, I have been piling all the client functions into the shared Unit class, and adding a this.game.isServer() check when necessary. For instance, when the unit dies, it will call if (!this.game.isServer()) { this.onDeathInClient(); }. This approach has worked pretty fine so far, in terms of functionality. But as the codebase grew bigger, this style of coding seems a little strange. Firstly, the client code is clearly not shared, and yet is placed under the /shared folder. Secondly, client-specific variables for each entity are also instantiated on the server entity (like unit.sprite) and can run into problems when the server cannot instantiate the variable (it doesn't have Image class like on browsers). So my question is, is there a better way to organize the client code, or is this a common way of doing things for lockstep multiplayer games? I can think of a possible workaround, but it does have its own problems. Possible workaround (with problems) I could use Javascript mixins that are only added when in a browser. Thus, in the /shared/unit.js file in the /shared folder, I would have this code at the end: if (typeof exports !== 'undefined') module.exports = Unit; else mixin(Unit, LocalUnit); Then I would have /client/localunit.js store an object LocalUnit of client-side methods for Unit. Now, I already have a publish-subscribe system in place for events in the simulator. To remove the this.game.isServer() checks, I could publish entity-specific events whenever I want the client to do something. For instance, I would do this.publish('Death') in /shared/unit.js and do this.subscribe('Death', this.onDeathInClient) in /client/localunit.js. But this would make the simulator's event listeners list on the server and the client different. Now if I want to clear all subscribed events only from the shared simulator, I can't. 
Of course, it is possible to create two event subscription systems - one client-specific and one shared - but now the publish() method would have to do if (!this.game.isServer()) { this.publishOnClient(event); }. All in all, the workaround off the top of my head seems pretty complicated for something as simple as separating the client and shared code. Thus, I wonder if there is an established and simpler method for better code organization, hopefully specific to Node.js games.

    Read the article

  • Do you charge a client for email and chat communication as a freelancer? [closed]

    - by skyork
    For a project that is billed by the hour, should a freelancer charge the client for the amount of time he/she spends on email/chat correspondence? For example, the client sends an email to the freelancer, outlining the requirements. Should the freelancer charge the client for the time during which he/she reads the email and writes a reply? The same goes for chat conversations for clarifying the requirements. In particular, if the freelancer's English is not very good, so that he/she spends extra time on understanding what the client wants and explaining him/herself (e.g. copying and pasting into Google Translate), should such time be charged to the client too?

    Read the article

  • How to synchronize a whole Ubuntu?

    - by Avio
    I think that the time is ripe to have my whole Ubuntu synchronized just as my Dropbox folder is. Given that we are always talking about files and directories, what's the difference between my Documents folder and my /usr system directory? Almost none, except for their location. In fact, I think that there is just one big issue that prevents people from having their beloved installations mirrored wherever they go: symlinks. Dropbox, Google Drive, Ubuntu One, SugarSync, SkyDrive: none of these services support symlinking. This means that if I push a symlink into one of the synced folders, locally the symlink is kept as is, but remotely (in the cloud or on the other synced machines) the symlink is resolved to the actual file that was originally pointed to. This completely disrupts Linux installations, so these services can't be used for this purpose. So the question is: does anybody know a way to achieve this? A whole Ubuntu, always synchronized with a remote running copy, but still locally stored on both disks? My best guess is that I could use NFS. But the main difference between Dropbox and NFS is that NFS is a remote filesystem that always forces files to be accessed remotely, while Dropbox pushes modifications to local filesystems (and thus would perform better). I've also heard about NFS caching. Does anybody know if that could approximate Dropbox in this sense? P.S. I know that /boot, /dev, /proc, /run, /tmp and device-specific mountpoints in /mnt and /media will have to be left out of the sync mechanism. What I'm interested in is the principle. Can this be done with reasonable performance, given reasonable resources (e.g. ~1Mbps upload bandwidth and a public IP address)?
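
    Not an answer to the symlink question, but for the "mirrored installation" part, a hedged sketch of what a periodic one-way sync of a whole system tends to look like with rsync, which does preserve symlinks; the destination host and path are placeholders, and the exclude list matches the directories already mentioned above. Note this is one-way replication, not the bidirectional Dropbox-style sync being asked about:

      # -a archive, -H hard links, -A ACLs, -X xattrs, -x stay on one filesystem
      sudo rsync -aHAXx --delete \
          --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
          / user@remotehost:/backups/this-machine/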

    Read the article

  • [tcp] :/: RPCPROG_NFS: RPC: Program not registered

    - by frankcheong
    I tried to share the root / from a Fedora 9 box to a FreeBSD box, but when I tried to mount the / folder it complained with "[tcp] nfs_server:/: RPCPROG_NFS: RPC: Program not registered". I followed the steps below to set up the Fedora NFS server. Add the line below to /etc/exports:

      / nfs_client(rw,no_root_squash,sync)

    Restart the NFS-related services:

      service portmapper restart
      service nfslock restart
      service nfs restart

    Export the filesystem:

      exportfs -arv

    On the NFS client, I troubleshot using the commands below:

      rpcinfo -p nfs_server
         program vers proto   port  service
          100000    2   tcp    111  rpcbind
          100000    2   udp    111  rpcbind
          100024    1   udp  32816  status
          100024    1   tcp  34173  status
          100011    1   udp    817  rquotad
          100011    2   udp    817  rquotad
          100011    1   tcp    820  rquotad
          100011    2   tcp    820  rquotad
          100003    2   udp   2049  nfs
          100003    3   udp   2049  nfs
          100021    1   udp  32818  nlockmgr
          100021    3   udp  32818  nlockmgr
          100021    4   udp  32818  nlockmgr
          100005    1   udp  32819  mountd
          100005    1   tcp  34174  mountd
          100005    2   udp  32819  mountd
          100005    2   tcp  34174  mountd
          100005    3   udp  32819  mountd
          100005    3   tcp  34174  mountd

      showmount -e nfs_client
      Exports list on nfs_server:
      /                 nfs_client

    What else did I miss?
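
    A hedged observation rather than a definitive fix: the rpcinfo output above only lists the nfs program (100003) over UDP, while the error message mentions [tcp], so it may be worth probing explicitly whether the server answers NFS over TCP and, if it does not, forcing a UDP or explicit NFSv3 mount from the FreeBSD side. A sketch (nfs_server is the placeholder name already used above):

      # From the FreeBSD client: is the NFS program registered over TCP and UDP?
      rpcinfo -t nfs_server nfs 3     # TCP probe
      rpcinfo -u nfs_server nfs 3     # UDP probe

      # If only UDP responds, try forcing a UDP, explicitly NFSv3, mount:
      mount -t nfs -o nfsv3,udp nfs_server:/ /mnt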

    Read the article

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution, but I keep getting the following errors showing up in my error log:

      [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
      [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
      [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar
      [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
      [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php
      [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
      [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/
      [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
      [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
      [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar

    I don't really want these errors to show up, but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration:

      <VirtualHost *:80>
        DocumentRoot /home/www/default
        <Directory />
          AllowOverride None
          Order Deny,Allow
          Deny from all
        </Directory>
        #ErrorLog /var/log/apache2/error.log
        #LogLevel warn
        CustomLog /var/log/apache2/access.log combined
      </VirtualHost>
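
    A blunt, hedged sketch rather than a recommendation: these messages are logged at the error level, so raising the vhost's LogLevel above error (or pointing its ErrorLog elsewhere) hides them, at the cost of also hiding every other error-level message for that vhost:

      # Inside the <VirtualHost *:80> block shown above:
      ErrorLog /var/log/apache2/error.log
      # Only log crit and worse; the routine "client denied" messages are level error
      LogLevel crit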

    Read the article

  • Server load high, CPU idle. NFS the cause?

    - by Mech Software
    I am running into a scenario where I'm seeing a high server load (sometimes upwards of 20 or 30) and very low CPU usage (98% idle). I'm wondering if these wait states are coming as part of an NFS filesystem connection. Here is what I see in vmstat:

      procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
       r  b   swpd    free  buff  cache   si   so    bi    bo   in    cs us sy  id wa st
       2  1      0 1298784     0      0    0    0    16     5    0     9  1  1  97  2  0
       0  1      0 1308016     0      0    0    0     0     0    0  3882  4  3  80 13  0
       0  1      0 1307960     0      0    0    0   120     0    0  2960  0  0  88 12  0
       0  1      0 1295868     0      0    0    0     4     0    0  4235  1  2  84 13  0
       6  0      0 1292740     0      0    0    0     0     0    0  5003  1  1  98  0  0
       4  0      0 1300860     0      0    0    0     0   120    0 11194  4  3  93  0  0
       4  1      0 1304576     0      0    0    0   240     0    0 11259  4  3  88  6  0
       3  1      0 1298952     0      0    0    0     0     0    0  9268  7  5  70 19  0
       3  1      0 1303740     0      0    0    0    88     8    0  8088  4  3  81 13  0
       5  0      0 1304052     0      0    0    0     0     0    0  6348  4  4  93  0  0
       0  0      0 1307952     0      0    0    0     0     0    0  7366  5  4  91  0  0
       0  0      0 1307744     0      0    0    0     0     0    0  3201  0  0 100  0  0
       4  0      0 1294644     0      0    0    0     0     0    0  5514  1  2  97  0  0
       3  0      0 1301272     0      0    0    0     0     0    0 11508  4  3  93  0  0
       3  0      0 1307788     0      0    0    0     0     0    0 11822  5  3  92  0  0

    From what I can tell, when the IO goes up, the waits go up. Could NFS be the cause here, or should I be worried about something else? This is a VPS box on a fibre channel SAN. I'd think the bottleneck wouldn't be the SAN. Comments?
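
    Load average counts processes in uninterruptible sleep (state D) as well as runnable ones, and tasks blocked on a slow or unresponsive NFS server sit in D state, which matches high load with an idle CPU. A hedged way to check what is actually waiting (device and mount names are whatever applies on the box):

      # Which processes are stuck in uninterruptible sleep, and on what?
      ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /D/'

      # Per-device utilisation and wait times, refreshed every second (sysstat)
      iostat -x 1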

    Read the article

  • What does a connection timeout indicate when performing an NFS mount?

    - by DeeDee
    We have a shiny new QNAP NAS (TS-879U-RP), and I'm trying to mount it to our big ol' RHEL server in the same manner as our other two QNAP NAS devices. The IT department won't give me the root privileges to the NAS, so I can't SSH in (I know, I know). The first thing I did was to, via the QNAP web admin interface, create a network share named "Runs." I then added the IP of the RHEL server to the permissions list: On the RHEL server, I then added the following line to /etc/fstab: [IP of NAS]:/Runs /mnt/gsrnas3 nfs defaults 0 0 Aside from the IP and the specific mount directory name, this is how I mounted the other two NAS devices. I then created the gsrnas3 directory under /mnt/, and then ran `mount /mnt/gsrnas3' I got the following error: mount.nfs: Connection timed out My first thought is that it's a ports issue, but I don't have enough specific experience with this issue to know for sure. I have two other NAS devices by the same manufacturer already mounted to this RHEL server, so that leads me to believe the configuration issue is on the NAS side of things. I can ping the NAS device successfully from the RHEL server. Not being able to SSH into said NAS is a huge hassle, though. Any ideas?
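
    A few hedged checks that help distinguish "blocked port" from "export not offered to this host", since a timeout (rather than an access-denied error) usually points at connectivity or a service that is not listening; the NAS IP below is a placeholder:

      # Can the RHEL server reach the NFS and rpcbind ports on the NAS at all?
      nc -zv 192.0.2.50 2049
      nc -zv 192.0.2.50 111

      # What does the NAS claim to export, and which RPC services are registered?
      showmount -e 192.0.2.50
      rpcinfo -p 192.0.2.50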

    Read the article

  • File permissions issue with an NFSv4 share, uploaded from a Mac Lion

    - by POP.sicle
    I have an NFSv4 share that was working fine, with Macs using Snow Leopard, to share files across the network. The NFS share has one cloned user/group that all clients autoconnect as. However, when I use a Lion Mac to copy a file from their user directory to the NFS, no other computer (mac SL/mac Lion/Win7) can edit/delete/write to the file that was uploaded - despite having the correct read/write/ex permissions visible on the NFS and through terminal. Attempting to edit the file permissions through Finder completely locks the file. I suspect this has something to do with Lion's ACLs (or maybe its version control) conflicting with NFSv4. Is there a way to disable or ignore extended ACLs or extended file permissions on the NFSv4 side, that would allow users to not run into this conflict? The work around currently is to use NFS Manager and set automounts to ignore ownership but installing NFS manager and configuring automounts for all of the computers seems more troubling than attempting to reconfigure the NFS settings. Advice?
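
    A hedged thing to inspect from the Lion client, since Finder copies on Lion can attach ACL entries and extended attributes that then override the plain mode bits other clients see; the file path is a placeholder:

      # Show POSIX permissions plus any ACL entries attached to the copied file
      ls -le /Volumes/nfs-share/somefile

      # Strip all ACL entries from it (leaves the normal rwx bits in place)
      chmod -N /Volumes/nfs-share/somefile

      # List extended attributes, e.g. com.apple.quarantine added on upload
      xattr -l /Volumes/nfs-share/somefile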

    Read the article

  • configuring linux console email client to check attachments

    - by Christopher
    I need to configure an IMAP4-capable (console-based) email client to:
    - check and edit the name of an attachment (does it contain umlauts? change the character ä to ae)
    - delete emails that don't fit certain requirements (not PDF, DOC, ..., not from domain xyz.com)
    Whether the client can do everything by itself or just trigger a script on incoming mail doesn't matter. Does anyone have an idea which mail client would be suitable for such a task?

    Read the article

  • Free tools for analyzing client-server communication issues

    - by roberto
    Hi, we have some issues with a client-server based application and we would like to better understand the client-server communication without going to the software company that sold the application. At the very least we would like to perform the analysis in parallel. Can you suggest a foolproof application that we can easily get and install to analyze client-server traffic? Many thanks!

    Read the article

  • NFSv4 with idmap

    - by HTF
    The following errors appear on the NFS server; could you please advise how I can fix this?

    Details: System: CentOS release 6.4, NFS: nfs-utils-1.2.3-36

      # cat /etc/idmapd.conf
      [General]
      Domain = domain.com
      [Mapping]
      Nobody-User = nobody
      Nobody-Group = nobody
      [Translation]
      Method = nsswitch

      Sep 3 08:25:28 snode1 rpc.idmapd[1382]: nss_getpwnam: name '0' does not map into domain 'domain.com'
      Sep 3 08:25:29 snode1 rpc.idmapd[1382]: nss_getpwnam: name '500' does not map into domain 'domain.com'

    EDIT (03 Sep 2013 10:41): Please note that I'm using NFSv4 and these errors appear on the NFS server only (not on the NFS clients).

    Server:

      # cat /etc/sysconfig/nfs
      MOUNTD_NFS_V2="no"
      MOUNTD_NFS_V3="no"
      ...
      RPCNFSDARGS="-N 2 -N 3"

    Clients:

      # cat /etc/fstab
      server:/ /data nfs4 defaults,hard,intr,timeo=15,_netdev,noatime,nodiratime,nosuid 0 0
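
    Reading the messages literally, rpc.idmapd was handed bare numeric strings ('0', '500') rather than user@domain names it could map. A hedged set of checks, not a definitive fix: confirm that every client has the same Domain set in idmapd.conf as the server and that rpc.idmapd is actually running on the clients as well, then restart it on both ends:

      # On the server and on each client
      grep -i '^ *Domain' /etc/idmapd.conf     # must be identical everywhere
      service rpcidmapd status
      service rpcidmapd restart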

    Read the article

  • Web Service Client in JBoss 5.1 with JDK6

    - by dcp
    This is a continuation of the question here: http://stackoverflow.com/questions/2435286/jboss-does-app-have-to-be-compiled-under-same-jdk-as-jboss-is-running-under It's different enough though that it required a new question. I am trying to use jdk6 to run JBOSS 5.1, and I downloaded the JDK6 version of JBOSS 5.1. This works fine and my EAR application deploys fine. However, when I want to run a web service client with code like this: public static void main(String[] args) throws Exception { System.out.println("creating the web service client..."); TestClient client = new TestClient("http://localhost:8080/tc_test_project-tc_test_project/TestBean?wsdl"); Test service = client.getTestPort(); System.out.println("calling service.retrieveAll() using the service client"); List<TestEntity> list = service.retrieveAll(); System.out.println("the number of elements in list retrieved using the client is " + list.size()); } I get the following exception: javax.xml.ws.WebServiceException: java.lang.UnsupportedOperationException: setProperty must be overridden by all subclasses of SOAPMessage at org.jboss.ws.core.jaxws.client.ClientImpl.handleRemoteException(ClientImpl.java:396) at org.jboss.ws.core.jaxws.client.ClientImpl.invoke(ClientImpl.java:302) at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:170) at org.jboss.ws.core.jaxws.client.ClientProxy.invoke(ClientProxy.java:150) Now, here's the really interesting part. If I change the JDK that my the code above is running under from JDK6 to JDK5, the exception above goes away! It's really strange. The only way I found for the code above to run under JDK6 was to take the JBOSS_HOME/lib/endorsed folder and copy it to JDK6_HOME/lib. This seems like it shouldn't be necessary, but it is. Is there any other way to make this work other than using the workaround I just described?
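
    For what it's worth, the copy-into-the-JDK workaround suggests the client JVM is picking up the SAAJ/JAX-WS classes bundled with JDK 6 instead of the JBossWS ones. A hedged alternative that avoids touching the JDK is to point the client JVM at JBoss's endorsed directory when launching it; the classpath entries and main class below are illustrative:

      # Run the standalone client with the JBossWS jars as endorsed libraries,
      # so they override the JAX-WS/SAAJ classes shipped with JDK 6
      java -Djava.endorsed.dirs="$JBOSS_HOME/lib/endorsed" \
           -cp "$JBOSS_HOME/client/*:myclient.jar" com.example.TestClientMain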

    Read the article

  • Wordpress content authoring tutorial for non-techie client....

    - by metal-gear-solid
    I made a website for a client in WordPress and the client will add their own content. The client doesn't know how to handle WordPress or XHTML/CSS, but he knows MS Word 2007, and he is in a remote location. Are there any easy-to-understand articles or video tutorials I can give to the client on how to understand the WordPress admin and add content/images/video using the editor? And how can I disable unneeded things in the WordPress admin for the client?

    Read the article

< Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >