Search Results

Search found 137626 results on 5506 pages for 'linked lists using c'.

  • Using Zebra LP 2844-Z over the network

    - by Jason Kealey
    Hello, I am looking for a network-capable label printer and am considering the Zebra LP-2844-Z. Unfortunately, it does not come with a network interface like the lower-end LP 2824 Plus does. The ZebraNet 10/100 Print Server is both expensive for what it does (~$600) and only seems to support wireless networking, not wired; I prefer wired for reliability. Questions: Can I use a cheaper off-the-shelf print server to turn the LP-2844-Z into a network printer? Would I run into trouble communicating with the printer via its own programming language or via OPOS (instead of the Windows driver)? Are cheaper print servers reliable? Would I be better off getting another printer model with networking built in, to avoid issues caused by the print server? If so, what other printer would you recommend?
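
    As a quick way to test such a setup: many off-the-shelf wired print servers expose a raw TCP port (usually 9100) that passes data straight through to the printer, so the printer's own language (ZPL) can be sent to it directly. A minimal sketch, with a made-up host name and label:

        # send a small ZPL test label through the print server's raw port
        printf '^XA^FO50,50^ADN,36,20^FDTest label^FS^XZ' | nc printserver.local 9100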

    Read the article

  • gpupdate failing when using Samba 4 AD DC

    - by darthfoolish
    I have a Samba 4 AD domain running with 2 DCs on CentOS 6.5, with a BIND (named) DNS backend. I have multiple Windows 7 machines joined to this domain, which is fine. However, I can't get GPOs to apply. When running gpupdate, I get the following output: "The processing of Group Policy failed. Windows attempted to read the file \\<...>\sysvol\<...>\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini" (the values between the angle brackets were stripped when posting). Obviously, you don't normally see what it's trying to connect to when it's successful, but where the first placeholder shows up I would have expected to see the domain name. So, what governs what data gets put in between those angle brackets? If it is just supposed to be the domain, then what else could be going wrong? Thanks in advance for any help.
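
    Two quick checks that narrow this down, sketched with a made-up domain name (example.lan); substitute your own:

        # the domain and its SRV records must resolve via the named backend
        host -t SRV _ldap._tcp.example.lan
        # gpupdate reads gpt.ini from the SYSVOL share; confirm it is browsable as a client would see it
        smbclient //example.lan/sysvol -U someuser -c 'dir'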

    Read the article

  • How to solve "Broken Pipe" error when using awk with head

    - by Jon
    I'm getting broken pipe errors from a command that does something like: ls -tr1 /a/path | awk -F '\n' -vpath=/prepend/path/ '{print path$1}' | head -n 50 Essentially I want to list (with absolute paths) the oldest X files in a directory. The output is correct (I get 50 file paths), but once head has printed 50 lines it exits and closes its end of the pipe, so awk throws a broken pipe error because it is still trying to write more rows.
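
    A sketch of two ways to get the same output without the error, based on the command above: either let head truncate the stream before awk runs, or have awk stop itself, so nothing keeps writing into a closed pipe:

        # head exits after 50 lines; ls dies silently on SIGPIPE and awk never sees a closed pipe
        ls -tr1 /a/path | head -n 50 | awk -F '\n' -v path=/prepend/path/ '{print path $1}'
        # or drop head entirely and let awk quit after 50 lines
        ls -tr1 /a/path | awk -F '\n' -v path=/prepend/path/ 'NR <= 50 {print path $1} NR == 50 {exit}'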

    Read the article

  • Windows Virtual Machine not seen by host (Mac OS X) using VMWare Fusion

    - by Malkuth
    Hi, I installed Windows XP under VMware Fusion on my MacBook. The internet works and Windows can ping the Mac, but from the Mac (or any other machine on the network) we cannot see the virtual machine. I use the bridged networking option and obtain the VM's IP dynamically; I also tried a static assignment from the free addresses, but the problem persisted. Any ideas what is wrong?

    Read the article

  • Using smartctl to get vendor specific Attributes from ssd drive behind a SmartArray P410 controller

    - by Lairsdragon
    Hi! Recently I have deployed some HP servers with SSDs behind a SmartArray P410 controller. While not officially supported by HP, the servers work well so far. Now I would like to get wear-level info, error statistics, etc. from the drives. The SA P410 supports a pass-through of the SMART command to a single drive in the array, but I was not able to get the interesting values out of the output. In this case the wear level indicator (attribute ID 233) is of particular interest to me, but it is only present if the drive is directly attached to a SATA controller. smartctl on directly connected ssd: # smartctl -A /dev/sda smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 5 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 3 Spin_Up_Time 0x0000 100 000 000 Old_age Offline In_the_past 0 4 Start_Stop_Count 0x0000 100 000 000 Old_age Offline In_the_past 0 5 Reallocated_Sector_Ct 0x0002 100 100 000 Old_age Always - 0 9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 8561 12 Power_Cycle_Count 0x0002 100 100 000 Old_age Always - 55 192 Power-Off_Retract_Count 0x0002 100 100 000 Old_age Always - 29 232 Unknown_Attribute 0x0003 100 100 010 Pre-fail Always - 0 233 Unknown_Attribute 0x0002 088 088 000 Old_age Always - 0 225 Load_Cycle_Count 0x0000 198 198 000 Old_age Offline - 508509 226 Load-in_Time 0x0002 255 000 000 Old_age Always In_the_past 0 227 Torq-amp_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 228 Power-off_Retract_Count 0x0002 000 000 000 Old_age Always FAILING_NOW 0 smartctl on P410 connected ssd: # ./smartctl -A -d cciss,0 /dev/cciss/c1d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net (Right, it is completely empty) smartctl on P410 connected hdd: # ./smartctl -A -d cciss,0 /dev/cciss/c0d0 smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Current Drive Temperature: 27 C Drive Trip Temperature: 68 C Vendor (Seagate) cache information Blocks sent to initiator = 1871654030 Blocks received from initiator = 1360012929 Blocks read from cache and sent to initiator = 2178203797 Number of read and write commands whose size <= segment size = 46052239 Number of read and write commands whose size > segment size = 0 Vendor (Seagate/Hitachi) factory information number of hours powered up = 3363.25 number of minutes until next internal SMART test = 12 Am I hunting a bug here, or is this a limitation of the P410's SMART command pass-through?

    Read the article

  • Software to aid using camera as a scanner

    - by xxzoid
    I want to digitize some index cards with my camera. I'm looking for a program that automatically fixes the geometry of the shots (as you would expect, the cards come out tilted in the pictures). I have an app (Droid Scan Lite) on my Android phone that does exactly that, but I would prefer to do it on my PC (the phone camera has poor quality, is slow, and focuses badly, while I have a decent SLR). Open source is an advantage; cross-platform, even more so.
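
    As a rough command-line sketch of the batch-correction part (ImageMagick is open source and cross-platform; note that -deskew only corrects rotation, not full perspective distortion, and the filenames are placeholders):

        # straighten one shot, trim away the background and reset the canvas
        convert IMG_0001.jpg -deskew 40% -trim +repage card_0001.png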

    Read the article

  • How to batch rename files using bash

    - by Alex Popov
    I know there are lots of such questions, but I couldn't find one (or a combination of several), which describes the things I want to do. I think I need to use regular expressions, but I am not very good with that. I use zsh. I have a folder with files, which I want to rename: I want the files challenge1.rb, challenge2.rb, challenge3.rb, etc. to be renamed to c1.rb, c2.rb etc. Similarly task1.rb and similar must be renamed to t1.rb etc. sample_spec_c1.rb, sample_spec_c2.rb etc. must be renamed to c1_spec.rb, c2_spec.rb etc. So I guess I need some combination of regular expressions and iteration, but I don't know how to write the bash script.
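
    A sketch of one way to do it with plain shell loops and parameter expansion (works in zsh and bash), assuming the filenames follow exactly the patterns described:

        for f in challenge*.rb; do mv "$f" "c${f#challenge}"; done                              # challenge1.rb -> c1.rb
        for f in task*.rb; do mv "$f" "t${f#task}"; done                                        # task1.rb -> t1.rb
        for f in sample_spec_c*.rb; do n=${f#sample_spec_}; mv "$f" "${n%.rb}_spec.rb"; done    # sample_spec_c1.rb -> c1_spec.rb

    Since you use zsh, its bundled zmv function (autoload -U zmv) can express the same renames as glob-pattern pairs.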

    Read the article

  • Nginx $scheme doesn't always work while using SSL for one specific page

    - by jjiceman
    I read and followed this question in order to configure nginx to force SSL for one page (admin.php for XenForo), and it is working well for a few of the site administrators but not for me. I was wondering if anyone has any advice on how to improve this configuration: ... ssl_certificate example.net.crt; ssl_certificate_key example.key; server { listen 80 default; listen 443 ssl; server_name www.example.net example.net; access_log /srv/www/example.net/logs/access.log; error_log /srv/www/example.net/logs/error.log; root /srv/www/example.net/public_html; index index.php index.html; location / { if ( $scheme = https ){ rewrite ^ http://example.net$request_uri? permanent; } try_files $uri $uri/ /index.php?$uri&$args; index index.php index.html; } location ^~ /admin.php { if ( $scheme = http ) { rewrite ^ https://example.net$request_uri? permanent; } try_files $uri /index.php; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS on; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; } } ... It seems that the extra information in the location ^~ /admin.php block is unnecessary; does anyone know of an easy way to avoid duplicate code? Without it, requests skip the PHP location block and the raw .php files are just returned. Currently it applies https correctly in Firefox when I navigate to admin.php. In Chrome, it downloads the admin.php page. When returning to the non-https website in Firefox, it does not correctly return to http but stays as SSL. Like I said earlier, this only happens for me; the other admins can go back and forth without a problem. Is this an issue on my end that I can fix? And does anyone know of any ways I could reduce duplicate configuration options in the configuration? Thanks in advance!

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over. The problem with locks There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series): Locks block threads The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. It sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency. Locks are slow Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance. Deadlocks When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see. So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection. Implementation As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method: private void GetBucketAndLockNo( int hashcode, out int bucketNo, out int lockNo, int bucketCount) { // the bucket number is the hashcode (without the initial sign bit) // modulo the number of buckets bucketNo = (hashcode & 0x7fffffff) % bucketCount; // and the lock number is the bucket number modulo the number of locks lockNo = bucketNo % m_locks.Length; } However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array.
Instead, ConcurrentDictionary uses a strict linked list on each bucket: This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets. Modifying the dictionary All the operations on the dictionary follow the same basic pattern: void AlterBucket(TKey key, ...) { int bucketNo, lockNo; 1: GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length); 2: lock (m_locks[lockNo]) { 3: Node headNode = m_buckets[bucketNo]; 4: Mutate the node linked list as appropriate } } For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern. Performance issues There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of deadlock: two threads each holding a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety. Consequences of growing the array Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method.
The operation of this method can be boiled down to: private void GrowTable(Node[] buckets) { try { 1: Acquire first lock in the locks array // this causes any other thread trying to take out // all the locks to block because the first lock in the array // is always the one taken out first // check if another thread has already resized the buckets array // while we were waiting to acquire the first lock 2: if (buckets != m_buckets) return; 3: Calculate the new size of the backing array 4: Node[] array = new array[size]; 5: Acquire all the remaining locks 6: Re-hash the contents of the existing buckets into array 7: m_buckets = array; } finally { 8: Release all locks } } As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime), when the second thread is unblocked it'll see that the array has already been resized & exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation: Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8). Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here: void AlterBucket(TKey key, ...) { while (true) { Node[] buckets = m_buckets; int bucketNo, lockNo; GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, buckets.Length); lock (m_locks[lockNo]) { // if the state has changed, go back to the start if (buckets != m_buckets) continue; Node headNode = m_buckets[bucketNo]; Mutate the node linked list as appropriate } break; } } TryGetValue and GetEnumerator And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). 
This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection temporarily removed from the linked list, or reading a node that has had its key & value removed before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary: Lockless: TryGetValue GetEnumerator The indexer getter ContainsKey Takes out every lock (lockfull?): Count IsEmpty Keys Values CopyTo ToArray Concurrent principles That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming: Partitioning When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently to other partitions. Ordered lock-taking When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks. Lockless reads Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection. That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!

    Read the article

  • Spam daily report on servers using sendmail

    - by Simone Magnaschi
    Hello, we'd like to implement a daily spam report for our users, like DreamHost does. Basically we need to send a daily mail (at midnight, for example) informing each user of all the emails currently in their spam folder, with the related scores, so they can check right away whether there is a false positive. We use a basic sendmail server with procmail to redirect spam to the spam folder in each home directory. Do you know of a Perl script or some other tool that does just that? Thank you very much.
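
    As a rough sketch of the kind of cron job that could produce such a report — it assumes spam lands in an mbox file at ~/mail/spam and that SpamAssassin-style X-Spam-Score headers are present; adjust the paths and header names to your setup:

        #!/bin/sh
        # mail each user the Subject and score lines currently sitting in their spam mbox
        for home in /home/*; do
            user=$(basename "$home")
            spambox="$home/mail/spam"
            [ -s "$spambox" ] || continue
            { echo "Messages currently in your spam folder:"; echo
              grep -E '^(Subject|X-Spam-Score):' "$spambox"
            } | mail -s "Daily spam report" "$user"
        done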

    Read the article

  • Active Directory LDAP and user issues (using apache2 for svn access)

    - by CaCl
    I currently have a setup where I work that lets users use their active directory domain logins and passwords to authenticate and authorize access to Subversion. Currently I need to allow application accounts the same access. So our IT group creates application accounts in the active directory for us to use. But they want to be "secure" so they set the "Workstations Allowed" to be only a limited number of workstations. So when an application account hits the apache2 server for authentication they can't login for some reason and I'm having a heck of a time trying to debug. The error logs only show me: [Tue Apr 06 11:24:25 2010] [warn] [client 24.24.24.24] [3469] auth_ldap authenticate: user appuser13 authentication failed; URI /svn [ldap_simple_bind_s() to check user credentials failed][Invalid credentials] [Tue Apr 06 11:24:25 2010] [error] [client 24.24.24.24] user appuser13: authentication failure for "/svn": Password Mismatch I've checked the password numerous times and it appears to be correct but I can't seem to get the user to authenticate properly. Below is a snippet of the apache configuration for ldap: # Auth providers # Active Directory <AuthnProviderAlias ldap ldap1> AuthBasicProvider ldap AuthLDAPURL "ldap://dmain.company.com:389/dc=dmain,dc=company,dc=com?sAMAccountName?sub?(objectClass=*)" AuthLDAPBindDN "CN=svnuser13,OU=Application Accounts,dc=dmain,dc=teradata,dc=com" AuthLDAPBindPassword secret3 </AuthnProviderAlias> # Another set of users from a different group <AuthnProviderAlias ldap ldap2> AuthBasicProvider ldap AuthLDAPURL ldap://diffldapserver:389/dc=specialusers,dc=com?uid </AuthnProviderAlias> # Another set of users from a different group <AuthnProviderAlias file file1> AuthUserFile /var/svn/auth/htpasswd </AuthnProviderAlias> <Location /svn> DAV svn SVNPath /var/svn Satisfy Any Require valid-user AuthType Basic AuthName "SVN Repository" AuthBasicProvider ldap1 file1 ldap2 AuthzSVNAccessFile /var/svn/auth/access AuthzLDAPAuthoritative on Require valid-user </Location> Any help, like tips for debugging is appreciated!
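
    One way to take Apache out of the picture while debugging (a sketch that reuses the bind DN, password and URL from the config above — note that the bind DN and the AuthLDAPURL reference different base domains, which may itself be worth checking): query the application account directly and inspect its userWorkstations attribute, which is where the "Workstations Allowed" restriction is stored:

        ldapsearch -x -H ldap://dmain.company.com:389 \
          -D "CN=svnuser13,OU=Application Accounts,DC=dmain,DC=teradata,DC=com" -w secret3 \
          -b "dc=dmain,dc=company,dc=com" "(sAMAccountName=appuser13)" dn userWorkstations userAccountControl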

    Read the article

  • SVN Backup using rsync command

    - by user37143
    Hi, I set up a cron job to back up my SVN repo (8 GB) to another server. But sometimes I get errors, and I feel that this is not the proper way to back up an SVN repository to a remote server. I used the command rsync -avz myrepo. Please suggest a good way to do SVN backups to a remote server. I cannot zip the files and transfer them daily since it's 7 GB. Thanks
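
    One common pattern (a sketch; the paths are placeholders, and older svnadmin versions require the hotcopy destination to be removed first): take a consistent copy with svnadmin, then rsync that copy, so the live repository is never transferred mid-commit and only changed files cross the network:

        svnadmin hotcopy /var/svn/myrepo /var/backups/myrepo-hotcopy                       # consistent snapshot
        rsync -avz --delete /var/backups/myrepo-hotcopy/ user@backuphost:/backup/myrepo/   # incremental transfer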

    Read the article

  • Most transparent way to connect two LANS using a WET610N Wireless Bridge

    - by Spencer Ruport
    I have two wired systems hooked to a Linksys WRT54GL wired/wireless router which is also hooked to my internet. I'll refer to this as LAN1. I have two more systems in another room that are connected wirelessly. Recently I decided I would much rather have another wired LAN in the other room and use a bridge to connect them. This would be LAN2. Prior to hooking up the device I assumed that the ethernet side of the bridge would have a DHCP server so that I could simply hook it up to a switch and I'd be on my way. However that isn't the case which leads me to believe I'll have to add one to LAN2 correct? Or is there some way to have the DHCP from LAN1 also hand out IP addresses to LAN2? If I do need a DHCP device on LAN2 what would be best? Another hardware device or should I just install some DHCP software on one of the systems (since they're both on 24/7 anyway). Any recommendations would be appreciated. :)

    Read the article

  • Cannot access host from a virtualbox guest using bridged adapter

    - by David Dai
    I have a windows 7 host with firewall turned off. And I have a windowsXP guest running on Virtualbox 4.2.4r81684. In my windowsXP guest I tried to connect to the FTP server on my host machine(which used to work well) but it didn't work. I tried to ping my host machine, but it didn't work either. Then I tried to ping my guest from host, it worked well. my guest ip is :192.168.1.95 my host ip is : 192.168.1.9 route table on guest machine is this: C:\Documents and Settings\wenlong>route PRINT =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x2 ...08 00 27 66 54 6c ...... AMD PCNET Family PCI Ethernet Adapter #2 - Packe t Scheduler Miniport =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.95 20 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.1.0 255.255.255.0 192.168.1.95 192.168.1.95 20 192.168.1.95 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.1.255 255.255.255.255 192.168.1.95 192.168.1.95 20 224.0.0.0 240.0.0.0 192.168.1.95 192.168.1.95 20 255.255.255.255 255.255.255.255 192.168.1.95 192.168.1.95 1 Default Gateway: 192.168.1.1 =========================================================================== Persistent Routes: None arp cache is this: C:\Documents and Settings\wenlong>arp -a Interface: 192.168.1.95 --- 0x2 Internet Address Physical Address Type 192.168.1.1 00-26-f2-60-3c-04 dynamic 192.168.1.9 90-e6-ba-c2-90-2f dynamic It's strange because there was no problem days before and I didn't make any changes to the setting. could anybody help? PS. the guest can communicate with other machines in the LAN(for example 192.168.1.114) ok. it just cannot connect to the host machine.

    Read the article

  • Using QoS to prioritize IP addresses

    - by Tristan
    I have a Western Digital N900 router. I was hoping I'd be able to throttle users based on their MAC address with it, which sadly isn't possible. Seems simple in principle though, duh. The battle against bandwidth-hogging roommates rages on. Could I just set the local IP range to their IP, and then set the local port range to every single port in existence? Then prioritize their IP lower than mine? Will this work? What are all the ports? And what's the difference between Local and Remote IPs or Ports? Name: Roommate, Priority: Low, Protocol: TCP or UDP ??, Local IP Range: .101 to .101, Local Port Range: 0 to infinity, Remote IP Range: ? to ?, Remote Port Range: ? to ?

    Read the article

  • rsync to multiple destinations using same filelist?

    - by Dylan B.
    I'm wondering if it's possible for rsync to copy one directory to multiple remote destinations all in one go, or even in parallel. (not necessary, but would be useful.) Normally, something like the following would work just fine: $ rsync -Pav /junk user@host1:/backup $ rsync -Pav /junk user@host2:/backup $ rsync -Pav /junk user@host3:/backup And if that's the only option, I'll use that. However, /junk is located on a slow drive with quite a few files, and rebuilding the filelist of some ~12,000 files each time is agonizingly slow (~5 minutes) compared to the actual transfer/updating. Is it possible to do something like this, to accomplish the same thing: $ rsync -Pav /junk user@host1:/backup user@host2:/backup user@host3:/backup Thanks for looking!
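
    rsync's batch mode is aimed at roughly this situation: the file list is built once while updating the first host, and the recorded changes are replayed on the others. A sketch, assuming all three destinations start out in the same state (see --write-batch/--read-batch in the rsync man page for the caveats):

        rsync -Pav --write-batch=/tmp/junk.batch /junk user@host1:/backup       # scans /junk once
        ssh user@host2 rsync -a --read-batch=- /backup < /tmp/junk.batch        # replays without rescanning /junk
        ssh user@host3 rsync -a --read-batch=- /backup < /tmp/junk.batch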

    Read the article

  • How do I remove initial indents on numbered lists?

    - by Peter
    In Word 2007 I want all numbers in a numbered list to be down the LH margin in line with the paragraphs. When a numbered list is selected, the numbers 1,2,3 are indented by a default 0.63cm. Ctrl-Shift-M will shift this indent back to the left margin. How do I permanently remove that initial indent and save that change to the normal template so that all new documents have zero indent on a newly inserted numbered list? (Same issue in Word 2010)

    Read the article

  • Using NDMP as an alternative to CIFS mount

    - by user138922
    I have a weird but interesting use case. I use CIFS to mount shares from a file server (NetApp, EMC, etc.) on an application server (a Windows/Linux server where my application runs). My application needs to process each of the files from the shares that I mount via CIFS. My application also needs access to the metadata of these files, such as name, size, and ACLs. I would like to see if I can achieve the same via NDMP. I have some very basic questions regarding this use case, and it would be great if you could help me out here. Is this even achievable? Can I just transfer the shares that interest me instead of the entire volume?

    Read the article

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error: squidclient http://www.apple.com/ > test client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused Here is my config: visible_hostname none cache_effective_user proxy cache_effective_group proxy cache_dir ufs /var/spool/squid 2048 16 256 cache_mem 512 MB cache_access_log /var/log/squid/access.log emulate_httpd_log on strip_query_terms off read_ahead_gap 128 Kb collapsed_forwarding on refresh_stale_hit 30 seconds retry_on_error on maximum_object_size_in_memory 1 MB acl all src 0.0.0.0/0.0.0.0 acl purgehosts src 127.0.0.1/255.255.255.255 # Caching static objects in __data is important. # Without that, apache processes sit around spooling static objects. acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge acl PURGE method PURGE acl POST method POST cache deny QUERY cache deny POST http_access allow PURGE purgehosts http_access deny PURGE http_access allow all http_port 127.0.0.1:80 http_port 50.56.206.139:80 cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default redirect_rewrites_host_header off read_ahead_gap 128 Kb shutdown_lifetime 5 seconds Any ideas why this is happening? What have I missed?
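
    One thing worth checking first, based on the config above (a sketch): squidclient talks to 127.0.0.1:3128 by default, but this configuration only binds http_port to port 80, so either point the client at port 80 or add a 3128 listener:

        netstat -lntp | grep squid                                     # confirm which ports squid is actually listening on
        squidclient -h 127.0.0.1 -p 80 http://www.apple.com/ > test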

    Read the article
