Search Results

Search found 7533 results on 302 pages for 'knowledge transfer'.


  • How much does it wear an SD card to be frequently removed/reinserted?

    - by jtbandes
    My digital camera (a Sony a55) stores photos on an SD card. When I want to transfer these to my computer (a mid-2010 MacBook Pro), I have two options: use the USB cable to connect the camera to the computer, or use the computer's built-in SD card reader. The camera's SD card slot is the standard click-in, click-out (spring-loaded) mechanism. My laptop has a simple slot into which the card slides with a little more resistance than the camera's slot (the card slides only about halfway in so it can be easily removed). I notice that the card's contacts now have some shiny marks from one or both of these card slots. Does this type of wear threaten to significantly damage the card? Should I avoid switching the card between slots frequently, to extend its lifetime?

  • Windows file sharing connects over WiFi instead of LAN

    - by zacaj
    I have a laptop and a desktop computer, and I need to sync lots of files to the laptop and back whenever I go on a trip. I've got a LAN cable plugged into an extra port on the desktop that I connect to the laptop so I can get gigabit file transfers instead of wireless G. They connect fine: if I do an FTP transfer using the LAN IP addresses, it runs at ~40 MB/s, as it should. However, when I copy files using Explorer and native Windows file sharing, it finds the other computer by name rather than by IP (e.g. \\DESKTOP-PC\ instead of \\192.168.0.100\) and always connects over the wireless IP address instead of the faster LAN address. Both computers are running Windows 7. I have tried editing the priorities of the adapters in Advanced Settings, putting the LAN adapters above the Wi-Fi ones, but this didn't have any effect.
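
    One workaround, if changing the adapter binding order alone doesn't help, is to hard-code the LAN address for the desktop's name on the laptop, so that name-based UNC paths resolve to the gigabit link. A minimal sketch, assuming the name DESKTOP-PC and the address 192.168.0.100 from the question (adjust to your own setup, and remove the entry if it causes trouble when the cable is unplugged):

        :: run from an elevated command prompt on the laptop
        echo 192.168.0.100  DESKTOP-PC >> %SystemRoot%\System32\drivers\etc\hosts
        ipconfig /flushdns
        :: verify that the name now resolves to the LAN address
        ping DESKTOP-PC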

  • A simple Volume Replication Tool for large data set?

    - by Jin
    I'm looking for a solution to the following:
    - Server A (Site A): Win 2008 R2, approx 10 TB (15 TB max) of data, well over 8 million files
    - Server B (Site B): Win 2008 R2
    I want to asynchronously replicate Server A's volume to a volume on Server B for data redundancy - something I can point my users at ("go here for data") when/if Server A goes belly up due to machine problems, disaster, etc. Windows 2008 R2 does have DFS, but Microsoft apparently does not support a dataset this large (or, more accurately, more than 8 million files, according to the docs I could find). I also looked at Veritas Volume Replication, but this seems like too much, as I would also require Veritas Volume Manager. There are numerous backup tools that make a 1-to-1 copy, which would be OK, but since the data will be transferring over the internet, I'd like something that compresses during transfer the way DFS does. Does anyone have any suggestions?
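
    Not an endorsement of any particular product, but one low-cost option that compresses during transfer is rsync, which runs on Windows via cwRsync or DeltaCopy. A rough sketch, assuming an SSH/rsync endpoint has already been set up on Server B and that the paths below are placeholders:

        # -a preserves attributes, -z compresses over the wire, --delete mirrors deletions
        rsync -az --delete /cygdrive/d/data/ backupuser@serverB:/cygdrive/e/data/

    Whether rsync copes well with 8+ million files is something to test; the initial scan can take a long time on a tree that size.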

  • TFTP PUT Failing Across Hosts

    - by Jason
    I have a TFTP server installed on a CentOS host. /etc/xinetd.d/tftp:

        service tftp
        {
            disable     = no
            socket_type = dgram
            protocol    = udp
            wait        = yes
            user        = root
            server      = /usr/sbin/in.tftpd
            server_args = -c -s /var/lib/tftpboot
            per_source  = 11
            cps         = 100 2
            flags       = IPv4
        }

    If I try to PUT a file from a remote host to the host running the TFTP server, I get Transfer Timed Out - however, it does create the file in /var/lib/tftpboot, but the file is empty. If I tftp from the TFTP server to itself (localhost) and PUT a file, it works fine. I have verified that SELinux is disabled and iptables is turned off. I can connect from the remote hosts with no issue - it just seems to be the PUT I have an issue with:

        [root@SVR01 TEST]# tftp 10.100.2.15
        tftp> status
        Connected to 10.100.2.15.
        Mode: netascii Verbose: off Tracing: off Literal: off
        Rexmt-interval: 5 seconds, Max-timeout: 25 seconds
        tftp>
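
    For what it's worth, the stock tftp client can be made more talkative before the put, which may show whether the data packets or only the final acknowledgement are timing out. A small sketch using standard tftp client commands (the file name is just an example):

        tftp 10.100.2.15
        tftp> binary
        tftp> trace
        tftp> verbose
        tftp> put testfile.bin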

  • WordPress blog website: Apache and IIS subdirectory

    - by Philippe
    The issue is this: our company has a website hosted on an IIS server. I have recently been given the task of configuring a WordPress server for an eventual WordPress blog, so that our social media employee could test it and see how it works. This was completed successfully and easily on a new server with a WAMP configuration. The site was published as wordpress.domain.com and works fine. HOWEVER! I have now been asked to ensure that the soon-to-go-online blog will be accessible through the address domain.com/blog. Is there a way to modify the original company website and simply redirect /blog to the Apache WordPress site? If not, is there a way to move wordpress.domain.com onto the IIS server hosting the main website and keep the configuration? Is there a better solution that I haven't thought about (probably)? If so, what would you all suggest?

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    Will be putting a new Windows 2008 Standard Edition server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations master DC in place as a backup DC, since the other member servers, running 2003, have either accounting/SQL or Exchange on them. Eventually all the W2k servers will be decommissioned, but until then, I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?
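
    If it helps, the role transfer itself can be checked and carried out as follows. This is a hedged sketch: the server name is a placeholder, and the exact ntdsutil verbs vary slightly between Windows Server versions (e.g. "transfer domain naming master" on 2003 vs "transfer naming master" on 2008).

        :: see which DCs currently hold the five FSMO roles
        netdom query fsmo

        :: transfer the roles to the new 2008 DC from within ntdsutil
        ntdsutil
        ntdsutil: roles
        fsmo maintenance: connections
        server connections: connect to server NEW2008DC
        server connections: quit
        fsmo maintenance: transfer pdc
        fsmo maintenance: transfer rid master
        fsmo maintenance: transfer infrastructure master
        fsmo maintenance: transfer naming master
        fsmo maintenance: transfer schema master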

  • How do I share a complete XP disk so it can be seen from a Windows 7 system?

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet, etc., so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and a new Windows 7 (64-bit) machine. So that I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have:
    - put both computers into the same workgroup
    - put the Windows 7 machine into "work network" mode so it can see the XP machine in the workgroup
    - shared the XP disk as read-only
    But when I try to access a lot of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine. The XP machine just has its default account with no password set on it.)
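
    If the share itself connects but individual folders are denied, it is usually NTFS permissions on the XP side rather than the share permissions. A hedged example of granting read access on an affected folder from the XP machine (cacls syntax; the folder path is a placeholder, and granting Everyone read is only reasonable for a temporary migration):

        rem /T = recurse, /E = edit the existing ACL instead of replacing it, /G = grant
        cacls "C:\Documents and Settings\SomeUser\My Documents" /T /E /G Everyone:R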

  • Incremental backup services with change only charges?

    - by wowowewah
    I'm looking for online backup services that provide incremental, change-only backups. I want to transfer as little data as possible and would like to find a service that provides full backups every week along with incremental backups every day. Are there any specialist companies that deal with this, or do I just use standard backup ones? Any recommendation appreciated. To expand on this: I'm looking for software/services that work on Unix. I guess Linux is fine as well, as FreeBSD's Linux compatibility layer should run it. Oh, and command line would be ideal, without requiring the use of X Window. Thanks.
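
    One command-line tool that fits this pattern (weekly fulls, daily incrementals, no X required) is duplicity, which runs on Linux and FreeBSD. A rough sketch, with the host and paths as placeholders:

        # daily run: incremental, but force a new full backup once the last full is a week old
        duplicity --full-if-older-than 7D /home/data sftp://backupuser@backuphost//backups/data

        # restore the latest copy somewhere else to verify it
        duplicity restore sftp://backupuser@backuphost//backups/data /home/data.restored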

  • Strange FTP issues - some files are not downloaded

    - by FractalizeR
    I have a machine which cannot fetch some files from remote servers by FTP. The machine is powered by CentOS. I tested FTP on three files:

        12.09.2012 21:21   166 007   ll091212.002
        13.09.2012 11:32    23 040   ll091212.003
        13.09.2012 11:50    61 313   ll091212.004

    Of these, I can always successfully download only one - ll091212.004. The two others download to about 90% (I can see them on disk) and then the FTP transfer hangs without any error messages. I have moved the files and copied them around on the remote server - no luck. Another machine from the same subnet can download all three of them easily. I just don't know what the reason for this is.

  • Linux: How to break a large file into smaller files?

    - by Runcible
    I have a giant file (20 gigs) sitting on my source machine and I need to transfer it to my target machine. For the purposes of this question, let's assume that I do not have network connectivity between the two machines. I need to break this file into a series of smaller files, write the smaller files to DVD(s), then re-assemble everything on the target machine. Both source and destination machines are Linux boxes. Is there a way to accomplish this using tar? I have a feeling that I need to use the --multi-volume parameter. What are my options? I need to be able to specify the size of the volume files, in order to make sure that each one will fit onto a single DVD. Thanks!
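
    tar's --multi-volume mode can do this, but a simpler route that also controls the piece size exactly is split, which is present on any Linux box. A minimal sketch (the sizes and names are examples; 4 GB pieces fit comfortably on single-layer DVDs):

        # on the source machine: cut the file into 4 GB pieces
        split -b 4G bigfile.img bigfile.img.part_

        # on the target machine: reassemble the pieces in order and verify
        cat bigfile.img.part_* > bigfile.img
        md5sum bigfile.img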

  • FreeNAS running on ESXi - sometimes gets very sluggish.

    - by Luma
    Hello everyone, I have an ESXi server (dual quad-core, 8 GB of DDR3 RAM, 6x 1 TB WD Blacks running in RAID 5 on the PERC 6/i controller). I have a 64-bit FreeNAS VM running; on this VM I keep about 200 GB of stuff that my Windows machines access. Every now and then the throughput of this VM just dies - for example, right now it can't even handle streaming a song, and when I tried to transfer a folder the speed dropped to 10-400 KB/s. Might I add at this point that the ESXi box has dual gigabit network cards plugged into a good solid gigabit switch, and the other Linux and Windows VMs are just fine - I have seen speeds over 90 MB/s (frequently). The server still has RAM left over (plenty, actually) and CPU usage is very low (500-1000 MHz). Any ideas what could cause this? Thanks. Luc

  • One Way Sync of a Bucket With Local Directory

    - by user48651
    I have a local directory that I would like to synchronize with an S3 bucket. I have two specific requirements: 1. If a local file is the same as the remote one, do not re-transfer it to the bucket. 2. If files or directories exist in the bucket but do not exist locally, delete them. Basically the bucket should mirror the local copy and not vice versa. I looked into the s3cmd sync command, but unfortunately requirement 2 is not fulfilled: if files exist in the bucket but not in the local copy, they are copied to the local side instead of being deleted.
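
    For the record, newer s3cmd releases have a sync flag aimed at requirement 2 when syncing from local to bucket. A hedged sketch (the bucket name is a placeholder, and --dry-run is worth keeping on the first pass):

        # mirror the local directory up to the bucket, deleting bucket objects that no longer exist locally
        s3cmd sync --delete-removed --dry-run /data/local/ s3://my-bucket/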

  • How to tell if you are connected to Wireless B, G or N?

    - by Raheel Khan
    I am using Windows 7 on all the wired desktops and wireless laptops in my home network. I recently upgraded my Ethernet switch to gigabit and instantly noticed an increase in throughput on the wired devices. I also bought a Wireless-N WAP, but saw a degradation in wireless file transfer speeds. I have been told that a number of factors can affect wireless speeds, including which WAP is used, how many wireless devices are connected, which security mode is used, etc. However, that remains irrelevant to my question. Each of my laptops claims to support Wireless-N, but I cannot seem to figure out how to determine whether the laptops are truly running Wireless-N or are connected to the WAP in some sort of mixed mode. I do not have control of the WAP device, so I cannot tell what mode it is running in. Is there a way to tell which mode is being used, and what the throughput is for each connected device, without having access to the WAP interface?
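
    On Windows 7 the connection's radio type and current link rates can be read from the client side without touching the WAP. A quick sketch:

        :: look at the "Radio type" line (802.11n vs 802.11g) and the
        :: "Receive rate (Mbps)" / "Transmit rate (Mbps)" lines in the output
        netsh wlan show interfaces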

  • Need to set mailx variable to specify the From address

    - by user256817
    Running Oracle Linux 5.8 (which is just re-branded Red Hat EL 5.8), I must change the From address. But we have scripts that use mailx which cannot be re-written to use any extra flags, so I'd like to use internal variables instead, which I see on the linux.die.net man page for mailx is an alternative to the -r flag:

        -r address
            Sets the From address. Overrides any from variable specified in environment or
            startup files. Tilde escapes are disabled. The -r address options are passed to
            the mail transfer agent unless SMTP is used. This option exists for compatibility
            only; it is recommended to set the from variable directly instead.

    (Source: http://linux.die.net/man/1/mailx) How can we use these mailx variables? I tried adding this to /root/.mailrc, no go:

        set [email protected]

    I also added that to /etc/mail.rc, with no gold. So I am turning to you, SuperUsers...

  • USB Adapter for Memory Cards

    - by ktm5124
    I am looking for something like a USB adapter for video cards. It is a cable that on one end hooks into a USB port on a computer and on the other accepts any one of a variety of video cards. Essentially it's an "all-in-one" USB adapter for video cards. I'm told that they are sold all over... does anyone know what it is called or where to find them? I should clarify, by the way, that by "video card" I mean a memory card for a video camera, and that the goal is to read this video data onto a computer, from a variety of video card types, through a USB port. That way if you go to your friend's house and bring your computer, you can transfer the video data on his video camera to your computer, trusting that the adapter will have a slot for his kind of video card.

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream would get up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on this interface does not exceed X? At the same time, other data streams/classes must not be limited to X. The use case is an ISP billing the traffic by calculating the bandwidth average over 5-minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer during interface busy times) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.

  • BITS http download job fails to connect for owner Local SYSTEM account

    - by MikeT
    A service I have written that uses BITS (Background Intelligent Transfer Service) to auto-update itself is having a problem on some machines (Windows 7 so far). I have been investigating and have discovered that some of the jobs my service adds to the BITS queue fail immediately with the error code 0x80072efd (a connection with this server could not be established). There is no problem connecting to the server for the download: it works fine on the same machine using IE (or any other web browser), and other clients can connect and update from the same server. I tried using the BITSADMIN.exe tool to add the jobs manually and they worked OK. I then changed the account my service was running under to the Network Service account, so the BITS jobs would be created with a different owner, and the jobs completed successfully. My problem is that I don't want to run my service under this account, as it won't have the required local permissions, so how do I change the permissions of the Local System user to allow it to download from the HTTP source? I'm not aware of any way of this being restricted for this account, but it obviously is.
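
    One thing that does differ between LOCAL SYSTEM and an interactive user is the proxy configuration BITS picks up for the job owner. If a proxy (or the lack of one) is the cause, bitsadmin can inspect and set proxy usage for the SYSTEM account specifically; a hedged sketch:

        :: show the proxy settings BITS will use for jobs owned by LOCAL SYSTEM
        bitsadmin /util /getieproxy LOCALSYSTEM

        :: example: let it autodetect, or use NO_PROXY for a direct connection
        bitsadmin /util /setieproxy LOCALSYSTEM AUTODETECT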

  • YSlow says certain CSS files are not gzipped

    - by rhand
    YSlow keeps on telling me files like http://www.example.com/wp-content/plugins/q-and-a/css/q-a-plus.css?ver=1.0.6.2 are not gzipped, while the gzip test tool at Feed the Bot mentions I am all good:

        Compressed?               Yes
        Compression type          gzip
        Page size (Bytes)         32,493
        Compressed size (Bytes)   -1
        Saving (Bytes)            32,494
        Compression %             100%

    I added this to my .htaccess:

        # Gzip
        <ifModule mod_gzip.c>
        mod_gzip_on Yes
        mod_gzip_dechunk Yes
        mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
        mod_gzip_item_include handler ^cgi-script$
        mod_gzip_item_include mime ^text/.*
        mod_gzip_item_include mime ^application/x-javascript.*
        mod_gzip_item_exclude mime ^image/.*
        mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
        </ifModule>

        # Deflate
        <ifmodule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
        </ifmodule>

    The header for the file mentioned states:

        CF-Cache-Status: MISS
        CF-RAY: 13945df90a9a0c1d-AMS
        Cache-Control: public, max-age=2592000
        Connection: keep-alive
        Content-Encoding: gzip
        Content-Type: application/javascript
        Date: Thu, 12 Jun 2014 07:34:38 GMT
        Expires: Sat, 12 Jul 2014 07:34:38 GMT
        Last-Modified: Thu, 21 Feb 2013 01:29:18 GMT
        Server: cloudflare-nginx
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Any ideas what I am missing here?

  • Very high Network Out on an EC2 instance

    - by Jatin
    I launched an ubuntu-14.04-64bit instance in Amazon EC2 two days back, started Tomcat 7.0.54 on that instance, and deployed my application WAR files. It has no other software installed other than Tomcat and the defaults. In the past 2 days, it shows 858 GB of data transfer (Network Out) from that instance. I have attached a graph of the Amazon CloudWatch metric "Network Out". My application does not do any data download/upload; it's a Java Spring application and the front end is HTML and JavaScript. My application traffic was very low (fewer than 20 hits) in those 2 days. Is there a way to find out why these data transfers happened, and also to find out what data has been transferred? As you can see in the graph, Network Out was 20 GB per minute. Some more info: Network In was negligible, CPU utilization was very high, and everything else was low.
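
    To see where the traffic is going while it happens, per-connection and per-process monitors are assumed to be installable from the standard Ubuntu repositories; a quick sketch:

        sudo apt-get install -y iftop nethogs
        sudo iftop -i eth0     # live per-connection bandwidth, with remote addresses
        sudo nethogs eth0      # live per-process bandwidth (should point at the Tomcat JVM if it is the source)
        ss -tnp | head -50     # current TCP connections and their owning processes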

  • SPF record doesn't work (not sure which DNS server to tweak)

    - by Ion
    Problem: Google (and perhaps others) marks our emails as SPF neutral. Let me give you some background about the setup: we initially got a dedicated server (Hetzner) with Plesk installed to host a domain/web application, let's say bigjaws.com. Plesk automatically creates a DNS zone for it with some records for the various services it provides out of the box, e.g. webmail.bigjaws.com as a CNAME to bigjaws.com to provide Horde/whatever, etc. Let me point out four relevant records (where XXX.XXX.XXX.158 is our dedicated IP):

        bigjaws.com.       A    XXX.XXX.XXX.158
        mail.bigjaws.com.  A    XXX.XXX.XXX.158
        bigjaws.com        MX   (10) mail.bigjaws.com.
        bigjaws.com.       TXT  v=spf1 +a +mx -all

    The above records are not(?) valid anymore though, because after using this dedicated server for a while, our site got bigger and bigger, so we decided to move our operations over to AWS (EC2, RDS, ELB, etc), but we retained the mail functionality as is, i.e. emails from [email protected] are sent by connecting to our dedicated server, where Plesk takes care of things. This was decided in order not to set up anything from scratch. Of course, for all DNS-related things we now use Route53. In Route53 I have the following records:

        mail.schoox.com.   A    XXX.XXX.XXX.158
        bigjaws.com.       MX   (10) mail.bigjaws.com
        bigjaws.com.       SPF  "v=spf1 +ip4:XXX.XXX.XXX.158 +mx ~all"

    From my understanding of SPF, the SPF check should have passed: I designate that all email sent by bigjaws.com from XXX.XXX.XXX.158 is valid/not spam (I added +mx there, but I'm not sure if it's needed). When a mail server receives an email, doesn't it look up the SPF record of the domain and check it against the IP it got the email from? Checking with spfquery:

        root@box:~# spfquery -ip XXX.XXX.XXX.158 -sender [email protected] -rcpt-to [email protected]
        StartError
        Context: Failed to query MAIL-FROM
        ErrorCode: (2) Could not find a valid SPF record
        Error: No DNS data for 'bigjaws.com'.
        EndError
        none
        neutral
        Please see http://www.openspf.org/Why?id=employee1%40bigjaws.com&ip=XXX.XXX.XXX.158&receiver=spfquery : Reason: default
        spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com
        Received-SPF: neutral (spfquery: XXX.XXX.XXX.158 is neither permitted nor denied by domain of bigjaws.com) client-ip=XXX.XXX.XXX.158; [email protected];

    If I go to the address listed above (openspf.org), it tells me that the message should have been accepted(!):

        spfquery rejected a message that claimed an envelope sender address of [email protected].
        spfquery received a message from static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) that claimed an envelope sender address of [email protected].
        The domain bigjaws.com has authorized static.158.XXX.XXX.XXX.clients.your-server.de (XXX.XXX.XXX.158) to send mail on its behalf, so the message should have been accepted. It is impossible for us to say why it was rejected.
        What should I do? If the problem persists, contact the bigjaws.com postmaster.

    Also, here are some headers from an email sent by one of our [email protected] addresses to a gmail.com address (by the way, bigjaws.de listed in the "Received: from" field was the initial domain hosted on the dedicated server before adding the .com one -- both are still listed as separate subscriptions under Plesk):

        Delivered-To: [email protected]
        Received: by 10.14.177.70 with SMTP id c46csp289656eem; Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        X-Received: by 10.14.102.66 with SMTP id c42mr306186eeg.47.1382515860386; Wed, 23 Oct 2013 01:11:00 -0700 (PDT)
        Return-Path: <[email protected]>
        Received: from bigjaws.de (static.158.XXX.XXX.XXX.clients.your-server.de. [XXX.XXX.XXX.158])
            by mx.google.com with ESMTPS id l4si19438578eew.161.2013.10.23.01.10.59
            for <[email protected]>
            (version=TLSv1 cipher=RC4-SHA bits=128/128);
            Wed, 23 Oct 2013 01:10:59 -0700 (PDT)
        Received-SPF: neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) client-ip=XXX.XXX.XXX.158;
        Authentication-Results: mx.google.com; spf=neutral (google.com: XXX.XXX.XXX.158 is neither permitted nor denied by best guess record for domain of [email protected]) [email protected]
        DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=default; d=bigjaws.com;
            b=WwRAS0WKjp9lO17iMluYPXOHzqRcOueiQT4rPdvy3WFf0QzoXiy6rLfxU/Ra53jL1vlPbwlLNa5gjoJBi7ZwKfUcvs3s02hJI7b3ozl0fEgJtTPKoCfnwl4bLPbtXNFu;
            h=Received:Received:Message-ID:Date:From:User-Agent:MIME-Version:To:Subject:Content-Type:Content-Transfer-Encoding;
        Received: (qmail 22722 invoked from network); 23 Oct 2013 10:10:59 +0200
        Received: from hostname.static.ISP.com (HELO ?192.168.1.60?) (YYY.YYY.ISP.IP)
            by static.158.XXX.XXX.XXX.clients.your-server.de. with ESMTPSA (DHE-RSA-AES256-SHA encrypted, authenticated); 23 Oct 2013 10:10:59 +0200
        Message-ID: <[email protected]>
        Date: Wed, 23 Oct 2013 11:11:00 +0300
        From: BigJaws Employee <[email protected]>
        User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1
        MIME-Version: 1.0
        To: [email protected]
        Subject: test SPF
        Content-Type: text/plain; charset=ISO-8859-1
        Content-Transfer-Encoding: 7bit

        test SPF

    Any ideas why SPF is not working correctly? Also, are there any DNS settings that are no longer needed and create a problem?
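
    One quick outside check, since both spfquery and Google report that no SPF data could be found for the domain: make sure the policy is actually published as a TXT record, because most receivers query TXT rather than the separate SPF record type. A hedged sketch of verifying from any machine with dig (domain kept as in the question):

        # the v=spf1 string should come back from the TXT query
        dig +short TXT bigjaws.com

        # type 99 ("SPF"); many resolvers and receivers ignore this record type
        dig +short SPF bigjaws.com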

  • How do I copy hyperlink only (and not text) to another cell?

    - by OfficeLackey
    I have a spreadsheet where column A displays names. There are a few hundred names, and each has a different hyperlink (which links to that person's web page). I want to transfer those hyperlinks across to a different column, which has different text in it and no hyperlinks. Not every cell in column A has a hyperlink. There are groups of cells merged together, so A2:A7 has one link, A8:A13 the next, A9:10 the next (i.e. the number of cells merged is not uniform). E.g. where A2:A7 reads "Bob" and links to www.bob.com, I want I2:I7, which reads "Smith" and does not link to anything, to link to www.bob.com. I want to do this repeatedly, copying links from A2:A579 into I2:I579. The information was copied from a table within a web page, and that is where the hyperlinks come from.

  • Reg: PuTTY SSH Access

    - by gourav
    Dear all, I have a Linux-based virtual server that I recently purchased. I need to transfer files from my local computer to this virtual server. I tried downloading PuTTY, but there were no EXE files to install. I am using Windows XP at home. If possible, can I have the installer link? Is it compulsory to know Linux in order to use PuTTY? And also, is there any other tool that can be used by users who don't know Linux commands? Please help.
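
    For what it's worth, putty.exe itself is a standalone download (there is no installer to run), and file transfer is normally done with its companion tool pscp, or with WinSCP if a point-and-click interface is preferred. A minimal pscp sketch; the host, user and paths are placeholders:

        pscp C:\local\files\report.zip user@your-server-ip:/home/user/
        pscp -r C:\local\files user@your-server-ip:/home/user/files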

  • Nginx Cache-Control

    - by optixx
    I am serving my static content with nginx:

        location /static {
            alias /opt/static/blog/;
            access_log off;
            etags on;
            etag_hash on;
            etag_hash_method md5;
            expires 1d;
            add_header Pragma "public";
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    The resulting header looks like this:

        Cache-Control: public, must-revalidate, proxy-revalidate
        Cache-Control: max-age=86400
        Connection: close
        Content-Encoding: gzip
        Content-Type: application/x-javascript; charset=utf-8
        Date: Tue, 11 Sep 2012 08:39:05 GMT
        Etag: e2266fb151337fc1996218fafcf3bcee
        Expires: Wed, 12 Sep 2012 08:39:05 GMT
        Last-Modified: Tue, 11 Sep 2012 06:22:41 GMT
        Pragma: public
        Server: nginx/1.2.2
        Transfer-Encoding: chunked
        Vary: Accept-Encoding

    Why is nginx sending two Cache-Control entries? Could this be a problem for the clients?
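
    The duplication most likely comes from combining expires (which emits its own Cache-Control: max-age header) with an explicit add_header Cache-Control. A hedged sketch of collapsing them into a single header, assuming a one-day lifetime is still what is wanted (other directives omitted for brevity):

        location /static {
            alias /opt/static/blog/;
            access_log off;
            add_header Cache-Control "public, max-age=86400, must-revalidate, proxy-revalidate";
        }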

  • High disk time on SQL Server

    - by Patrik
    Hi. We have a dedicated SQL Server 2008 R2 Enterprise Edition machine. The setup is:
    - D: (data files) - stored on local SSDs (not the same disks as the log files), RAID 10
    - E: (log files) - stored on local SSDs (not the same disks as the data files), RAID 1
    - F: (transaction log backup) - stored remotely on a SAN
    Today we moved our log files to new disks (from F: to E:), i.e. from a shared volume (F:, on the SAN) to dedicated local disks (E:). What then happened was that the "disk time", "avg. transfer time" and "avg. disk write queue length" increased on the volume where we have the data files (D:), not on the volume where the log files are located. The data volume and the log volume do not share disks; however, they share the same controller card. "Disk idle time" is low for all volumes. One thought, of course, is that the controller card might be overloaded, but we need more ideas on where the problem might be.

  • Upgrading Active Directory from 2000 to 2008

    - by Doug
    Our config is currently:
    - 1 Windows 2000 domain controller running ISA 2000, DHCP, DNS
    - 1 Windows 2003 domain controller acting as the main file server, probably a cert server as well, DHCP, DNS
    - 1 Windows 2008/Exchange 2010 domain controller acting as the Exchange server, DHCP, DNS
    We are currently getting FRS errors on the file server (journal wrap error), and FRS errors on the other DCs, which can't replicate from it. The Exchange DC holds the Schema, RID, PDC and Infrastructure roles; the file server holds the Domain Naming operations master role. WOW, I didn't set this up, just inherited it. Am I right to assume that fixing the FRS errors is #1, and what do I need to do for that - set "enable journal wrap automatic restore" in the registry? My plan after that:
    - Demote the W2000 domain controller; should that have any implications for ISA? (We have Forefront to be deployed, but that's another day.)
    - Transfer the Domain Naming role to the Exchange server (I know, or think, that having an Exchange server as a DC isn't best practice).
    - We will be getting another W2008 server to replace the current file server, and I thought it could take over all roles once deployed.
    - Demote the W2k3 file server and then raise the domain functional level to 2008.
    Am I missing anything, other than the sense to walk away? Thanks
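
    Fixing the journal-wrap errors first is the usual advice, since promotions and FSMO transfers depend on healthy SYSVOL replication. The registry value mentioned above is the documented FRS setting; a hedged sketch (back up first, run it on the DC reporting the journal wrap, and watch the File Replication Service event log for the recovery events after the service restarts):

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters" /v "Enable Journal Wrap Automatic Restore" /t REG_DWORD /d 1 /f
        net stop ntfrs
        net start ntfrs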
