Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Real RAM latency

    - by user32569
    Hi, very quick question. When I look up RAM timings, I find two different explanations of what CAS latency is. The first states it is the time from when the read command is issued by the CPU until the data is placed on the data bus. The second says it is the time measured from when a column in the memory layout is activated. So where is the truth? I mean, when I want to know the total RAM latency: in case 1 it would be just CAS times the clock period; in case 2 it would be CAS plus other things like RAS-to-CAS, all times the clock period. Thanks.
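    As a hedged worked example (the module and timings below are assumed, not taken from the question): CAS alone only covers the column access, so the two readings differ by whether a row must first be opened.

        DDR2-800: bus clock 400 MHz  ->  one clock = 2.5 ns
        CAS-only figure:    CL x clock                = 5 x 2.5 ns         = 12.5 ns
        Full random access: (tRP + tRCD + CL) x clock = (5+5+5) x 2.5 ns   = 37.5 ns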


  • What does the [0/0] indicator mean when entering copy mode in tmux?

    - by bps
    When entering copy mode in tmux, an indicator in the upper right corner shows "[0/0]". I can't find any documentation in the man page about what these numbers mean, and it's difficult to search for since Google throws away the brackets and slash. The indicator is generated by window_copy_write_line() in window-copy.c:

        if (py == 0) {
                size = xsnprintf(hdr, sizeof hdr,
                    "[%u/%u]", data->oy, screen_hsize(data->backing));
                if (size > screen_size_x(s))
                        size = screen_size_x(s);
                screen_write_cursormove(ctx, screen_size_x(s) - size, 0);
                screen_write_puts(ctx, &gc, "%s", hdr);
        }

    but the variable names aren't very instructive to someone who isn't familiar with the code. Any hints as to what these numbers mean?
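    A hedged annotation of the same snippet (my reading of the tmux source, not official documentation):

        /* data->oy appears to be how far the view is currently scrolled up
         * into the history, and screen_hsize(data->backing) the number of
         * lines accumulated in the history buffer - so "[0/0]" would mean
         * "not scrolled, and no history accumulated yet". */
        size = xsnprintf(hdr, sizeof hdr,
            "[%u/%u]", data->oy, screen_hsize(data->backing));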


  • XP SP3 Pro - Delayed write failed on $Mft - can I find out which particular file caused the problem?

    - by user35020
    Hi - I sometimes get this error when resuming from hibernation: "Delayed Write Failed: Windows was unable to save all the data for the file G:\$Mft. The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere." I know this happens because the hard drive (G:, a USB external drive) that was plugged in when I hibernated was not readable at the right moment - or sometimes I simply forgot to plug it in when resuming from hibernation. My question is: is there any way to see which particular file, folder, or folder status failed to be written? The hard drive functions correctly before and after - no problem. Is there a detailed log someplace, or a utility? I've searched and searched but found nothing. Thanks for any help!
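    A hedged pointer rather than a definitive answer: Windows records delayed-write failures in the System event log, so the Event Viewer is the first place to look (the event ID below is from memory and worth double-checking):

        eventvwr.msc
        REM In the System log, filter for Event ID 50 ("{Delayed Write Failed}");
        REM the event text usually names the file involved.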


  • TCP Sessions and IP Changes

    - by Kyle Brandt
    What happens to a TCP session when the IP of a client changes? I did a simple test: I had netcat listen on a port and connected to that port from a client machine. I then changed the IP of the client while that nc session was open and sent some data; no data was received by the server after the IP change. I know they are different layers, but does TCP use IP addresses as part of how it distinguishes sessions? Does my example fail because of how the application handles it, or because of something happening at the TCP/IP/Ethernet layers? Does this depend on the OS implementation? (I am most interested in Linux at the moment.)
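    For what it's worth, TCP identifies a connection by the 4-tuple (source IP, source port, destination IP, destination port), so segments arriving from a new source IP match no established connection on the server. A sketch of the experiment (addresses, interface, and port are placeholders):

        # server
        nc -l -p 5555            # or "nc -l 5555", depending on the netcat flavor
        # client
        nc 198.51.100.1 5555
        # change the client address mid-session (as root); from here on, outgoing
        # segments carry a source IP that matches no established 4-tuple, so the
        # server's TCP stack ignores them or answers with RST
        ip addr del 192.0.2.10/24 dev eth0
        ip addr add 192.0.2.11/24 dev eth0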


  • Best way to backup Active Directory with a single domain controller

    - by John Hall
    I have a domain with about 15 users and a single Windows Server 2008 domain controller. Some recent issues with my RAID controller have made me reconsider how I go about securing the AD data. Currently I run a system state backup nightly. However, it seems that it is impossible (or at least difficult and unsupported) to restore that to any machine other than the one from which it was taken. Adding a second DC to the domain seems expensive and overkill for such a small network. Is there no other way to back up the AD data?
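    For reference, a minimal sketch of the built-in approach on Server 2008 (the target volume letter is a placeholder, and the Windows Server Backup feature must be installed); the catch the question already identifies remains: a system state restore is really only supported onto the same machine, which is why a second DC - even a modest virtual machine - is the usual recommendation:

        wbadmin start systemstatebackup -backupTarget:E: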


  • Need help automating a task in Linux

    - by Niphoet
    I'm still kind of new to Linux, but here's what I'm trying to do. I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten by the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what is the command line I would need (cp ???)? Thanks in advance!
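    One common way to do this, sketched with placeholder paths: cron handles both the scheduling and the "run at startup" part (the cron daemon starts with the system), and rsync is a better fit than cp for repeated mirroring:

        # edit your schedule with: crontab -e, then add:
        # every 5 minutes, mirror source into dest; -a preserves permissions and
        # times, --delete removes destination files that vanished from the source
        */5 * * * * rsync -a --delete /path/to/source/ /path/to/dest/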


  • How to sort time column by value instead of alphabetically

    - by Turch
    I'm creating a pivot table by connecting to an SSAS tabular model (Data - From Other Sources - From Analysis Services). The model has a "time" column that I want to sort by. The default (database) sort order is earliest to latest. When I click the triangle next to 'Row Labels' and select "Sort A to Z", I get alphabetically sorted times. How can I get the times to sort by time? Changing the number format from "General" to "Time" does nothing. The times aren't stored as text either - the data type of the column in the SSAS model is Auto (Date).


  • Memcached clustered alternative

    - by Johan Kooijman
    I'm looking to replace memcached. We have a LOT of traffic to our central memcached node, which I'd like to split; there's only so much network trunking I can do. My general idea is to install a memcached-type daemon on every webserver and have the daemons replicate sets/deletes/updates among themselves, so that each webserver connects to a socket on localhost. All data should be available on all nodes. The alternatives:

    - repcached (max 2 masters)
    - redis (single master)
    - couchdb/mongodb/handlersocket - persistent data on disk; I'd like to remove the disk part to gain more performance

    Any hints?
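    A hedged sketch of how the redis option from the list above is often bent into this shape: run a read-only replica on every webserver so reads stay on localhost, and accept that writes still go to the single master (the master address and the redis 2.x directive are illustrative):

        # /etc/redis/redis.conf on each webserver
        slaveof 10.0.0.1 6379
        # application config: read from 127.0.0.1:6379, write to 10.0.0.1:6379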


  • Seeking recommendations on resolving sporadic network connectivity latency for Notes client

    - by Russell Maher
    I have Domino servers in geographically dispersed data centers in the U.S. Sometimes when I open an NSF on one of those servers the connection times out; when I then open the NSF again, it connects immediately. This has been going on for years, and during that time I have upgraded and changed my own internet connection and moved servers to different data centers. Of course I have direct connection documents using fixed IP addresses. When I do a Notes client trace, nothing is out of the ordinary. My business partner experiences the same thing from an entirely different city and a different ISP, but to the same servers. We never have any trouble connecting to the HTTP server, just over port 1352. Does anyone have any recommendations on a process to determine what is causing this problem?
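    One hedged first step for a process like this: capture the port 1352 traffic during a slow open and compare it with a fast one, to see whether the stall is in the TCP handshake (network path) or after it (server side). The interface and address below are placeholders:

        tcpdump -n -i eth0 host 203.0.113.10 and port 1352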


  • Cross join query problem

    - by user66121
    I have the following table structure:

        HUB_DETAILS (master): Branch_ID, Branch_Name
        VTRCheckList (master): CLid, CLName
        VTRCheckListDetails (detail): CLid, Branch_ID, VTRValue, vtrRespDate

    When I run the following query, it does come back with all the checklist names along with all the branch names, but it shows the value in every branch, when in fact only one branch has data in the given date range. It should show 0 if there is no data for a checklist in the respective branch.

        SELECT VTRCheckList.CLName, Hub_Details.BranchName,
               SUM(CAST(VTRCheckListDetails.VtrValue AS int)) AS 'Total'
        FROM VTRCheckListDetails
        INNER JOIN VTRCheckList ON VTRCheckListDetails.CLid = VTRCheckList.CLid
        CROSS JOIN Hub_Details
        WHERE CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
          AND CONVERT(date, VTRCheckListDetails.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY VTRCheckList.CLName, Hub_Details.BranchName
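    A hedged diagnosis and rewrite (table and column names taken from the question): the CROSS JOIN pairs every detail row with every branch, and nothing ties a detail row to its own branch, so one branch's totals get repeated everywhere. Joining the details to both dimensions with a LEFT JOIN, and keeping the date filter in the ON clause, should preserve branches with no data as 0:

        SELECT cl.CLName, hub.BranchName,
               ISNULL(SUM(CAST(d.VTRValue AS int)), 0) AS Total
        FROM VTRCheckList cl
        CROSS JOIN Hub_Details hub            -- the full checklist x branch grid
        LEFT JOIN VTRCheckListDetails d
               ON d.CLid = cl.CLid
              AND d.Branch_ID = hub.Branch_ID
              AND CONVERT(date, d.vtrRespDate, 105) >= CONVERT(date, '01-01-2011', 105)
              AND CONVERT(date, d.vtrRespDate, 105) <= CONVERT(date, '30-01-2011', 105)
        GROUP BY cl.CLName, hub.BranchName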


  • phpmyadmin or other mysql gui - edit whole table from one screen

    - by lorem
    Let's say I have a table in a database. I'd like to be able to view all the data from this table and edit every field from a single screen. I tried to do this in phpMyAdmin, but I'm not sure how: I can see all the data, but I have to click edit on a single row, and then I'm sent to the next screen; I can't really edit everything at once. How do I do this in phpMyAdmin or another MySQL GUI? I'm on Linux, and so is my server. I'd like each field in the table to be an editable text field - maybe that shows better what I'm searching for.


  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host

    - by disusered
    I am running Ubuntu 10.10 on a remote box. I ssh to it every day without issues, but today, out of the blue, I get the following error:

        ssh_exchange_identification: Connection closed by remote host

    If I connect with -vv, I get the following:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/bla/.ssh/config
        debug1: Applying options for ubuntu-server
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ubuntu-server.com [123.123.123.123] port 22.
        debug1: Connection established.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/bla/.ssh/id_rsa type -1
        debug1: identity file /Users/bla/.ssh/id_rsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    If I remove the key, I get the exact same output (sans the "debug2: key_type_..." lines). I've managed to log in physically and checked my hosts.allow and hosts.deny, but they have no entries. I tried removing and reinstalling OpenSSH, checked authorized_keys and ~/.ssh permissions, and tried connecting from other computers, only to get the same error. I'm at my wits' end; any help would be greatly appreciated.
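    A hedged next step, since the client-side trace stops right where the server should send its version banner: run a second sshd in debug mode on a spare port from the physical console and connect to it; the server-side output usually states why connections are being dropped (tcp wrappers, MaxStartups pressure, a broken host key, fork failures):

        # on the server console (port number arbitrary)
        sudo /usr/sbin/sshd -d -p 2222
        # from the client
        ssh -p 2222 user@ubuntu-server.com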


  • MySql transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone will help... Facts:

    - 30 GB MySQL database on a remote server (about 20,000,000 rows)
    - data is updated once weekly on the local network (MySQL)
    - I need to transfer/replace the remote database with the locally updated one
    - connection is about 2mb (real mb, not mbps) up/down

    The point is that I can't have downtime on the remote MySQL server. Until now I have tried:

    - Navicat data sync - OK, but takes about 3 days to finish
    - dbForge - OK, but needs 5 days to finish
    - mysqldump transferred to the remote server and executed there - about a day, but a lot of downtime
    - rsync of the database folder /mysql/lib/MY_DATABASE - 4 hours, but after that I always have to run a repair on the remote server, which takes about 2 hours, plus a lot of downtime
    - mysqldump piped from the command line directly to the server - still not satisfied, many problems
    - MySQL replication - slow

    I could list more things that I tried... Anyway, what is the best way to refresh the remote MySQL weekly and at the same time have zero downtime and no huge server load? If you have any idea, please share.
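    A hedged sketch of one pattern that fits the zero-downtime constraint: load the weekly data into a staging schema while the live one keeps serving, then swap with RENAME TABLE, which is atomic in MySQL (schema and table names are placeholders, and the staging/old schemas must already exist):

        # compress over the slow link and load into a staging schema
        mysqldump --single-transaction mydb | gzip | \
          ssh remote 'gunzip | mysql mydb_staging'
        # on the remote server, swap each table atomically
        mysql -e "RENAME TABLE mydb.t TO mydb_old.t, mydb_staging.t TO mydb.t;"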


  • Clarification of the difference between PCI memory addressing and I/O addressing?

    - by KevinM
    Could someone please clarify the difference between memory and I/O addresses on the PCI/PCIe bus? I understand that I/O addresses are 32-bit, limited to the range 0 to 4 GB, and do not map onto system memory (RAM), and that memory addresses are either 32-bit or 64-bit. I get the impression that memory addressing must map onto available RAM - is this true? That if a PCI device wishes to transfer data to a memory address, that address must exist in actual system RAM (and is allocated during PCI configuration) and not virtual memory? So if a PCI device only needs to transfer a small amount of data at a time, where there is no advantage to putting it into RAM or using DMA, then I/O addressing is fine (e.g. a parallel port implemented on a PCI card). And why do I keep reading that PCI/PCIe I/O addressing is being deprecated in favour of memory addressing? Thanks!
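    For what it's worth, a memory BAR maps the device's own registers or buffers into the address space - it does not have to be backed by system RAM at all; DMA into RAM is a separate mechanism. On Linux the two spaces can be inspected directly, which makes the distinction concrete:

        # port (I/O) space allocations
        cat /proc/ioports
        # memory-mapped (MMIO) allocations; note the device BAR regions listed
        # here alongside the actual "System RAM" ranges
        cat /proc/iomem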


  • How to change memory for a DomU at runtime

    - by saffron
    I have a Xen server with xen-4.1.3, linux-image-3.2.0-3-amd64, Debian Squeeze, and 16 GB of RAM. Domain-0 has 1 GB of RAM; the rest of the memory belongs to the hypervisor. I want to start a guest domain with a minimal amount of memory and increase it at runtime later. When I start a guest domain with 256 MB of RAM and run xm mem-set domu 4Gb, I get only ~3 GB in the domU, and free in the guest says:

        root@test:~# free
                     total       used       free     shared    buffers     cached
        Mem:       2830620      72868    2757752          0       2432      43504
        -/+ buffers/cache:      26932    2803688
        Swap:      1048572          0    1048572

    And the guest domain's dmesg says:

        [    0.000000] Memory: 175912k/2883584k available (3527k kernel code, 448k absent, 2707224k reserved, 3210k data, 612k init)

    When I start a guest domain with 2 GB of RAM, I can run xm mem-set domu 7Gb and get ~7 GB of RAM in the guest domain:

        root@test:~# free
                     total       used       free     shared    buffers     cached
        Mem:       6828228      74944    6753284          0       1328      12568
        -/+ buffers/cache:      61048    6767180
        Swap:      1048572          0    1048572

    And the guest domain's dmesg:

        [    0.000000] Memory: 1674960k/16651264k available (3527k kernel code, 448k absent, 14975856k reserved, 3210k data, 612k init)

    How can I start a guest domain with a minimal amount of RAM (256 MB) and later increase it, up to 15 GB?
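    A hedged explanation of the pattern in those numbers: a domU can only balloon up to the maximum memory it was booted with, so starting at 256 MB caps what mem-set can add later. The usual approach is to boot with a small committed allocation but a large ceiling (xm/xend config sketch, values illustrative):

        memory = 256     # MB actually allocated at boot
        maxmem = 15360   # MB ceiling that "xm mem-set" may balloon up to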


  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation. I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home). The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server?

    A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client info and personal info such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox or Amazon AWS or similar. I couldn't justify that cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know if there is a similar service that might address my security concerns. I could copy all my files to my laptop, because it has the space, but then I have to sync between my home computer and my laptop, and I found in the past that I'm not very good about doing this. And if my laptop were lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. Therefore, it seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risks of losing a laptop as higher. I am not an obvious hacking target online. My home broadband is cable Internet, and it seems very reliable.

    So I want to know the best (reasonable) way to securely access my data (from my laptop) while on the road. I only need to access it from this one computer, although I may connect from my phone's 3G/4G, via WiFi, some client's broadband, etc., so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar). SSH/SFTP would provide about all the functionality I anticipate needing. I would like to use SFTP and Dolphin to browse and download files; I'll use SSH and the terminal for anything else. My Linux file server is set up with OpenSSH. I think I have SSH relatively secured, and I'm using DenyHosts too. But I want to go several steps further. I want to get the chances that anyone can get into my server as close to zero as possible, while still allowing me to get access from the road. I'm not a sysadmin or programmer or real "superuser"; I have to spend most of my time doing other things. I've heard about "port knocking", but I have never used it and I don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as:

    - Top 20 OpenSSH Server Best Security Practices
    - 20 Linux Server Hardening Security Tips
    - Debian Linux Stop SSH User Hacking / Cracking Attacks with DenyHosts Software
    - more...

    I have not implemented every single thing I've read about; I probably can't do that. But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping I can get some suggestions here that are within my capability to implement and that leverage these facts to create much better security than the general-purpose suggestions in the articles above.
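    A hedged sketch of the highest-leverage subset for a single-user server (the user name and port are placeholders): disable password authentication entirely so brute forcing has nothing to attack, and allow-list the one account. Everything else - port knocking, DenyHosts, fail2ban - then mostly reduces log noise:

        # /etc/ssh/sshd_config
        PermitRootLogin no
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        AllowUsers ace
        Port 2222        # optional; obscurity, not security

    With passwords off, access requires the private key, so protecting the laptop's key with a strong passphrase becomes the main remaining task.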


  • Windows Hosted Network

    - by Nandakumar V
    I have created a hosted network on my Windows 7 system. The netsh wlan show hostednetwork command gives this output:

        Hosted network settings
        -----------------------
        Mode                   : Allowed
        SSID name              : "rambo"
        Max number of clients  : 100
        Authentication         : WPA2-Personal
        Cipher                 : CCMP

        Hosted network status
        ---------------------
        Status                 : Started
        BSSID                  : xx:xx:xx:xx:xx:xx
        Radio type             : 802.11n
        Channel                : 11
        Number of clients      : 1
            xx:xx:xx:xx:xx:xx        Authenticated

    But I have forgotten the password for this connection, and after some googling I found the command netsh wlan refresh hostednetwork YourNewNetworkPassword. On executing this command I get the error:

        C:\Users\user>netsh wlan refresh hostednetwork rambo123
        Invalid value "rambo123" for command option "data".
        Usage: refresh hostednetwork [data=]key

    I have no idea what is wrong with this command.
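    A hedged reading of that usage line: the only accepted value for the data= option is the literal word "key" (which regenerates the key), not the new password itself. To recover or set a specific key, these two commands are the usual route (the SSID and key values are examples):

        REM show the currently configured key
        netsh wlan show hostednetwork setting=security
        REM or set a new key explicitly
        netsh wlan set hostednetwork mode=allow ssid=rambo key=rambo123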


  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:

        [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
        [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
        [34542.836306] MMC: killing requests for dead queue
        [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
        [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
        [34542.837062] MMC: killing requests for dead queue
        [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
        [34542.837074] FAT: bread failed in fat_clusters_flush
        [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card; I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine, and the card failed again. Reading data from the old card went fine. My systems are older, and the SD card is new (16 GB, Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
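    Two hedged ways to test the card itself (device and mount point are placeholders; the first destroys all data on the card, which must be unmounted first):

        # DESTRUCTIVE surface test: writes patterns across the whole card
        badblocks -w -s -v /dev/mmcblk0
        # alternative that also catches counterfeit-capacity cards
        f3write /mnt/sdcard && f3read /mnt/sdcard

    If the old reader is a suspect, running the same test through a different (newer) reader separates card faults from host faults.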


  • Turn off write barriers on ext4 while the FS is mounted

    - by user462982
    I am doing some IO-intensive DB imports that have been running for several days now, and the IO performance has dropped tremendously over time. The DB data files (log files) are on an ext4-formatted logical volume which is mounted with default options (I did not specify anything special in fstab). Since I just learned that ext4 enables write barriers by default: is there some way to disable write barriers online (i.e., while the file system is in use)? I cannot interrupt the import and don't want to restart it. I am aware that write barriers might not be the only thing impeding performance, and that it is a bad idea to have write barriers disabled on journalling file systems if data safety is important (e.g., on a production system).
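    If remounting counts as "online" here, a hedged sketch (the mount point is a placeholder): ext4 accepts the barrier option on a remount, so no unmount is required:

        # switch barriers off on the live filesystem
        mount -o remount,barrier=0 /srv/db
        # ...and back on once the import finishes
        mount -o remount,barrier=1 /srv/db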


  • Reading a file from an alternate location

    - by Highstaker
    I have a certain file (data.abc) located in, say, my home folder. I make a copy of it in another location (for example, "/mnt/ramtemp/"). Whenever the file in my home folder is accessed by any process, I want it to be read not from the home folder but from "/mnt/ramtemp/". As you might have guessed from the path of the latter, that is where I mount the ramfs. So, basically, I want a process to access not the file on my HDD (which is slower) but its copy on ramfs (which is way faster). At the same time, I want the file data.abc to remain in my home folder under that name; I don't want to rename or delete it. Is there any way I could guide the system to redirect processes to read the file from the alternative location whenever they try to read it from the home folder?
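    A bind mount does exactly this on Linux: it shadows the original path with the fast copy while the file underneath stays in place, untouched. A minimal sketch using the paths from the question:

        cp ~/data.abc /mnt/ramtemp/data.abc
        # single-file bind mounts are allowed; opens of ~/data.abc now hit ramfs
        sudo mount --bind /mnt/ramtemp/data.abc ~/data.abc
        # to undo the redirection:
        sudo umount ~/data.abc

    One caveat: writes through the bind-mounted path land in ramfs only, so anything changed there must be copied back to survive a reboot.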


  • What are the recognized ways to increase the size of the RAID array online/offline?

    - by user149509
    Is it possible, in theory, to increase the size of a RAID array of any level just by adding new drive(s)? The variant "back up all the data - delete the old array - add/replace disks - create a new array - restore the data" is obvious, so what are the other options? Does it depend on the RAID level only, on the implementation of the RAID controller only, or on both? Does adding new disks to a striped array necessarily lead to a rebuild of the array, with the stripes redistributed across the new drives? What steps need to be done to increase the size of a RAID array in online and offline scenarios? RAID-5 and RAID-10 are especially interesting. I would like to see the big picture.
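    Whether this works online depends on both the level and the implementation, but as one concrete, hedged example, Linux md can reshape several levels in place (device names are placeholders; older mdadm versions may need --backup-file, and RAID-10 reshape support depends on the kernel/mdadm version):

        # grow a RAID-5 from 3 to 4 member devices while it stays online
        mdadm --add /dev/md0 /dev/sdd1
        mdadm --grow /dev/md0 --raid-devices=4
        # once the reshape finishes, enlarge the filesystem on top
        resize2fs /dev/md0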


  • Can I install fresh Linux across partitions (LUKS & LVM) and preserve/use the existing home user?

    - by xtian
    With an existing LUKS-encrypted, LVM-partitioned hard disk that dual-boots Windows and Linux (Fedora 15), is it necessary to "start over" with the LUKS setup when upgrading the system? I recall a note saying that dividing the Linux installation over different partitions would help preserve the home data in a future update (I can't find it now). Before I try it: is this possible, and is it an intended use case for partitioning a Linux installation?

        # lsblk -fa
        NAME                                                FSTYPE       LABEL  MOUNTPOINT
        sda [80G]
        +-sda1 [system W95 FAT 32]                          vfat
        +-sda2                                              ext4                /boot
        +-sda3 [52.4G]                                      crypto_LUKS
          +-luks-de25ac97-6a32-4b79-a6a0-296a39376b3b (dm-0)  LVM2_member
            +-cryptVG-root (dm-1) [21.5G]                   ext4                /
            +-cryptVG-swap (dm-2) [5.4MB]                   swap                [SWAP]
            +-cryptVG-data (dm-3) [25.6G]                   ext4                /home
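    Assuming the layout above, a hedged answer sketch: because /home already lives on its own logical volume (cryptVG-data), an installer can unlock the existing container, reuse the volume group, reformat only the root LV, and mount the data LV at /home without formatting it:

        # from the installer's rescue shell, unlock and activate what's there
        cryptsetup luksOpen /dev/sda3 luks-de25ac97-6a32-4b79-a6a0-296a39376b3b
        vgchange -ay cryptVG
        # in the partitioner: format cryptVG-root as /, assign cryptVG-data to
        # /home WITHOUT formatting it, and recreate the user with the same UID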


  • How to explain that DRM cannot work?

    - by jerryjvl
    I am looking for the shortest comprehensive way to explain, to people who are trying to use DRM as a technology to prevent users from using their data in some fashion deemed undesirable, why their solution cannot work by definition. Ideally I'd like something that:

    - covers why it is technically impossible to let people access local data, but only in such-and-such a way;
    - imparts an understanding of why this is, to head off "But what if..." rebuttals;
    - is intuitive enough and short enough that even a politician (j/k) could grasp it.

    When faced with this situation I try to be clear and concise, but I usually end up failing on at least one of these points. I'd really like to have a 'stock' answer that I can use in the future.


  • Hard disk failure. Can I recover my "move"d folders?

    - by Doug
    I am in the process of moving all my files from an old laptop to a new one. I just moved 11 GB of data from my old laptop to a hard drive (external), and upon moving it out to the new laptop's drive, the external drive is giving a CRC error (Data Error, Cyclic Redundancy Check). Now I am looking for a solution to recover the files that were moved off my old laptop (not the external drive); I understand that they are just marked as deleted, their space freed for potential overwriting. I was getting ready to test out GetDataBack, but it says to install it on a healthy Windows system and attach the drive needing recovery as an external drive. However, I don't want to turn off my computer without first getting the OK, since it is in a "moved" state. Please help! What can I do to recover the moved files? I haven't touched the computer since the move. What can I use to recover them?


  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums: users create and update content, and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but if in the future I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
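    For reference, a hedged sketch of the mechanics of trying this in MySQL (both forms are standard):

        -- per connection
        SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        # or server-wide, in my.cnf
        [mysqld]
        transaction-isolation = READ-UNCOMMITTED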

