Search Results

Search found 82005 results on 3281 pages for 'cost based data structure'.

  • Force RAID to read exiled disk?

    - by user198847
    User user197015 asked the following question on the 1st of November: "We have a RAID 6 array (Infortrend EonStor DS S16F) that recently had two disks fail. Immediately prior to replacing these two disks, a third, good disk was accidentally ejected from the array. After reinserting this disk it is marked as "exiled" by the array's firmware, so even after replacing the two failed disks with new ones the array refuses to rebuild the logical volume and remains inaccessible. Since the temporarily-ejected disk is still functional and nothing has been written to the array since it was ejected, it should theoretically be possible to recover all the data on the array. How can we convince the array to use the data from the "exiled" disk? Thanks for any help or advice you can offer." Now I have exactly the same problem. The original post has since been deleted by its author, so I don't know whether he was successful. Is there anybody who can help me? Thank you!


  • Most secure way to access my home Linux server while I am on the road? Specialized solution wanted

    - by Ace Paus
    I think many people may be in my situation. I travel on business with a laptop, and I need secure access to files from the office (which in my case is my home). The short version of my question: how can I make SSH/SFTP really secure when only one person needs to connect to the server from one laptop? In this situation, what special steps would make it almost impossible for anyone else to get online access to the server?

    A lot more details: I use Ubuntu Linux on both my laptop (KDE) and my home/office server. Connectivity is not a problem; I can tether to my phone's connection if needed. I need access to a large number of files (around 300 GB). I don't need all of them at once, but I don't know in advance which files I might need. These files contain confidential client information and personal data such as credit card numbers, so they must be secure. Given this, I don't want to store all these files on Dropbox, Amazon AWS, or similar services. I couldn't justify that cost anyway (Dropbox doesn't even publish prices for plans above 100 GB, and security is a concern). However, I am willing to spend some money on a proper solution. A VPN service, for example, might be part of the solution? Or other commercial services? I've heard about PogoPlug, but I don't know if there is a similar service that might address my security concerns.

    I could copy all my files to my laptop because it has the space, but then I have to sync between my home computer and my laptop, and I have found in the past that I'm not very good about doing this. And if my laptop were lost or stolen, my data would be on it. The laptop drive is an SSD, and encryption solutions for SSD drives are not good. It therefore seems best to keep all my data on my Linux file server (which is safe at home). Is that a reasonable conclusion, or is anything connected to the Internet such a risk that I should just copy the data to the laptop (and maybe replace the SSD with an HDD, which reduces battery life and performance)? I view the risk of losing a laptop as higher; I am not an obvious hacking target online. My home broadband is cable Internet, and it seems very reliable.

    So I want to know the best (reasonable) way to securely access my data from my laptop while on the road. I only need to access it from this one computer, although I may connect via my phone's 3G/4G, via WiFi, or through a client's broadband, so I won't know in advance which IP address I'll have. I am leaning toward a solution based on SSH and SFTP (or similar), which would provide about all the functionality I anticipate needing. I would like to use SFTP and Dolphin to browse and download files, and SSH and the terminal for anything else. My Linux file server is set up with OpenSSH, and I think I have SSH relatively secured; I'm using DenyHosts too. But I want to go several steps further: I want to get the chances that anyone else can get into my server as close to zero as possible while still allowing me to get access from the road. I'm not a sysadmin or programmer or real "superuser", and I have to spend most of my time doing other things. I've heard about "port knocking", but I have never used it and I don't know how to implement it (although I'm willing to learn). I have already read a number of articles with titles such as: Top 20 OpenSSH Server Best Security Practices; 20 Linux Server Hardening Security Tips; Debian Linux Stop SSH User Hacking / Cracking Attacks with DenyHosts Software; and more. I have not implemented every single thing I've read about; I probably can't do that.

    But maybe there is something even better I can do in my situation, because I only need access from a single laptop. I'm just one user, and my server does not need to be accessible to the general public. Given all these facts, I'm hoping I can get suggestions here that are within my capability to implement and that leverage these facts to create much better security than the general-purpose suggestions in the articles above.
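
    A minimal sshd_config sketch along the lines described above (key-only login locked to a single account), assuming stock OpenSSH; the port, user name, and limit values are examples, not prescriptions:

        # /etc/ssh/sshd_config - hardening sketch; directive names are standard
        # OpenSSH, the values are illustrative
        Port 2222                    # non-standard port cuts scanner noise (obscurity, not security)
        PermitRootLogin no
        PasswordAuthentication no    # key-only: password guessing becomes moot
        PubkeyAuthentication yes
        AllowUsers acepaus           # only this one account may log in at all
        MaxAuthTries 3
        LoginGraceTime 20
        X11Forwarding no

    Restart sshd after editing and keep an existing session open while testing, so a typo cannot lock you out. Port knocking or a VPN can be layered on top, but key-only authentication plus AllowUsers already closes the door to everyone except the holder of the private key.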


  • Additional Hard Drives for Servers

    - by Abs
    Hello all, I am developing a web app where I will have to save lots of files, and I am trying to work out the directory structure and where things should be saved to. The dedicated server I want to buy shows this for storage: 2x 1TB SATA in RAID1. The space is enough, but I am guessing this will not appear as one hard drive? Will I have to save files on one hard drive and, when that fills up, use the other? For the Fedora distro, what is the path for the second drive? Is there a primary drive where I will be able to set up my webroot? I am sorry, this is all new to me. It would be great to get links and advice on how things actually work when it comes to additional hard drives. Thanks all
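
    A quick sketch of how a RAID1 pair usually presents itself, assuming Linux software RAID (md); with a hardware controller the mirror is invisible to the OS, which simply sees a single disk:

        # The mirrored pair shows up as one block device, e.g. /dev/md0:
        cat /proc/mdstat    # md0 : active raid1 sdb1[1] sda1[0]
        df -h /             # the filesystem sits on /dev/md0, not on sda or sdb
        # Both 1 TB disks hold the same data, so usable space is 1 TB, not 2 TB.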


  • What is the proper way to set up the Apache document root in terms of privileges?

    - by racl101
    I have just installed Ubuntu 9.10 Server Edition on my machine and I wish to run my own personal local server with other users on the same LAN. First, I was wondering what directory structure is best for the web root. Should I just use /var/www/ and start putting web documents there, or should I create a folder elsewhere (maybe in the home directory)? Second, in the /var/www/ directory only the root user can create documents, yet I wish to have other users be able to create files in the document root and upload them via FTP. Should I change the permissions on the /var/www/ folder? Or, again, should I create the document root elsewhere with different permissions? What is the safest way of doing this?
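
    One common pattern is a shared group plus the setgid bit on the document root; a sketch, where the group and user names are examples:

        # Let members of a 'webdev' group write into /var/www
        sudo groupadd webdev
        sudo usermod -aG webdev alice       # repeat per user
        sudo chown -R root:webdev /var/www
        sudo chmod -R 2775 /var/www         # leading 2 = setgid: new files inherit the group

    An FTP daemon pointed at /var/www then works for any member of the group, and the files stay readable by Apache.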


  • Can I use Veritas Storage Manager to provide HA storage using server-local storage?

    - by Paul
    I need to provide a high-availability FTP/HTTP file repository. Uploads will happen to one server, but the uploaded file must be immediately visible on all the other servers. I can handle the failover of the servers themselves using load balancers, but in the event of one server failing, the other servers must see the same contents of the repository. Normally I'd use a SAN for this, but in this case the data centre standards do not allow SAN/external storage; all storage will be local to the servers. Can I use Veritas Storage Manager (or any other product) to manage mirroring the contents between servers in this way? Or does that require a SAN? I couldn't tell either way from a quick look at the data sheets.


  • What are the recognized ways to increase the size of the RAID array online/offline?

    - by user149509
    Is it possible, in theory, to increase the size of a RAID array of any level just by adding new drive(s)? The variant "back up the whole data set, delete the old array, add/replace disks, create a new array, restore the data" is obvious, so what are the other options? Does it depend only on the RAID level, only on the implementation of the RAID controller, or on both? Does adding new disks to a striped array necessarily lead to a rebuild of the array, with the stripes redistributed across the new drives? What steps should be taken to increase the size of a RAID array in online and offline scenarios? RAID-5 and RAID-10 are especially interesting. I would like to see the big picture.
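
    For Linux software RAID specifically, online growth is supported; a sketch growing a 3-disk RAID-5 to 4 disks (device names are examples):

        mdadm --add /dev/md0 /dev/sdd1            # new disk joins as a spare
        mdadm --grow /dev/md0 --raid-devices=4    # online reshape redistributes the stripes
        cat /proc/mdstat                          # watch the reshape progress
        resize2fs /dev/md0                        # then grow the filesystem into the new space

    Hardware controllers vary: some offer the same "online capacity expansion", others force the backup/recreate/restore route, so the answer depends on both the level and the implementation.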


  • Moving a cPanel backup of a Magento site to a VPS

    - by user2564024
    My site was on shared hosting, and I took a full cPanel backup; its top level contains: addons, bandwidth, counters, cp, cron, digestshadow, dnszones, domainkeys, fp, has_sslstorage, homedir, homedir_paths, httpfiles, locale, logaholic, logs, meta, mm, mma, mms, mysql, mysql.sql, mysql-timestamps, nobodyfiles, pds, proftpdpasswd, psql, quota, resellerconfig, resellerfeatures, resellerpackages, sds, sds2, shadow, shell, ssl, sslcerts, ssldomain, sslkeys, suspended, suspendinfo, userconfig, userdata, va, vad, version, vf. Now I have subscribed to a VPS. I copied the files inside homedir/public_html to /var/www/html on my new host, but when I view the site in a browser I see the following error: "There has been an error processing your request. Exception printing is disabled by default for security reasons. Error log record number: 259343920016". I have just created a database named magenhto inside MySQL. Previously I had cPanel and used a one-click installer, so I am not aware of how to load that backup data into MySQL on this new system, or whether any more changes are needed.
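
    A sketch of the likely next steps, assuming Magento 1.x (which keeps its DB credentials in app/etc/local.xml); the file and database names are taken from the post and may need adjusting:

        # load the cPanel SQL dump into the new database
        mysql -u root -p magenhto < mysql.sql
        # point Magento at it: edit <host>, <username>, <password> and <dbname>
        # in /var/www/html/app/etc/local.xml, then clear the cache
        rm -rf /var/www/html/var/cache/*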


  • Migrating users and IIS settings from a workgroup win2k3 machine to a new win2k8r2

    - by amber
    I am retiring my old Windows Server 2003 Standard 32-bit machine in favour of a new machine running Windows Server 2008 R2 Standard. The two sticking points are migrating user accounts (and there are a lot of them) and IIS settings/websites (again, there are a lot). The new machine has not been provisioned yet; I'm at the point where I'm about to install the OS on it. The old machine is configured with a mirrored set for its OS and data partitions. I have broken the mirror set, replicated all of the data to an external drive, and then rebuilt the mirror set. In short, I have an image of the old machine to play with while safely leaving it up and running. Thanks!
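
    For the IIS side, Microsoft's Web Deploy (msdeploy) is the documented path for IIS 6 to IIS 7.x moves; a sketch, with the site number and paths as examples (local user accounts are not carried across by this and need separate handling):

        REM on the old IIS 6 machine: package site 1 from the metabase
        msdeploy -verb:sync -source:metakey=lm/w3svc/1 -dest:package=c:\site1.zip
        REM on the new 2008 R2 machine: preview, then apply
        msdeploy -verb:sync -source:package=c:\site1.zip -dest:metakey=lm/w3svc/1 -whatif
        msdeploy -verb:sync -source:package=c:\site1.zip -dest:metakey=lm/w3svc/1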


  • Viewing zip archive contents using 'less' on OS X.

    - by multihead
    I couldn't help but notice that the 'less' program on all of the recent Linux distributions I've used (Ubuntu and Gentoo in this case) allows me to view the contents of ZIP and TAR archives, while the install of 'less' I have on OS X (and Solaris) instead produces a "foo.zip may be a binary file. See it anyway?" prompt and proceeds to spit out the raw binary data rather than a nice file listing. Google has not produced much in the way of helpful results; it's tricky to search for 'less' in this context. I downloaded and built the latest version from greenwoodsoftware.com, but even it refuses to show the contents of these archives, and I didn't come across any related configure/build options either. Any ideas? Thanks!
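
    The Linux behaviour does not come from less itself but from a preprocessor hooked in through the LESSOPEN variable (usually a lesspipe script), which is why a rebuilt less did not help. A sketch for OS X, with the path being wherever your copy of lesspipe lands:

        # install or copy a lesspipe script, then tell less to filter through it
        export LESSOPEN="|/usr/local/bin/lesspipe.sh %s"
        less foo.zip    # now lists the archive contents instead of raw binary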


  • Need help automating a task in Linux

    - by Niphoet
    I'm still kind of new to Linux, but here's what I'm trying to do: I need to copy all subdirectories and files from one directory to another every 5 minutes or so, with the old data automatically being overwritten by the new data. I'd also like this to run at startup. Is there any way this can be done? If so, what program would I need to schedule the automation, and what command line would I need (cp ???)? Thanks in advance!
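
    A minimal sketch with cron and rsync (the paths are examples); add the lines via crontab -e:

        # mirror /src into /dst every 5 minutes, removing anything stale in /dst
        */5 * * * * rsync -a --delete /src/ /dst/
        # most cron implementations also honour @reboot for the run-at-startup part
        @reboot rsync -a --delete /src/ /dst/

    rsync only copies what has changed, so repeated runs are far cheaper than cp -r.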


  • One huge drive (network share) from many computers, with per-folder redundancy priority

    - by Exception Duck
    Not sure if this exists, but I have a huge amount of data to store (files of about 5-50 MB each), and as it is now I have 5 computers, each with RAID 5 providing about 6 TB of storage apiece. This is causing some problems with the software I am using (something home-made), so I'm wondering: is there some software I can install on all those computers that will present them as one huge drive? These computers run Windows, from XP 64-bit to Windows Server 2008. I would also like to set a per-folder priority on the redundancy: for some folders I can live without online backup (I have a backup of that data in a safe), but for some I need a fully redundant online backup if one hard drive fails. Something open source preferred, as I try to use that as much as I can, but all ideas are welcome.


  • My Quicken 401(K) account has changed to Checking. How do I fix this?

    - by user36492
    This is actually the second time this has happened to me, but I don't remember what I did last time (nor can I find the original forum post that helped then). I'm using Quicken Mac 2007. My 401(k) account, previously properly set up, has changed, seemingly irrevocably, to a Checking account. When I click "Edit" and try to change the account type, the 401(k) option is grayed out. I've got years of data stored in this account, so I am really hoping there's a way to salvage this data file!


  • Seeking recommendations on resolving sporadic network connectivity latency for Notes client

    - by Russell Maher
    I have Domino servers in geographically dispersed data centers in the U.S. Sometimes when I open an NSF on one of those servers the connection times out, yet when I open the NSF again it connects immediately. This has been going on for years, and during that time I have upgraded and changed my own Internet connection and moved servers to different data centers. Of course, I have direct connection documents using fixed IP addresses. When I do a Notes client trace, nothing is out of the ordinary. My business partner experiences the same thing from an entirely different city and a different ISP, but to the same servers. We never have any trouble connecting to the HTTP server, just over port 1352. Does anyone have any recommendations on a process to determine what is causing this problem?
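
    One way to start narrowing it down is to probe port 1352 on a schedule from both sites and timestamp the failures, to see whether the stalls correlate with network drops rather than with the Notes client itself; a sketch with netcat (the host name is an example):

        while true; do
          nc -z -w 5 domino1.example.com 1352 || echo "$(date) NRPC probe failed"
          sleep 60
        done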


  • Out of nowhere, ssh_exchange_identification: Connection closed by remote host

    - by disusered
    I am running Ubuntu 10.10 on a remote box. I ssh to it every day without issues, but today, out of the blue, I get the following error: ssh_exchange_identification: Connection closed by remote host. If I connect with -vv, I get the following:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/bla/.ssh/config
        debug1: Applying options for ubuntu-server
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to ubuntu-server.com [123.123.123.123] port 22.
        debug1: Connection established.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/bla/.ssh/id_rsa type -1
        debug1: identity file /Users/bla/.ssh/id_rsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    If I remove the key, I get the exact same output (sans the "debug2: key_type_..." lines). I've managed to log in physically and checked my hosts.allow and hosts.deny, but they have no entries. I tried removing and reinstalling OpenSSH, checked authorized_keys and ~/.ssh permissions, and tried connecting from other computers, only to get the same error. I'm at my wits' end; any help would be greatly appreciated.
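
    That error occurs before authentication even starts, so the usual suspects are server-side: tcpwrappers, a saturated MaxStartups limit, or missing/corrupt host keys. A diagnostic sketch to run from the server console (the log path is Ubuntu's):

        sudo sshd -t                            # config syntax check; silent when OK
        ls -l /etc/ssh/ssh_host_*key            # zero-byte or missing host keys cause this too
        grep -i sshd /var/log/auth.log | tail   # look for refused connections or drops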


  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg:

        [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936
        [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981)
        [34542.836306] MMC: killing requests for dead queue
        [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280
        [34542.837035] FAT: unable to read inode block for updating (i_pos 148486)
        [34542.837062] MMC: killing requests for dead queue
        [34542.837066] end_request: I/O error, dev mmcblk0, sector 1
        [34542.837074] FAT: bread failed in fat_clusters_flush
        [34542.837085] MMC: killing requests for dead queue

    These were all files I copied from a smaller SD card; I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine, and the card failed again. Reading data from the old card went fine. My systems are older, and the SD card is new (16 GB, Class 4). Could it be that my computers are too old? Is there a definitive test to verify whether my SD card is bad?
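
    A definitive, destructive test is badblocks in write mode: it overwrites the whole card with patterns and reads them back. The device node below is an example; confirm yours in dmesg first:

        sudo umount /dev/mmcblk0p1          # make sure nothing on the card is mounted
        sudo badblocks -wsv /dev/mmcblk0
        # -w write-mode test (erases everything), -s show progress, -v verbose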


  • How to restrict deletion of a folder on NTFS share, but still allow modify access within folder

    - by thinkdreams
    I am setting up a set of scan folders for a scanning copier device, and would like to know the best way to protect the folders (one per department) from being moved or deleted, while still allowing users to modify (i.e. create/add/delete) the scanned files within them. The structure is: Share Name, then Departmental Folder, then User files. The writing of the files is initially taken care of by a service account which has full control. We'd just like to ensure the users cannot accidentally delete the folder containing all the files (which has already happened). This is for a Windows 2003 server with NTFS permissions. Suggestions would be most appreciated.
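
    The usual NTFS pattern is to grant Modify on the contents while denying Delete on the departmental folder object itself; a deny entry without inheritance flags applies to that folder only. A sketch with icacls (shipped from Server 2003 SP2 onwards; the path and group are examples):

        icacls "D:\Scans\Accounting" /grant "DOMAIN\Accounting:(OI)(CI)M"
        icacls "D:\Scans\Accounting" /deny  "DOMAIN\Accounting:(DE)"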


  • CPU/RAM usage log over a period of time to file on CentOS

    - by joel_gil
    Hi everyone. I'm looking for an app or a line of code that could let me observe a process, save the info in a number of variables, and then put the gathered info into a file. I've been trying variations of top, but no luck. I am running several CentOS virtual servers; each VM has 2 GB RAM and 2 processors. Maybe a script that runs over a specified amount of time, writing lines with the info to a text file, so at the end I can have a sort of table with the data. The thing is, I'm going to stress-test the server and I would like to have the data to compute some statistics. Any comments and suggestions are most welcome.
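
    A minimal sketch of such a script: one sample line per interval for a given process, appended to a text file (the PID, interval, and path are examples):

        PID=1234
        while true; do
          echo "$(date '+%F %T') $(ps -p $PID -o %cpu= -o %mem= -o rss=)" >> /tmp/usage.log
          sleep 5
        done

    The sysstat package's sar does the same system-wide (e.g. sar -u -r 5) if per-process detail is not required.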


  • Turn off write barriers on ext4 while the FS is mounted

    - by user462982
    I am doing some IO-intensive DB imports that have been running for several days now, and IO performance has dropped tremendously over time. The DB data files (log files) are on an ext4-formatted logical volume which is mounted with default options (I did not specify anything special in fstab). Since I just learned that ext4 enables write barriers by default: Q: Is there some way to disable write barriers online (i.e. while the file system is in use)? I cannot interrupt the import and don't want to restart it. I am aware that write barriers might not be the only thing impeding performance, and that it is a bad idea to have write barriers disabled on journalling file systems if data safety is important (e.g. on a production system).
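
    ext4 accepts barrier changes on a remount, which does not interrupt processes using the filesystem; a sketch, with the mount point as an example:

        mount -o remount,barrier=0 /data    # disable write barriers in place
        grep ' /data ' /proc/mounts         # confirm the option took effect
        mount -o remount,barrier=1 /data    # re-enable once the import is done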


  • Ultimate way to use Picasa in a home network

    - by luisfarzati
    I've tried a lot of approaches but still haven't found an effective solution. I have gigs of photos on a network drive (an Iomega Home Media Network Drive plugged into my wifi router). I'd like to do two things: 1. Run a Picasa import process over all the photos on the drive, making Picasa physically organize the files into a year/month folder structure. Ideally, the import target directory should be the same network drive; otherwise I would have to move all the imported files from my local computer back to the drive myself. 2. Share the Picasa database over the network by uploading it to the network drive, and have me and other members of the family point our Picasa installations at the network database, so we can all see the photos as well as make changes (tag faces, create logical albums, etc.). Is there ANY possibility to accomplish this? Or should I be looking for another photo management app, and in that case, do you know of one? Thank you!


  • MySql transfer / update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone will here. Facts: a 30 GB MySQL database on a remote server (about 20,000,000 rows); the data is updated once weekly on the local network (MySQL); I need to transfer/replace the remote database with the locally updated one; the connection is about 2 Mb (real Mb, not Mbps) up/down. The point is that I can't have any downtime on the remote MySQL server. Until now I have tried: Navicat data sync (OK, but takes about 3 days to finish); dbForge (OK, but needs 5 days to finish); a mysqldump transferred to the remote server and executed there (about a day, but a lot of downtime); rsyncing the database folder /mysql/lib/MY_DATABASE (4 hours, but after that I always need to run a repair on the remote server, which takes about 2 hours, plus a lot of downtime); a mysqldump piped from the command line directly to the remote server (still not satisfactory, many problems); MySQL replication (slow). I could list more things I have tried... Anyway, what is the best way to refresh the remote MySQL database on a weekly basis with zero downtime and no huge server load? If you have any idea, please share.
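
    One approach that fits the zero-downtime constraint: dump into a shadow database on the remote server, then swap it in with an atomic RENAME TABLE. A sketch (names are examples; --single-transaction gives a consistent dump for InnoDB tables):

        mysqldump --single-transaction --quick localdb \
          | gzip | ssh user@remote 'gunzip | mysql db_shadow'
        # then on the remote server, per table, the swap is atomic:
        #   RENAME TABLE db.t1 TO db_old.t1, db_shadow.t1 TO db.t1;

    The cost is temporary double storage on the remote side, but readers never see a half-loaded database.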


  • Reading a file from an alternate location

    - by Highstaker
    I have a certain file (data.abc) located in, say, my home folder. I make a copy of it in another location (for example, /mnt/ramtemp/). Whenever the file in my home folder is accessed by any process, I want it to be read not from the home folder but from /mnt/ramtemp/. As you might have guessed from the path of the latter, that is where I mount the ramfs. So, basically, I want processes to access not the file on my HDD (which is slower) but its copy on ramfs (which is far faster). At the same time, I want the file data.abc to remain in my home folder under that name; I don't want to rename or delete it. Is there any way I can tell the system to redirect processes to read the file from the alternative location whenever they try to read it from my home folder?
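
    On Linux this is exactly what a bind mount does, and it works on single files, not just directories; a sketch (the user name is an example):

        sudo mount --bind /mnt/ramtemp/data.abc /home/user/data.abc
        # every open of ~/data.abc now reads the ramfs copy; undo with:
        sudo umount /home/user/data.abc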


  • Encrypt two drives with TrueCrypt with a pre-boot password

    - by Deshroom
    I'm using a laptop and a PC, each with a single HDD with full disk encryption. I know how TrueCrypt works on a single drive because I use it every day. My second laptop has two drives, and my question is how to encrypt both the first 128 GB SSD and the second 1 TB HDD in the same way. I have multiple applications installed on the second drive, so I want it accessible from boot; e.g. Steam is installed on the second drive and starts with Windows. How do I do it? Can I encrypt two drives in TrueCrypt and unlock them via a password before boot? My main reason is that I want to RMA the laptop without removing disks or data; my data needs to be encrypted. Thank you.


  • Will you install software on the same partition as Windows system?

    - by Tim
    I was wondering: do you always install software on the same partition as the Windows 7 system? What kinds of software do you install on the same partition as the Windows system, and what kinds on another partition? If you install software on another partition, do you give it a dedicated partition, or do you install it on the same partition as your (personal) data? How do you plan the sizes of the partition(s) in either case? What should be considered when making plans about the above questions? The software I am installing includes: Matlab, Mathematica, IDEs, compilers or interpreters for C++, C, Java, R, Python, Perl, and Lisp, LaTeX, and a database, mainly for programming and typesetting kinds of studies and projects.


  • TCP Sessions and IP Changes

    - by Kyle Brandt
    What happens to a TCP session when the IP of the client changes? I did a simple test: having netcat listen on a port and connecting to that port from a client machine. I then changed the IP of the client while that nc session was open and sent some data; no data was received by the server after changing the IP. I know they are different layers, but does TCP use IP addresses as part of how it distinguishes sessions? Does my example not work because of how the application handles it, or is it failing because of something happening at the TCP/IP/Ethernet layers? Does this depend on the OS implementation? (I am most interested in Linux at the moment.)
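
    Background that explains the result: TCP identifies a connection by the 4-tuple (source IP, source port, destination IP, destination port), so once the client's address changes, its segments no longer match the established session, and the server's replies still go to the old address. A sketch to see the tuple during the netcat test (the port and address are examples; traditional netcat wants nc -l -p 4242):

        nc -l 4242              # listener, on the server
        nc 192.0.2.10 4242      # client, on the other machine
        ss -tn | grep 4242      # on either end: shows the exact 4-tuple in use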


  • Is there a way to create a copy-on-write copy of a directory?

    - by BCS
    I'm thinking of a situation where I would have something that creates a copy of a directory, tweaks a few files, and then does some processing on the result. This would be done fairly often, maybe a few dozen times a day. (The exact use case is testing patch submissions: dupe the code, patch it, build/test/report/etc.) What I'm looking for could be done by creating a new directory structure and populating it with hard links from the original. However, this only works if all the tools you use delete and recreate files rather than editing them in place. Is there a way to have the file system do copy-on-write for a file? Note: I'm aware that many FSs use COW at the block level (all updates are done via writes to new blocks), but this is not what I want.
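
    Two sketches: the hard-link approach described above, and true file-level COW via reflinks on filesystems that support them (btrfs and ocfs2, at least); paths are examples:

        cp -al src/ dst/                   # hard-link farm: cheap, but breaks on in-place edits
        cp -a --reflink=always src/ dst/   # real per-file copy-on-write; fails loudly
                                           # if the filesystem cannot do reflinks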

