Search Results

Search found 82005 results on 3281 pages for 'cost based data structure'.


  • Text template or tool for documentation of computer configurations

    - by mjustin
    I regularly write and update technical documentation which is used to set up new virtual machines, or to serve as a reference for system dependencies in networks with around 20-50 (server-side) computers. At the moment I use OpenOffice Writer with text tables, and create one document per intranet domain. To improve this documentation, I would like to collect some examples to identify areas where my documents can be improved, regarding general structure and content, so that they are easy to read and use not only for me but also for technical staff, helpdesk, etc. Are there simple text templates (for example for OpenOffice Writer) or tools (maybe database-driven) for structured documentation of a computer configuration? Such a template / tool should provide required and optional configuration sections, like 'operating system', 'installed services', 'mapped network drives', 'scheduled tasks', 'remote servers', 'logon user account', 'firewall settings', 'hard disk size' ... These documents are less about low-level hardware and more about infrastructure / integration information (no BIOS settings, MAC addresses).
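    A minimal plain-text template along these lines might look like the sketch below; the section names are only illustrative, drawn from the list above rather than from any particular tool.

        Host:                           <hostname>      (required)
        Role / purpose:                                 (required)
        Operating system:                               (required)
        Installed services:                             (required)
        Logon / service accounts:                       (required)
        Remote servers / dependencies:                  (required)
        Mapped network drives:                          (optional)
        Scheduled tasks:                                (optional)
        Firewall settings:                              (optional)
        Disk layout / sizes:                            (optional)
        Change history (date, author, change):          (optional)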

    Read the article

  • Setting Up Local Repository with TortoiseSVN in Windows

    - by Teno
    I'm trying to set up a local repository so that all commits are stored at a local destination, not on a remote server. I followed this tutorial. What I did:
    1. Created a folder named "SVN_Repo" under C:\Documents and Settings\[user-name]\My Documents\
    2. Right-clicked on the folder and chose TortoiseSVN -> Create repository here
    3. Clicked OK in the pop-up dialog asking whether to create a directory structure
    4. Created a folder named Repos for the local destination, under E:\
    5. Right-clicked on the SVN_Repo folder and chose SVN Checkout...
    6. Typed file:///E:\repos in the "URL of repository" field and clicked the OK button
    What I got:
      Checkout from file:///E:/repos, revision HEAD, Fully recursive, Externals included
      Unable to connect to a repository at URL 'file:///E:/repos'
      Unable to open an ra_local session to URL
      Unable to open repository 'file:///E:/repos'
    I must be doing something wrong. Could somebody point it out? Thanks.
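    For reference, a file:// URL has to use forward slashes and has to point at the directory where the repository was actually created; a hedged sketch using the paths from the question:

        file:///C:/Documents%20and%20Settings/[user-name]/My%20Documents/SVN_Repo
            (the repository created in step 2; note the forward slashes)
        file:///E:/Repos
            (only valid if a repository is created there first, e.g. with
             "svnadmin create E:\Repos"; otherwise E:\Repos is just an empty
             folder and ra_local cannot open it)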

    Read the article

  • Debian is equal to Ubuntu

    - by rkmax
    The title of this question is confusing and does not explain my point well. I've always used Ubuntu Server, starting with version 10.04, and never had a problem. Now I have 4 machines with Ubuntu 12.04.1 LTS installed on them, and I find that under any kind of high load they throw errors and crash constantly; the most common message is "CPU#X stuck for Ns!". So I wonder whether administering Debian is the same as administering Ubuntu with regard to services, packages and folder structure. For example, I would like to know whether services are installed in the same manner, using invoke-rc.d, and which of the two handles additional security better, so that I am not working blindly. I've been looking for a comparison chart between Debian 6.0.6 and Ubuntu 12.04 but have not found anything yet, and I'd also like to know the most common "hiccups" when installing the system.

    Read the article

  • Are fewer domains better than more domains in Active Directory?

    - by johnny
    A colleague of mine wants to add a domain to our forest. He said it would be good for security. I believe him but I have no idea why it is any better than with just one domain. I read this on Wikipedia but it has no source: "Microsoft recommends as few domains as possible in Active Directory and a reliance on OUs to produce structure and improve the implementation of policies and administration." I have no idea if it's right or not. I was hoping for comments. Thank you.

    Read the article

  • GUI session from Mac to Linux, over WAN

    - by kellogs
    The closest thing I could find here was this. I am on Mac OS X 10.5.6 with an X server installed; this is the machine I am trying to get the GUI session data onto. There is an Ubuntu 11.10 Linux machine on which I have installed an X server and GDM; this is the machine the GUI session data should come from. Currently I have got to the point where Linux listens on TCP port 6000 for its clients. 1 - How do I swap port 6000 for port 6767? 2 - How do I connect to 6767 from my Mac? Thanks
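    As background for the port question: an X server listens on TCP port 6000 + display number, so having it answer on 6767 means running it as display :767; over a WAN the X traffic is more commonly tunnelled through SSH instead of exposing the port directly. A rough sketch, with a host name and user that are only placeholders:

        # display :767 on the Linux box would listen on 6000 + 767 = 6767
        # (how the display number is passed depends on how X/GDM is started)

        # the usual WAN-safe alternative: SSH X11 forwarding from the Mac
        ssh -X user@linux-host
        # programs started in that shell then display on the Mac's X server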

    Read the article

  • Setting up scripts in Amazon EC2 Cloud

    - by racket99
    Hello, I am currently running a few Perl and Python scripts on a Windows PC and would like to port them over to the Amazon EC2 servers running 64-bit Linux. The scripts are basic web scrapers that go to a variety of websites, get data and then save it daily as CSV files. I would like to install these in the cloud and have them run automatically, without my intervention. Also, given that I don't want to lose all the data if the instance crashes, I should also upload the CSV files to Amazon S3. Any idea how I can do this? I am not terribly versed in Linux, nor do I know Perl/Python well. What is the best way for me to tackle this?
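    One common pattern is to run the scrapers from cron and push the resulting CSV files to S3 with a command-line client. The sketch below assumes s3cmd is installed and configured; the script path, bucket name and schedule are purely hypothetical:

        #!/bin/sh
        # /home/ec2-user/run_scrapers.sh -- hypothetical wrapper script
        cd /home/ec2-user/scrapers || exit 1
        python scraper.py                                  # writes the daily CSV files
        s3cmd put *.csv s3://my-backup-bucket/scrapes/     # copy the results to S3

    and a crontab entry (added with crontab -e) to run it unattended every day at 02:00:

        0 2 * * * /home/ec2-user/run_scrapers.sh >> /home/ec2-user/scrapers.log 2>&1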

    Read the article

  • HTACCESS Rewrite problem

    - by TiuTalk
    I have this folder structure:
      /var/www/mysite/abc/
      /var/www/mysite/def/
      /var/www/mysite/fgh/
      /var/www/mysite/ijk/
      /var/www/mysite/portal/
      /var/www/mysite/wyz/
    Today my server redirects all requests for www.mydomain.com to www.mydomain.com/portal, and that's OK. But I want to change this behaviour so that www.mydomain.com serves the /var/www/mysite/portal/ folder in the background, while www.mydomain.com/abc/ still serves the /var/www/mysite/abc/ folder as before. This is what I've tried, without success, in my .htaccess:
      IndexIgnore *
      Options +FollowSymlinks
      <IfModule mod_rewrite.c>
      RewriteEngine On
      #This doesn't work!!!
      #RewriteCond %{REQUEST_URI} !^abc
      #Neither this :\
      #RewriteCond %{REQUEST_URI} !^/abc
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^$ /portal/index.php [L]
      RewriteRule . /portal/index.php [L]
      </IfModule>
    All requests keep going to www.mydomain.com/portal/ when I use this .htaccess (with or without the commented lines).
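    For what it's worth, RewriteCond directives only apply to the single RewriteRule that immediately follows them, so in the snippet above the final catch-all rule runs with no conditions at all, which would explain why everything ends up in /portal/. A hedged sketch of how the conditions are usually attached to the catch-all rule:

        RewriteEngine On
        # leave real files and existing directories (such as /abc/) alone
        RewriteCond %{REQUEST_URI} !^/portal/
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ /portal/index.php [L]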

    Read the article

  • Dell PowerEdge 6850 Degraded HDD

    - by Matt
    Good morning. We have a Dell PowerEdge 6850 with a degraded drive in the RAID array. I have never had to recover from such an issue, so any help or advice would be welcome. It hasn't affected the server at the operating-system level, but it has slowed down performance. I have a replacement drive in hand, but as this is our main database server I want to proceed with extreme caution. My options as I see them are:
    1. Hot-swap the degraded drive with the new one, after which the data automatically re-syncs and we are back to normal -- presumably this depends on the current RAID configuration?
    2. Reading various comments online, I may need to re-configure the RAID array and re-build the broken drive? This screams disaster to me, the main worry being that I wipe the other data.
    Option 1 would of course make my day. Thanks in advance

    Read the article

  • Optimal Disk Setup for OLTP SQL Server

    - by Chris
    We have a high-transaction (lots of reads and writes) database server (running SQL Server 2005) that is currently set up with a RAID 1 OS partition (C:) and a RAID 5 data/log/tempdb partition (D:). The C: array has 2 drives and the D: array has 4 drives. The server hosts around 300 databases ranging from 10 MB to 2 GB in size. I have been reading up on best practices for partitioning the disks, but would like some opinions on our setup since we are so limited in the number of disks. It seems like RAID 10 is popular, but I don't think we could use it with only 6 disks in total to work with. Thanks.
    Update: I went with 3 RAID 1 arrays (2 disks each):
    Partition 1: OS, TempDB, Backups
    Partition 2: Logs
    Partition 3: Data

    Read the article

  • What is a good partitioning design/scheme for a multi-boot *nix system?

    - by static
    I'm planning to install Debian on my server. I would like to design the partitioning scheme in such a way that I could install one or more other *nix distributions alongside it. Reading many articles, I think this scheme could be a good starting point for multi-boot:
      /grub
      /swap
      LVM VG1 (for OS1) -> /boot (LV1), / (LV2), /tmp (LV3), /var, /var/log, /home, ...
      LVM VG2 (for OS2) -> /boot, /, /tmp, /var, /var/log, /home, ...
      ... (other distros)
      LVM VG0 (for data) -> /data (LV1)
    But I'm a little confused now: should the labels for these partitions be unique or not, and how should the mount points be handled (e.g. /home from OS1 mounted at /home, and likewise /home from OS2)?
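    A minimal sketch of how volume groups and logical volumes of that shape could be created; the device names and sizes below are placeholders, not a recommendation:

        # physical volumes and volume groups (device names are placeholders)
        pvcreate /dev/sda2 /dev/sda3 /dev/sda4
        vgcreate vg1 /dev/sda2        # OS1
        vgcreate vg2 /dev/sda3        # OS2
        vgcreate vg0 /dev/sda4        # shared data

        # logical volumes for OS1 (sizes are placeholders)
        lvcreate -L 512M -n boot vg1
        lvcreate -L 20G  -n root vg1
        lvcreate -L 2G   -n tmp  vg1
        lvcreate -L 10G  -n var  vg1
        lvcreate -L 20G  -n home vg1

        # shared data volume
        lvcreate -L 100G -n data vg0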

    Read the article

  • Ubuntu fails to start

    - by miccaman
    I have a laptop with Ubuntu 9.10 which fails to start, and I want to copy the data from it to an external hard disk. I can log in to the recovery-mode command line, but from there I cannot mount the external hard drive (and in recovery mode I cannot write to the laptop's hard drive either). If I boot from a portable USB stick with Linux Mint, I can mount the external hard drive and copy most of the data from the laptop; however, there is a directory under /home/user/Documents that I have no rights to access, and I get a permission-denied error. Are there any other options?
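    Since the live session has root available, one hedged option is to copy the protected directory with root privileges so that ownership and permissions are preserved; the mount points below are placeholders for wherever the laptop disk and the external drive are actually mounted:

        # from the Mint live session (paths are placeholders)
        sudo rsync -a /media/laptop/home/user/Documents/ /media/external/backup/Documents/
        # or equivalently:
        sudo cp -a /media/laptop/home/user/Documents /media/external/backup/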

    Read the article

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/), they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job -- high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4 GB RAM MacBook and an 8-core Ubuntu box with 64 GB RAM, I saw dramatically WORSE read performance on the Linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. The Linux box's default settings were as follows:
      vm.swappiness = 60
      vm.dirty_background_ratio = 10
      vm.dirty_ratio = 20
      vm.dirty_expire_centisecs = 3000
      vm.dirty_writeback_centisecs = 500
    I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MySQL, etc.), experimented, and adjusted as below:
      vm.swappiness = 10
      vm.dirty_background_ratio = 5
      vm.dirty_ratio = 5
      vm.dirty_writeback_centisecs = 250
      vm.dirty_expire_centisecs = 500
    I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then I REBUILT the collection from an available data source -- and suddenly I can read at 1 ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated... thanks! -j
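    For reference, the adjusted values above would normally be made persistent by putting them in /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reloading; a minimal sketch using the values the question settled on, where the file name is only an example:

        # /etc/sysctl.d/60-mongodb.conf
        vm.swappiness = 10
        vm.dirty_background_ratio = 5
        vm.dirty_ratio = 5
        vm.dirty_writeback_centisecs = 250
        vm.dirty_expire_centisecs = 500

        # apply without rebooting
        sudo sysctl -p /etc/sysctl.d/60-mongodb.conf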

    Read the article

  • What is the easiest way to do a direct file transfer of an extremely large file over the Internet?

    - by Kenneth Cochran
    I would like to transfer a 20+ GB file to a friend. I would like the transfer to:
    1. Be fast
    2. Ensure data integrity
    3. Not require opening ports in either end's firewall
    4. Be free
    5. Not broadcast the file's existence to everyone on the Internet
    I've looked at several technologies and nothing seems to fit:
      Gnutella, BitTorrent, et al. satisfy 1, 2 and 4
      JetBytes... 1, 3, 4 and 5
      Yahoo Messenger, AIM, etc. ... 3, 4 and 5
      FTP, SFTP... 1?, 4 and 5
      rsync... 1, 2, 4 and 5
    For a file this size, speed and data integrity are the most important. No one wants a 20 GB file to fail an MD5 check after spending two days downloading it. Is there anything that meets all these requirements?

    Read the article

  • Requiring multiple group memberships in order to access a folder

    - by David
    How would I go about creating a file or folder that requires a user to be a member of two or more different groups in order to read/write to it? For example, say I run an auto repair shop and have a folder called "Repair History", and I only want people to access it if they are members of BOTH the "Mechanics" AND the "Cashiers" group. This is an AND requirement instead of an OR requirement, which seems to be the norm. I know we could create a separate group specifically for accessing the folder, but this is more of an academic question, since it pertains to a different security structure that we are creating. I'm not sure whether MS security handles it, but I'm wondering how it would be done either way.

    Read the article

  • Converting NTFS to ZFS (or other)

    - by NumberFour
    Are there any benefits to converting HDDs that are running NTFS on a Linux machine to ZFS? Is there a way to do such a conversion in Linux without losing the data? And what about the stability of ZFS on Linux -- does FUSE really work well in this case? People say that the only way to get real, full ZFS support is to install Solaris. I understand that the best choice for Linux would be ext4, but I really haven't found a way to convert from NTFS to ext4 without sacrificing all the data. On the other hand, I have doubts whether changing from NTFS to ZFS while using Linux is really wise. Thanks for any tips.

    Read the article

  • Need an alerting system if my cloning script fails

    - by rahum
    I've configured a nightly rsync to mirror one server to a standby offsite backup server. The total datastore on the primary is 1.5 TB. In the course of getting this working I ran into numerous instabilities with the environment, which I seem to have sorted out; but even though it's now working, I am still nervous. This is intended to be a disaster-scenario standby server, and if disaster strikes and the standby does not have all the proper data synchronized, I'm out of a job. Thus, I want to script a system that will confirm, after each nightly sync, that the destination data matches the source. I realize that rsync does this, but if rsync doesn't complete fully (which was happening during the setup troubleshooting), I need to know. Any suggestions? I'm best with Ruby, if that is relevant to the solution.
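    A minimal sketch of the kind of wrapper being described: run the nightly rsync, check its exit status, then do an itemized dry run and alert if anything failed or still differs. It is written in plain shell (the same logic translates directly to Ruby), and the paths, host name and mail address are placeholders:

        #!/bin/sh
        # nightly_mirror.sh -- hypothetical wrapper around the existing rsync job
        SRC=/data/
        DEST=backup@standby:/data/
        LOG=/var/log/nightly_mirror.log

        rsync -a --delete "$SRC" "$DEST" > "$LOG" 2>&1
        STATUS=$?

        # second pass: a dry run that lists anything still out of sync
        DIFFS=$(rsync -a --delete --dry-run --itemize-changes "$SRC" "$DEST")

        if [ "$STATUS" -ne 0 ] || [ -n "$DIFFS" ]; then
            mail -s "Nightly mirror check FAILED" admin@example.com < "$LOG"
        fi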

    Read the article

  • How do I make a partition usable in windows 7 after power loss?

    - by user1306322
    A few days ago I was installing some software and the power went down. When I rebooted, the partition the software was being installed to was no longer accessible. Disk Management shows that it's there, but doesn't show its type or whether it's healthy, and gives me an error when I try to read its properties. The problem seems to be common after power loss; people recommend solving it by assigning a letter to the partition via the DiskPart utility, but the partition isn't listed in my case. I can access the partition with bootable OSes (like a bootable Ubuntu or WinXP) and all the files are there, but another installation of Windows 7 gives me the same results as the original. I could just copy all the data to another disk if there were enough space, but unfortunately the partition I'm having problems with is 1.1 TB. How do I regain access to the partition in my original Windows 7 installation without losing any data?
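    For reference, the commonly recommended DiskPart approach looks roughly like the sketch below (run from an elevated command prompt; the volume number and drive letter are placeholders), though as noted above it only helps when the volume actually shows up in the list:

        diskpart
        DISKPART> list volume
        DISKPART> select volume 3        (use the number of the affected partition)
        DISKPART> assign letter=E
        DISKPART> exit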

    Read the article

  • Which software to use for RAMDISK on Windows 2008?

    - by Tony_Henrich
    I am building a server machine with lots of RAM -- at least 16 GB. I am planning to put my frequently read and written data in RAM, so I am looking for software for creating RAM disks. This is for Windows Server 2008 R2 Standard, 64-bit. Any recommendations? I would like one where I can flush the disk image to persistent storage on demand, for example when Windows shuts down. (I am aware of all the consequences of data loss when power is lost.)

    Read the article

  • How to restore one contact from Address Book with Time Machine

    - by doekman
    I want to restore one contact in my Address Book with Time Machine. To do so, I select the contact in Address Book and then press the Time Machine icon in the dock, and my Address Book is "taken into space". However, when I browse back in time (either by pressing the back arrow or by selecting a time on the right), the contact details do not change -- and I am sure the data changed between those dates. Also, when I do press Restore, I still get the new data, not the backup. Is this a bug, or am I doing something wrong? I'm using OS X 10.6.3 in combination with an external USB drive on an iMac.

    Read the article

  • Best way to monitor host

    - by Axle
    I have just set up a host which receives messages of 300 to 1500 bytes (wrapped in STX/ETX) and replies with the same. It works fine, but it sometimes receives junk data. Is there any way to monitor this out-of-band data, just so we can make sure we are not receiving massive amounts of it? Also, is it possible to monitor whether connections time out -- where the host did not reply in time -- or long connections, where it takes the host 20 seconds to reply when it normally takes 5? I am aware of IP monitor, but I don't think it covers enough. Is there anything else, or any other way? Thanks in advance!

    Read the article

  • Setting up Gitosis, where to create the repos?

    - by ReynierPM
    I'm trying to set up Gitosis on CentOS 6.2 but have some doubts/problems with it. I read the docs here, here and here, but it's unclear to me how to configure where repositories are created. My server has a partition /data where I created a directory called gitrepos, and I want all the repos created under that directory. By default, if I run the command
      gitosis-init < /home/reynierpm/reynierpm.pub
    I get this:
      Initialized empty Git repository in /root/repositories/gitosis-admin.git/
      Reinitialized existing Git repository in /root/repositories/gitosis-admin.git/
    But I want the repos created under /data/gitrepos. Any help? Thanks in advance
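    gitosis creates repositories under the home directory of the user that runs gitosis-init (hence /root/repositories when run as root), so one hedged approach is to run it as a dedicated user whose home directory lives on /data; the user name and paths below are only illustrative:

        # create a dedicated user whose home directory is on the /data partition
        sudo useradd -r -m -d /data/gitrepos -s /bin/sh git
        # initialise gitosis as that user; repositories then land in /data/gitrepos/repositories
        sudo -H -u git gitosis-init < /home/reynierpm/reynierpm.pub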

    Read the article

  • How to set up a PC which can be booted from Linux AND Windows?

    - by Martin
    Our PC has been running Windows XP up to now. It has become incredibly slow, and I'm considering switching to Linux (Ubuntu?!) as a fresh OS. However, there are some applications we rarely use which only run on Windows, and I also want the possibility to easily go back to the old system if, while testing Linux, I find that anything is missing or not available. So the idea is to install Linux on a new (second) hard drive and use the existing Windows XP from a virtual machine (converted with Paragon Drive Backup) during the transition period. We have a lot of data on the PC, tens of GB of photos (managed by Picasa), ... My questions: What would be the best way to set up the new hard drive (partitions)? I assume that I cannot access the Linux data from Windows, but that I can access (read/write) Windows drives from Linux? Does anyone know good tutorials for this use case? What other things might I have to consider for the Windows-to-Linux transition?
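    On the read/write question: Linux can normally mount NTFS partitions read/write via the ntfs-3g driver, along the lines of the sketch below (the device name and mount point are placeholders):

        # mount the existing Windows NTFS partition read/write from Linux
        sudo mkdir -p /mnt/windows
        sudo mount -t ntfs-3g /dev/sda1 /mnt/windows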

    Read the article

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. we just saw one that took 0.000563 s!). As a result, our log files are growing at an insane pace; we just had to truncate a 180 GB slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but got the same result.
      show global variables like 'general_log%';
      +------------------+--------------------------+
      | Variable_name    | Value                    |
      +------------------+--------------------------+
      | general_log      | OFF                      |
      | general_log_file | /usr2/mysql/data/db4.log |
      +------------------+--------------------------+
      2 rows in set (0.00 sec)
      show global variables like 'slow_query_log%';
      +---------------------------------------+-------------------------------+
      | Variable_name                         | Value                         |
      +---------------------------------------+-------------------------------+
      | slow_query_log                        | ON                            |
      | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
      | slow_query_log_microseconds_timestamp | OFF                           |
      +---------------------------------------+-------------------------------+
      3 rows in set (0.00 sec)
      show global variables like 'long%';
      +-----------------+----------+
      | Variable_name   | Value    |
      +-----------------+----------+
      | long_query_time | 1.000000 |
      +-----------------+----------+
      1 row in set (0.00 sec)

    Read the article

  • Avoid cache overflow in Atempo LiveBackup

    - by Vebjorn Ljosa
    When attempting the initial backup of a new client, Atempo LiveBackup seems to require a very large cache; for instance, a 20 GB cache is not enough to back up a computer that has 100 GB of data. It appears that LiveBackup adds new files to the cache at a faster rate than it can send them to the server, and when the cache fills up, the backup fails. Aside from removing most of the data from the computer and then adding it back gradually after the initial backup, is there a good workaround? Is it possible to make LiveBackup slow down its scan so that it does not fill the cache? Or is it possible to place the cache on an external drive?

    Read the article

  • Suggestions? Password & Encrypted Read/Write File like a Mac (.dmg or .SparseBundle) also R/W on Windows, Ubuntu

    - by Jeff Drew
    For years I have used .dmg or .sparsebundle files (encrypted and password-protected) to safely keep home-directory backups on my Mac. Now I am looking for something similar: a full-permissions, read/write, encrypted and password-protected container file that is tri-platform. I'd like the future ability to use it on Mac OS X, Windows 7/8 and Ubuntu (current releases and later). I appreciate your recommendations. Thank you. (I like mounting a DMG, having a file directory structure that can be easily maintained and organized, and unmounting the file when done. I've seen Windows tools that claim to open encrypted DMG files, and I will explore those options, but since I want to keep the file accessible on all three OSes, someone might have additional suggestions.)

    Read the article
