Search Results

Search found 10077 results on 404 pages for 'techie db'.

Page 324/404 | < Previous Page | 320 321 322 323 324 325 326 327 328 329 330 331  | Next Page >

  • Need a recommendation for shared storage on auto-scaling ec2 w/ scalr

    - by john h.
    I have come across so many answers to this question that I am completely lost! I am moving our two sites to a load-balanced EC2 system with Scalr as our cloud manager. Now the question is coming up about persistent storage for users' uploaded content and other files. Could someone please give me a suggestion, and possibly a link to a tutorial, for the following setup and goals:

        - 2 websites (1 forum, 1 e-commerce)
        - 1 load balancer
        - 1 app server (to scale out to as many as needed)
        - 1 DB server (to scale out to as many as needed)

    Our sites will need to autoscale, and from what I am learning about Scalr, that means as new instances come up I need to run a script to set up the basics on each server (git, PHP modules, pull the site from git, move keys, etc.). What I don't understand is how I should handle user-uploaded content like profile pictures, avatars, product images, themes, and so on. Do I mount an EBS or s3fs folder to hold the websites (maybe /var/www/websitefolder), or do I do something like mounting just the avatar folders (/var/www/websitefolder/images/avatars)? I am not sure where to go with this. Could someone give me some detailed help? -John
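    One common pattern (a sketch only; the bucket name and mount point below are hypothetical) is to keep just the user-upload directories on shared storage and mount them from S3 in the per-instance boot script, for example with s3fs:

        # hypothetical bucket/paths; run from the Scalr boot script on each new instance
        s3fs my-uploads-bucket /var/www/websitefolder/images/avatars -o allow_other -o use_cache=/tmp

    The code tree itself can stay on the instance (pulled from git at boot), so only shared mutable data lives on the mounted bucket.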

    Read the article

  • problems installing mysql and phpmyadmin to localhost

    - by Joel
    Hi guys, I know there have been many similar questions, but as far as I can tell most of the other people have gotten further than I have... I'm trying to get a WAMP setup happening. I've got PHP and Apache running and talking to each other. PHP is in C:\PHP, Apache is in its default Program Files folder, and MySQL is in its default install location. My document root is D:\public_html\ and I'm able to navigate to localhost and see HTML and PHP files. But I have a simple MySQL test file:

        <?php
        // hostname or ip of server (for local testing, localhost should work)
        $dbServer = 'localhost';
        // username and password to log onto db server
        $dbUser = 'root';
        $dbPass = '';
        // name of database
        $dbName = 'test';

        $link = mysql_connect($dbServer, $dbUser, $dbPass) or die("Could not connect");
        print "Connected successfully<br>";
        mysql_select_db($dbName) or die("Could not select database");
        print "Database selected successfully<br>";
        // close connection
        mysql_close($link);
        ?>

    When I try to open this, I get "Could not connect". Now, I haven't even created a database yet, because I can't log into MySQL with phpMyAdmin, so I think I've done something wrong in my MySQL install and they aren't talking to each other. I guess my main question is: how do I first create a database in MySQL, to be sure I have even installed it correctly?
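    Before involving PHP or phpMyAdmin at all, the MySQL command-line client can confirm the server is installed and reachable (this assumes the default root account with an empty password, as in the script above):

        C:\> mysql -u root
        mysql> CREATE DATABASE test;
        mysql> SHOW DATABASES;

    If mysql -u root cannot connect, the problem is the MySQL service itself rather than anything on the PHP side.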

    Read the article

  • Can't install MySQL 5.1 on a Windows machine because the last install left artifacts.

    - by Zombies
    After uninstalling MySQL 5.1 (64-bit version) I cannot install the win32 version! Apparently the devs felt it necessary to leave helpful artifacts behind? I have rebooted my machine, but no effect. Running this:

        C:\Users\User1>net start mysql
        The MySQL service is starting.
        The MySQL service could not be started.
        A system error has occurred.
        System error 1067 has occurred.
        The process terminated unexpectedly.

    And ran this:

        C:\Program Files (x86)\MySQL\MySQL Server 5.1\bin>mysqld --console
        100213 10:52:58 [Note] Plugin 'FEDERATED' is disabled.
        InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485760 bytes
        InnoDB: than specified in the .cnf file 0 25165824 bytes!
        100213 10:52:59 [ERROR] Plugin 'InnoDB' init function returned error.
        100213 10:52:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        100213 10:52:59 [ERROR] Unknown/unsupported table type: INNODB
        100213 10:52:59 [ERROR] Aborting
        100213 10:52:59 [Note] mysqld: Shutdown complete

    Update: For some reason it looks like it is installing the 32-bit DB into the old 64-bit directory... will look into this... (the bin directory is going into the 32-bit Program Files directory).
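    The InnoDB error above means the old install left ib_logfile0/ib_logfile1 behind at a size that no longer matches the new .cnf. A common remedy (a sketch; the data directory path below is an assumption and varies by install) is to remove the stale log files while the service is stopped, so InnoDB recreates them at the configured size:

        C:\> net stop mysql
        C:\> del "C:\Program Files (x86)\MySQL\MySQL Server 5.1\data\ib_logfile0"
        C:\> del "C:\Program Files (x86)\MySQL\MySQL Server 5.1\data\ib_logfile1"
        C:\> net start mysql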

    Read the article

  • Most cost efficient way to backup Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes, and I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new changes to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I would have to upload ten gigs every time I instantiate a backup after doing a simple commit to Subversion. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has --volsize, which splits data over an amount of megs. Is it possible to split the Subversion dumps using this so further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a commit. However, I have the option of taking the repo offline between the hours of midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
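    For what it's worth, duplicity computes rdiff-style deltas against the previous backup, so incremental runs upload only the changed tail of the dump rather than the whole 10 GB; --volsize only controls how each archive is cut into upload volumes. A sketch along the lines of Option 1 (bucket and paths are hypothetical, and AWS credentials are assumed to be in the environment):

        # full backup once, then the same command uploads only deltas
        duplicity --volsize 250 /backups/svn-dumps s3+http://my-svn-backups/dump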

    Read the article

  • My DNS works! But, what is the simplest way to add something to it?

    - by Alex
    This is my current DNS example.com.db zone file. I followed a tutorial, and it works: when I point another server at this DNS via resolv.conf, "ping example.com" actually forwards me to the right IP.

        ;
        ; BIND data file for example.com
        ;
        $TTL    604800
        @       IN      SOA     example.com. info.example.com. (
                                2007011501      ; Serial
                                      7200      ; Refresh
                                       120      ; Retry
                                   2419200      ; Expire
                                    604800)     ; Default TTL
        ;
        @               IN      NS      ns1.example.com.
        @               IN      NS      ns2.example.com.
        example.com.    IN      MX      10      mail.example.com.
        example.com.    IN      A       192.168.254.1
        www             IN      CNAME   example.com.
        mail            IN      A       192.168.254.1
        ftp             IN      CNAME   example.com.
        example.com.    IN      TXT     "v=spf1 ip4:192.168.254.1 a mx ~all"
        mail            IN      TXT     "v=spf1 a -all"

    Right now, ping example.com goes to 192.168.254.1. That's great!!! It works! My question is: how can I add something to this file so that from my other servers:

        ping dbserver1     -> 44.245.66.222
        ping cacheserver1  -> 38.221.44.555

    I want to use it like a universal hosts file for my machines.
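    A sketch of the additions, using the IPs from the question (note these names will resolve as dbserver1.example.com etc., so the resolving machines need example.com in their search domain, and the SOA serial must be bumped so the change is picked up):

        ; after incrementing Serial, e.g. 2007011501 -> 2007011502
        dbserver1       IN      A       44.245.66.222
        cacheserver1    IN      A       38.221.44.555

    Then reload BIND (e.g. rndc reload, or restart the bind9 service).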

    Read the article

  • Should tripwire be entering /proc?

    - by dsadinoff
    When initializing the db with tripwire --init, it spat out a bunch of errors pertaining to /proc:

        ### Warning: File system error.
        ### Filename: /proc/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fd/4
        ### No such file or directory
        ### Continuing...
        ### Warning: File system error.
        ### Filename: /proc/16982/task/16982/fdinfo/4
        ### No such file or directory
        ### Continuing...
        ### Warning: Duplicate object encountered.
        ### /proc/sys/net/ipv6/neigh

    This feels like noise. The twpol.txt file has the following clause:

        #
        # Critical devices
        #
        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
          /dev    -> $(Device) ;
          /proc   -> $(Device) ;
        }

    Which, if I understand it right, is going to cause tripwire to care deeply about the entire contents of /proc. Shouldn't it just care about the static parts of /proc, like the drivers and such, and not the per-pid stuff? Why does it ship like this?
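    One way this rule is commonly narrowed (a sketch; pick whichever static entries matter on your system) is to replace the blanket /proc rule with specific, stable files:

        (
          rulename = "Devices & Kernel information",
          severity = $(SIG_HI),
        )
        {
          /dev              -> $(Device) ;
          /proc/devices     -> $(Device) ;
          /proc/interrupts  -> $(Device) ;
          /proc/cpuinfo     -> $(Device) ;
          /proc/modules     -> $(Device) ;
        }

    After editing twpol.txt, regenerate the policy and database (twadmin --create-polfile, then tripwire --init).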

    Read the article

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and need some DBA advice. We are starting to get performance problems with an MSSQL 2005 database. The visible effect of the incidents is mainly CPU hogging on the server, but operations reported that it was also draining resources from the SAN (not always). The main source of issues is for sure in some application, but I am wondering if we should partition some of the main tables anyway in order to relax the I/O pressure. The database is about 60 GB in one file. The main table (Order) has 2.1 million rows with 215 columns (but none is huge). We have an integer as PK, so it should be OK to define a partition function. Will we win something with partitioning? Will partitioned indexes buy us something? Here are some more facts about the DB and the table:

        database_name   database_size   unallocated space
        My_base         57173.06 MB     79.74 MB

        reserved        data            index_size      unused
        29 444 808 KB   26 577 320 KB   2 845 232 KB    22 256 KB

        name    rows        reserved        data            index_size      unused
        Order   2 097 626   4 403 832 KB    2 756 064 KB    1 646 080 KB    1688 KB

    Thanks for any advice. Dom
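    For reference, a minimal sketch of range partitioning on the integer PK in SQL Server 2005 (the boundary values here are arbitrary placeholders, not a recommendation):

        -- split Order into four ranges on its integer key
        CREATE PARTITION FUNCTION pf_OrderId (int)
        AS RANGE RIGHT FOR VALUES (525000, 1050000, 1575000);

        CREATE PARTITION SCHEME ps_OrderId
        AS PARTITION pf_OrderId ALL TO ([PRIMARY]);

    Note that partitioning mainly pays off when queries can eliminate partitions or when partitions sit on separate filegroups/spindles; with everything in one file on one volume, it may not relieve I/O pressure by itself.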

    Read the article

  • Exchange 2007 - Mailbox Database Recovery

    - by Phrontiste
    Exchange 2007 EDB: can we restore an Exchange EDB (First Storage Group\Mailbox Database.edb) to another Exchange server? Do I just copy the EDB to the new Exchange server, delete its First Storage Group\Mailbox Database.edb, and replace it with this one? How can I get all the mailboxes out of that (old) Mailbox Database.edb? I had an Exchange 2007 server with 10 mailboxes; I have installed Exchange on another machine and was wondering if I can do the above, or is there any way I can get all the mailboxes from that EDB (old mailbox database) and import them into the new one? I have deleted the old Exchange install I had (these are test machines). What are the steps required to get the DB working on the new machine? Also, I am confused about the Recovery Storage Group: I can mount a mailbox database in a Recovery Storage Group, but when I try to get a mailbox out of it, it won't match anything. Can someone please assist in understanding the RSG and how to restore the old mailbox database? Thanks and regards, Phrontiste
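    A rough outline of the usual sequence (a sketch, not authoritative guidance; the names are placeholders and the cmdlet parameters should be verified against your Exchange 2007 service pack): first check that the copied database is in a clean state and soft-recover it if not, then merge mailboxes out of the Recovery Storage Group with the Exchange Management Shell:

        eseutil /mh "Mailbox Database.edb"     # look for "State: Clean Shutdown"
        eseutil /r E00                         # soft recovery with the matching log files, if dirty
        Restore-Mailbox -Identity jsmith -RSGDatabase "RSG\Mailbox Database" -RSGMailbox "jsmith"

    The RSG matches mailboxes against existing users in the same Exchange organization, which is why mailboxes from a deleted or foreign install often "won't match anything"; in that case the database generally has to be mounted as a regular mailbox database (database portability) rather than through an RSG.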

    Read the article

  • Automatically organizing Windows files and folders

    - by Kiquenet
    For Windows only: organizing the eleventy-billion files you've got stuffed into folders on your hard drive is very "hard". For example, I have one folder on my computer that I save all web downloads to, regardless of file type, size, or purpose. Many of the files are only temporary downloads, for instance setup files of applications that I test, demonstration videos that I watch once, or documents that I want to read. Some files, on the other hand, are there to stay, and I used to move them out of the download folder manually in the past. Other files in folders on my computer: lots of source code, tests, programs, tools, and so on. I need a technology for organizing a billion files. What are the best tools for organizing and sorting your files and folders automatically?

        Digital Janitor             http://davidevitelaru.com/software/digital-janitor/
        Belvedere                   http://lifehacker.com/5510961/how-to-automatically-clean-and-organize-your-desktop-downloads-and-other-folders
        Download Mover              http://www.neoteo.com/download-mover-reorganiza-tus-descargas-14188
        File/Folder Date Organizer  http://seedling.dcmembers.com/other/ffdorg.zip
        DropIt                      http://www.lupopensuite.com/db/dropit.htm

    Other questions about organizing files, the desktop, etc.:

        How to automate the process of organizing audio files on Windows
        Organizing My Windows Desktop
        What's a good way for organizing PDF documents on Windows?
        Folksonomy tagging for files
        What is your method of "folksonomy" tagging for files on your local machine?

    Read the article

  • Certificates required for WHQL-certified drivers

    - by Kasius
    The 64-bit Windows 7 image that we deploy to machines at our site does not contain all of the certificates included on a default Windows image. Automatic root certificate installation is also disabled per policy from higher in the organization. We have had a lot of trouble installing many WHQL-certified drivers from reputable companies (e.g. HP, Lexmark, Dell), and I hypothesize that a required certificate is missing from one of the certificate stores on the machine. The error we typically get is:

        The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner.

    I know that it is signed. A .CAT file is included, and it has the following tree from top to bottom:

        Microsoft Root Authority (thumbprint a4 34 89 15 9a 52 0f 0d 93 d0 32 cc af 37 e7 fe 20 a8 b4 19)
        Microsoft Windows Hardware Compatibility PCA (thumbprint 93 b8 d8 82 0a 32 db 20 a5 ea b6 8d 86 ad 67 8e fa 14 ea 41)
        Microsoft Windows Hardware Compatibility Publisher (thumbprint b0 50 45 45 42 4e be 2c 16 2f 62 5b bf 5a e6 9b 96 bf 0b 0b)

    What certificates are required to install WHQL-certified drivers? Is it possibly something other than certificates? Thanks! NOTE: I have posted this question on TechNet as well, but honestly, I've never had a lot of luck posting questions on the TechNet forums.
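    If a root really is missing, one way to check for it and import it by hand (a sketch; the .cer filename is a placeholder for a certificate exported from a working machine) is certutil:

        certutil -store Root | findstr /i "Microsoft Root Authority"
        certutil -addstore Root MicrosoftRootAuthority.cer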

    Read the article

  • MongoDB data directory transfer and upgrade

    - by KPL
    I just transferred my data directory (of Mongo 1.6.5) to a new server and installed Mongo 2.0 on it. I set the data directory path and did sudo service mongod restart. It failed, and the log file output says this:

        ***** SERVER RESTARTED *****
        Sun Oct 9 07:51:47 [initandlisten] MongoDB starting : pid=8224 port=27017 dbpath=/database/mongodb 64-bit host=domU-12-31-39-09-35-81
        Sun Oct 9 07:51:47 [initandlisten] db version v2.0.0, pdfile version 4.5
        Sun Oct 9 07:51:47 [initandlisten] git version: 695c67dff0ffc361b8568a13366f027caa406222
        Sun Oct 9 07:51:47 [initandlisten] build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
        Sun Oct 9 07:51:47 [initandlisten] options: { auth: "true", config: "/etc/mongod.conf", dbpath: "/database/mongodb", fork: "true", logappend: "true", logpath: "/var/log/mongo/mongod.log", nojournal: "true" }
        Sun Oct 9 07:51:47 [initandlisten] couldn't open /database/mongodb/local.ns errno:1 Operation not permitted
        Sun Oct 9 07:51:47 [initandlisten] error couldn't open file /database/mongodb/local.ns terminating
        Sun Oct 9 07:51:47 dbexit:
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close listening sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to flush diaglog...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: going to close sockets...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: waiting for fs preallocator...
        Sun Oct 9 07:51:47 [initandlisten] shutdown: closing all files...
        Sun Oct 9 07:51:47 [initandlisten] closeAllFiles() finished
        Sun Oct 9 07:51:47 [initandlisten] shutdown: removing fs lock...
        Sun Oct 9 07:51:47 dbexit: really exiting now

    I have already run it with --upgrade once.
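    "errno:1 Operation not permitted" on local.ns suggests the copied files are not owned by the user the mongod service runs as. A sketch of the usual fix (the service account is often mongod or mongodb, depending on the package):

        ls -l /database/mongodb              # check current ownership of the transferred files
        sudo chown -R mongod:mongod /database/mongodb
        sudo service mongod restart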

    Read the article

  • Why does a document in Word 2007 stop recognizing the mouse after the document loses focus?

    - by alt234
    When I open a document in Word 2007, everything works fine: I can edit, highlight text, etc. However, the instant Word loses focus, when I focus back the document doesn't recognize anything the mouse does. The tabbed menu at the top seems to recognize the mouse, but the document itself does not. I can scroll via the scroll wheel and I can type; however, typing just shows up where the cursor last was before focus was taken away. I've tried clearing some Word data registry keys. I've also found that some Word add-ins can cause problems; LaserFiche is one I see mentioned a lot. As far as I can tell, I have no add-ins though. Any ideas? It's crazy-annoying.

    UPDATE:

        - Word is the only program that has this problem.
        - Typically I have Toad (Oracle DB management app), an XP virtual machine with various apps running on it, Skype, Google Talk, and maybe a handful of other programs open at any given time... Windows Media Player, Outlook.
        - Yes, this happens even if nothing else is running, from a fresh restart as well.
        - I'm running Vista 64 with SP1.
        - According to Windows Update, I have the latest of everything. This has been happening for a couple of months now; I just never took the time to look into it because I usually never have to use Word.

    Read the article

  • Odd log entries when starting up PostgreSQL

    - by Shadow
    When restarting PostgreSQL, I get the following log entries:

        2010-02-10 16:08:05 EST LOG:  received smart shutdown request
        2010-02-10 16:08:05 EST LOG:  autovacuum launcher shutting down
        2010-02-10 16:08:05 EST LOG:  shutting down
        2010-02-10 16:08:05 EST LOG:  database system is shut down
        2010-02-10 16:08:07 EST LOG:  database system was shut down at 2010-02-10 16:08:05 EST
        2010-02-10 16:08:07 EST LOG:  autovacuum launcher started
        2010-02-10 16:08:07 EST LOG:  database system is ready to accept connections
        2010-02-10 16:08:07 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:07 EST LOG:  incomplete startup packet
        2010-02-10 16:08:07 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:07 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:08 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:08 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:09 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:09 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:10 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:10 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:11 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:11 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:12 EST FATAL:  password authentication failed for user "postgres"
        2010-02-10 16:08:12 EST LOG:  connection received: host=[local]
        2010-02-10 16:08:12 EST LOG:  incomplete startup packet

    My question regarding a potential consequence of this is posted here: http://stackoverflow.com/questions/2238954/mdb2-says-connection-failed-db-logs-say-otherwise , but I didn't realize this was happening when I asked that question, and I figured this [part of the] problem is for SF. Edit: I can connect to the database and manipulate things normally with the psql CLI and the postgres user.

    Read the article

  • A few tables are still out of sync after running mk-table-sync

    - by smusumeche
    I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id >= 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id >= 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id >= 265041
        IRC  orders             26     0         1         order_id >= 694472
        IRC  orders_product     6      0         1         op_id >= 935375
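    When chasing this down one table at a time, it may help to checksum and sync a single table directly between the master and the problem slave (a sketch; the host DSNs are placeholders):

        mk-table-sync --execute --verbose --databases IRC --tables product \
            h=master.example.com,u=maatkit,p=xxxx h=slave2.example.com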

    Read the article

  • Servers / RAM for a social network - how many?

    - by Marty
    I am launching my social network soon and am looking into hosting. The question where I am lost is: do I need separate servers for web vs. database vs. image handling, since there is photo sharing? Or does one server handle it all? Also, is more RAM better? If I get 50 GB of RAM, is that better than having 8 GB? EDIT: It is PHP (CodeIgniter) and MySQL for now (I may switch to a NoSQL DB later if demand calls for it), and I will be using memcache as well. Concept-wise it is similar to Yelp: geographic-based, with lots of user content and image sharing, plus live feeds and privacy levels. The user plan is an open question; without testing the demand for this I can't give a number. But the concept is unique, with no one out there offering the set of features I am releasing, so it could grow. Ideally I want to plan for handling about 1-2 million views/month from launch. If it goes beyond that, then I will upgrade.

    Read the article

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific Stack Exchange website, but I thought that I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show! I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format     crypt(3) salt format
          -g            generate random password
          -h hash       password scheme
          -n            omit trailing newline
          -s secret     new password
          -u            generate RFC2307 values (default)
          -v            increase verbosity
          -T file       read file for new password
        Creating initial configuration... Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
        subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
        slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trolling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
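    The stray slappasswd usage message in the middle suggests the postinst script invoked slappasswd with bad arguments, leaving an empty olcRootPW value in the generated LDIF. A sketch of the usual recovery (purge the half-configured package and its state, then reinstall and set the admin password when prompted):

        sudo apt-get purge slapd
        sudo rm -rf /var/lib/ldap /etc/ldap/slapd.d
        sudo apt-get install slapd
        sudo dpkg-reconfigure slapd   # if you need to redo the initial configuration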

    Read the article

  • On Windows machines, what is the typical toolchain for remote maintenance?

    - by Hanno Fietz
    I need to deploy PHP and Python code and the appropriate environment (web server, DB server) to remote Windows systems, and I don't know what toolchain would be the equivalent of ssh, scp, bash and the like. So, basically, what I need to be able to do is the following:

        - access remote Windows with the appropriate privileges in a secure manner, like I routinely do with ssh (I don't even know whether that would be a text or graphic interface on Windows)
        - remotely install software: Apache or IIS, MySQL or Postgres, Python or PHP
        - copy files from remote (the application we're deploying)
        - remotely configure the machine to run regular tasks (e.g. checking for updates to the application)
        - automate tasks like downloading files from a designated place

    The main question is probably how I get onto the machine securely in the first place; the rest is general Windows admin knowledge, which is probably too broad a scope to fit into one question. I have years of experience maintaining Linux boxes and have used tools of varying sophistication on them, ranging from plain scp-ing of PHP files to deployment of Java application containers and even full VMs with Vagrant. On Windows, I'm a complete noob, and I don't even know where to start. I have installed Apache, MySQL, and PHP on a desktop machine maybe twice in my life, and that's about it. Bonus points for things that work from a Linux machine at my end, but I could run a VM and do everything from there.
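    The closest native equivalent to ssh here is probably WinRM with PowerShell remoting. A minimal sketch (the host name and credentials are placeholders, and WinRM must be enabled on the target first):

        winrm quickconfig                                               # run once on the target: enables the WinRM listener
        Enter-PSSession -ComputerName web01 -Credential DOMAIN\deploy   # interactive "ssh-like" session
        Copy-Item .\app.zip -Destination \\web01\c$\deploy\             # file copy over the admin share

    Scheduled tasks (schtasks.exe) cover the "regular tasks" requirement, and from a Linux workstation, winexe or an SSH server such as Cygwin's sshd are common fallbacks.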

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where large-ish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine if another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic approach for this situation?
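    While a backup is hung, these queries against the standard SQL 2008 DMVs (nothing here is specific to this server) can show whether the backup is blocked or stuck on physical I/O:

        SELECT session_id, command, percent_complete, wait_type, wait_time, blocking_session_id
        FROM sys.dm_exec_requests
        WHERE command LIKE 'BACKUP%';

        SELECT * FROM sys.dm_io_pending_io_requests;   -- long-pending physical I/O often points at the volume or driver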

    Read the article

  • overriding default scheduler for blkio requests in cgroups

    - by Aamir Mushtaq
    I am trying to optimize a set of servers that have to reside on a single machine, i.e. I can have multiple application servers, a DB server, and of course a Samba server as well in the same instance. I looked into the several optimization options available to me and, in my quest, did my tuning of the network stack. Coming to the CPU, memory, and blkio tweaks, I am using cgroups. The problem I am facing is that, for enhanced performance in the nature of the applications I am running, the CFQ scheduler implemented for the blkio subsystem is not optimal. I was looking for a deadline scheduler instead, because that would serve my purpose well. My question is whether it is possible to change the scheduler for blkio to deadline in the kernel compilation itself, and whether that will be reflected in my usage of cgroup hierarchies, since when running the cgconf service a new fs is mounted and I don't want it to revert to the CFQ scheduler. I also welcome any suggestions that would give me more control over my resources.
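    For what it's worth, the I/O scheduler is chosen per block device rather than per cgroup mount, so it can be switched without recompiling (a sketch; device names vary):

        # at runtime, per device
        echo deadline > /sys/block/sda/queue/scheduler
        cat /sys/block/sda/queue/scheduler        # shows e.g. "noop [deadline] cfq"

        # or for every boot, add to the kernel command line:
        # elevator=deadline

    One caveat worth checking for this use case: the blkio controller's proportional-weight policy (blkio.weight) is implemented by CFQ, so under deadline only the throttling knobs (blkio.throttle.*) remain usable.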

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However, as part of this application we are also constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it would still be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, and so on. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that:

        - uses a database (MySQL, SQLite, etc.) for configuration and data storage
        - exposes an API for adding/removing hosts and services

    Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
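    The sar approach from the last sentence is easy to sketch (the bucket name and paths are hypothetical, and this assumes the AWS CLI or a similar upload tool is on the instance image):

        # at boot: record all counters every 60s to a binary sar file named after the instance
        IID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
        sar -A -o /var/log/sa/perf.$IID 60 >/dev/null 2>&1 &

        # just before termination: archive it, keyed by instance ID for later lookup
        aws s3 cp /var/log/sa/perf.$IID s3://perf-archive/$IID.sa

    Keeping the raw binary files sidesteps RRD's downsampling entirely, and the instance-ID key lines up with the job records in the internal database.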

    Read the article

  • SQL 2008 R2 replication error: The process could not connect to Distributor

    - by Lance Lefebure
    I have two servers running SQL 2008 R2 Standard, each with an instance named "MAIN". I have a small test database on my primary server (one table, 13 rows) that I want to replicate to a second server as a proof of concept for some larger databases that I want to replicate. I set up the primary server to be a publisher and distributor, and set the database to do transactional replication. I copied the data to the second server via a backup/restore, not via a snapshot (which I'll have to do with the larger databases due to database size and limited bandwidth). I followed the instructions here: http://gnawgnu.blogspot.com/2009/11/sql-2008-transactional-replication-and.html Now on the subscriber, I go under Replication / Local Subscriptions / right-click / Properties on my subscription to the DB. The last synchronization shows a status of: "The process could not connect to Distributor 'PRIMARYSERVER\MAIN'." Data IS replicating from the primary to the secondary; any record I add on the primary shows up on the secondary server within seconds. Is the Distributor part of the snapshot system that I'm not using, or is it part of the transactional replication stuff? Thanks, Lance

    Read the article

  • Migrate Maildir between courier and dovecot servers

    - by DeaconDesperado
    I have several tarballs that make up all the previous emails for two or three accounts on a mail server. This machine will be shut down within a few weeks, so I need to migrate all the previously subscribed IMAP folders to the new server. The old machine ran Dovecot with Exim and delivered all mail to a virtual user folder on the server in Maildir format. The new machine uses Courier and Postfix, also configured to deliver via Maildir. The new server is already set up and all clients are successfully logging in; the problem is migrating their old conversations. I've tried moving the old message files directly and deleting the IMAP db that records which messages have already been fetched, but nothing has been successful. The Outlook clients present an error for every message saying that the "message can no longer be located on the server." Keeping the files chronologically sorted is not an objective; I just need to migrate the old conversations over. Is there a way to do this in a batch operation that will allow the clients to log in to the new server and treat these old messages as though they were new? What is the protocol for this kind of migration?
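    Because both servers speak IMAP, syncing at the IMAP level sidesteps the Maildir/index differences between Dovecot and Courier entirely. A sketch with imapsync (host names and credentials are placeholders; run once per account, with the old tarballs restored onto the old server first):

        imapsync --host1 old.example.com --user1 alice --password1 secret1 \
                 --host2 new.example.com --user2 alice --password2 secret2

    This recreates the folder tree and message flags on the new server, and the clients then see the messages as freshly delivered by the new IMAP server.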

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related. First, my Trash would not empty. It seems to be getting stuck on certain files, because I will reset my MacBook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names; I tried changing the names to single characters, but this did not help. Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related. EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

        6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
        6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
        6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
        6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
        6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
        6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
        6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
        6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
        6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
        6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
        6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19

    Read the article

  • How to transform a csv to combine matching rows?

    - by Christian Wolf
    I have a CSV file with some transaction data: let's say date, volume, price, and direction (sell/buy). Additionally, there is an ID for each transaction, and each closing transaction (the newer one) carries a reference to the corresponding opening transaction (classical database referencing). Now I want to do some statistics and draw some plots. This could be done via Octave, LaTeX/TikZ, Gnuplot, or whatever. To do this I need both buy and sell price in one row. My thought was to preprocess the CSV into another CSV containing the needed information and then do the statistics. In the end I'd like a solution based on scripts, not a spreadsheet, as the data might change often (it is exported from an online DB). My actual solution (see http://paste.ubuntu.com/6262822/ ) is a bash script that parses the CSV line by line and checks if there exists a corresponding transaction. If found, a new row is written to the destination CSV; if not, a warning is printed. The bad news: for each row in the source file I have to read the whole file a few times, which causes running times of 10 s for 300 lines. As the line count might rise soon (10k lines), this is not ideal. I am aware that the many subshells opened in the script might be causing the performance problems. Now my questions:

        - Is bash/awk/sed/... a good way to do things?
        - Should I first import all data into a "real" local database to use SQL?
        - Is there an easy way to achieve the desired results?
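    A single awk program reading the file twice avoids the per-row rescans and should run in roughly linear time (a sketch; it assumes the ID is in column 1 and the back-reference in column 6, which you would adjust to the real layout):

        # pass 1 (NR==FNR): remember each closing row under the ID it references
        # pass 2: for every row whose ID was referenced, print it joined with its closing row
        awk -F, 'NR==FNR { if ($6 != "") closing[$6] = $0; next }
                 ($1 in closing) { print $0 "," closing[$1] }' data.csv data.csv

    Rows without a match can be reported in an else branch or an END block, mirroring the warning behavior of the current script.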

    Read the article

  • How to set up port forwarding from my webserver (Apache) to my database server (MySQL)

    - by karman888
    Hello again guys, and thank you for your help so far. Here is my problem: I have two remote dedicated servers, one webserver that runs Apache, and one DB server that runs MySQL. The Apache server is visible on the internet, of course, but the second server is only visible to the Apache server, because they are connected via LAN. I need to connect to the remote MySQL server through the internet from my home PC, but only the Apache server is visible to my home PC. How can I set up port forwarding from my Apache server to the MySQL server, so I will be able to "see" the MySQL server from my home PC? This question is a follow-up to my first question, "Connect to remote mysql server from my application. Problem is that Mysql server is on LAN", in which you helped me a lot by telling me to do port forwarding. I looked over the internet and can't find a good how-to for port forwarding. I'm an experienced programmer but have little experience with hardware and networks. I can understand what must be done, though, so I just need a little help to sort things out :) I hope you can help me, guys. Thank you in advance. P.S. The machine Apache is running on is CentOS; the MySQL server is also CentOS. P.S.2: The webserver runs WebHost Manager; I don't know if that makes any difference or whether it can be done easily through that, I just mention it :)
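    An SSH tunnel through the Apache box is often the simplest form of this (a sketch; replace the LAN IP and user, and note that it forwards only your own session rather than exposing MySQL to the whole internet):

        # run on the home PC; afterwards point your MySQL client at 127.0.0.1:3306
        ssh -N -L 3306:192.168.0.10:3306 user@apache-server.example.com

    A permanent, system-wide alternative would be an iptables DNAT rule on the Apache server, but the tunnel is the safer first step.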

    Read the article
