Search Results

Search found 4704 results on 189 pages for 'refactoring databases'.

Page 116/189 | < Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >

  • Best practice for scaling a single application source to multiple nodes

    - by Andrew Waters
    I have an application which needs to scale horizontally to cover web and service nodes (at the moment they're all on one) but interact with the same set of databases and source files (both application code and custom assets). The database is no problem; it's already handled with replication in MongoDB. The configuration of the servers is also identical (100% Linux). This question is literally about sharing a filesystem between machines so that its content is always correct, regardless of the node accessing it. My two thoughts so far have been NFS and SAN - SAN being prohibitively expensive and NFS showing some performance issues on the second node with regard to glob()ing in PHP. Does anyone have recommended strategies or other techniques that don't involve sharding data across nodes, or any potential gotchas in NFS that may cause slow disk seek times? To give you an idea of the scale, the main node initialises its application modules in ~0.01 seconds; the secondary is taking ~2.2 seconds. They're VMs inside a local virtual network in ESXi and the ping time between them is ~0.3ms.
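    For what it's worth, two knobs that are often tried before giving up on NFS here - both sketched as assumptions, since the actual mount options and PHP configuration aren't shown: relax attribute-cache revalidation on the NFS mount, and cut down the repeated stat() calls PHP makes while resolving paths.

        # /etc/fstab on the secondary node (export and mountpoint are illustrative):
        # cache file attributes for 60s instead of revalidating them on every lookup
        filer:/srv/app  /srv/app  nfs  ro,noatime,actimeo=60,hard,intr  0 0

        ; php.ini - shrink the number of stat() round-trips over NFS
        realpath_cache_size = 4096k
        realpath_cache_ttl  = 600

    Neither directly speeds up glob() itself, so it's worth checking whether the extra 2 seconds is spent in directory listings or in including module files.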

    Read the article

  • What would you write in a constitution (law book) for a programmers' country?

    - by Developer Art
    After we have got our great place to talk about professional matters and socialize online - on SO, I believe the next logical step would be to found our own country! I invite you all to participate and bring together your available resources. We will buy an island, better even a group of islands, so that we could establish states like .NET territories, Java land, Linux republic, etc. We will build a society of programmers (girls - we need you too). On the organizational side, we're going to need some constitution or a law book. I suggest we write it together. I make it a wiki as it should be a cooperative effort. I'll open the work. Section 1. Programmers' rights. Every citizen has a right to an Internet connection 24/7. Every citizen can freely choose the field of interest. Section 2. Programmers' obligations. Every citizen must embrace the changing nature of the profession and constantly educate himself. Section 3. Law enforcement. Code duplication, when it can be avoided, is punished by limiting the bandwidth to 64Kbit for a period of one week. Using ugly hacks instead of refactoring code is punished by cutting the Internet connection for a period of one month. Usage of technologies older than 5/10 years is punished by restricting web access to sites last updated 5/10 years ago for a period of one month. Please feel free to modify and extend the list. We'll need to have it ready before we proceed formally with the country foundation. A purchase fund will be established shortly. Everyone is invited to participate.

    Read the article

  • How do I refactor code into a subroutine but allow for early exit?

    - by deworde
    There's a really obvious refactoring opportunity in this (working) code:

        bool Translations::compatibleNICodes(const Rule& rule,
                                             const std::vector<std::string>& nicodes)
        {
            bool included = false;
            // Loop through the ni codes.
            for(std::vector<std::string>::const_iterator iter = nicodes.begin();
                iter != nicodes.end(); ++iter)
            {
                // Match against the ni codes of the rule
                if(rule.get_ni1() == *iter)
                {
                    // If there's a match, check if the rule is include or exclude
                    const std::string flag = rule.get_op1();
                    // If include, code is included unless a later rule excludes it
                    if(flag == "INCLUDE"){ included = true; }
                    // If exclude, code is specifically excluded
                    else if(flag == "EXCLUDE"){ return false; }
                }
                if(rule.get_ni2() == *iter)
                {
                    const std::string flag = rule.get_op2();
                    if(flag == "INCLUDE"){ included = true; }
                    else if(flag == "EXCLUDE"){ return false; }
                }
                if(rule.get_ni3() == *iter)
                {
                    const std::string flag = rule.get_op3();
                    if(flag == "INCLUDE"){ included = true; }
                    else if(flag == "EXCLUDE"){ return false; }
                }
                if(rule.get_ni4() == *iter)
                {
                    const std::string flag = rule.get_op4();
                    if(flag == "INCLUDE"){ included = true; }
                    else if(flag == "EXCLUDE"){ return false; }
                }
                if(rule.get_ni5() == *iter)
                {
                    const std::string flag = rule.get_op5();
                    if(flag == "INCLUDE"){ included = true; }
                    else if(flag == "EXCLUDE"){ return false; }
                }
            }
            return included;
        }

    The problem is that I can't get around the fact that I want to exit early if it's an exclude statement. Note that I can't change the structure of the Rule class. Any advice?
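    One possible shape for the refactor, sketched on the assumption that the Rule getters behave exactly as shown above: have a small helper report a three-valued result for each (ni, op) pair, so the caller keeps its early exit on EXCLUDE.

        enum MatchResult { NO_MATCH, INCLUDED, EXCLUDED };

        // Sketch only: assumes rule.get_niN()/get_opN() return std::string-compatible values.
        static MatchResult checkPair(const std::string& ni, const std::string& op,
                                     const std::string& code)
        {
            if(ni != code)      return NO_MATCH;
            if(op == "EXCLUDE") return EXCLUDED;
            if(op == "INCLUDE") return INCLUDED;
            return NO_MATCH;
        }

        bool Translations::compatibleNICodes(const Rule& rule,
                                             const std::vector<std::string>& nicodes)
        {
            bool included = false;
            for(std::vector<std::string>::const_iterator iter = nicodes.begin();
                iter != nicodes.end(); ++iter)
            {
                const MatchResult results[5] = {
                    checkPair(rule.get_ni1(), rule.get_op1(), *iter),
                    checkPair(rule.get_ni2(), rule.get_op2(), *iter),
                    checkPair(rule.get_ni3(), rule.get_op3(), *iter),
                    checkPair(rule.get_ni4(), rule.get_op4(), *iter),
                    checkPair(rule.get_ni5(), rule.get_op5(), *iter)
                };
                for(int i = 0; i < 5; ++i)
                {
                    if(results[i] == EXCLUDED) return false;   // early exit preserved
                    if(results[i] == INCLUDED) included = true;
                }
            }
            return included;
        }

    The observable result is the same: an EXCLUDE match still wins immediately, and INCLUDE only sets the flag.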

    Read the article

  • What are Code Smells? What is the best way to correct them?

    - by Rob Cooper
    OK, so I know what a code smell is, and the Wikipedia article is pretty clear in its definition: In computer programming, code smell is any symptom in the source code of a computer program that indicates something may be wrong. It generally indicates that the code should be refactored or the overall design should be reexamined. The term appears to have been coined by Kent Beck on WardsWiki. Usage of the term increased after it was featured in Refactoring: Improving the Design of Existing Code. I know it also provides a list of common code smells. But I thought it would be great if we could get a clear list of not only what code smells there are, but also how to correct them. Some rules: now, this is going to be a little subjective in that there are differences between languages, programming styles, etc., so let's lay down some ground rules: ** ONE SMELL PER ANSWER PLEASE! & ADVISE ON HOW TO CORRECT! ** See this answer for a good display of what this thread should be! DO NOT downmod if a smell doesn't apply to your language or development methodology - we are all different. DO NOT just quickly smash in as many as you can think of - think about the smells you want to list and get a good idea down on how to work around them. DO downmod answers that just look rushed, for example "dupe code - remove dupe code". Let's make it useful (e.g. Duplicate Code - refactor into separate methods or even classes, use these links for help on these common.. etc. etc.). DO upmod answers that you would add yourself. If you wish to expand, then answer with your thoughts linking to the original answer (if it's detailed), or comment if it's a minor point. DO format your answers! Help others to be able to read them; use code snippets, headings and markup to make key points stand out!

    Read the article

  • Mirroring MySQL server with different configuration

    - by HTF
    I have to migrate a MySQL server to a different data centre, so I would like to create another MySQL slave server in the new DC and then promote it to a master later on. I previously used LVM snapshots and Percona XtraBackup for this purpose, but this time I've optimized the MySQL configuration file in a way that prevents me from using these methods. Old server (backup): innodb_log_file_size = 256M, innodb_log_files_in_group = 3. New server (restore): innodb_log_file_size = 512M, innodb_log_files_in_group = 2. The XtraBackup script and LVM snapshots copy the whole directory structure, so the MySQL server won't start because the InnoDB logs have a different size. Is there any solution to avoid downtime in this case? I can't use mysqldumps, as there are around 8000 databases, so I would have to take the server down for a couple of hours. I was also thinking of using the old settings with XtraBackup and then changing them once the new server is promoted to a master - less downtime, but I'm not sure if this will work? Thank you. Regards
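    The second idea is usually workable; a sketch of the redo-log resize on the new slave, assuming a typical Linux install (paths and init script names may differ) - InnoDB recreates its log files after a clean shutdown, so the change only costs one short, controlled restart once the slave has caught up:

        # still running with the old settings (256M x 3), force a full clean shutdown
        mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
        /etc/init.d/mysql stop

        # move the old redo logs out of the way; InnoDB rebuilds them on startup
        mkdir -p /root/old-redo-logs && mv /var/lib/mysql/ib_logfile* /root/old-redo-logs/

        # switch my.cnf to the new values, then start again:
        #   innodb_log_file_size      = 512M
        #   innodb_log_files_in_group = 2
        /etc/init.d/mysql start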

    Read the article

  • Should I go for Arrays or Objects in PHP in a CouchDB/Ajax app?

    - by karlthorwald
    I find myself converting between array and object all the time in a PHP application that uses CouchDB and Ajax. Of course I am also converting objects to JSON and back (sometimes for CouchDB but mostly for Ajax), but this is not disturbing my workflow as much. At present I have PHP objects that are returned by the CouchDB modules I use, and on the other hand I have the old habit of returning arrays like array("error" => "not found", "data" => $dataObj) from my functions. This leads to a mixed occurrence of real PHP objects and nested arrays, and I cast with (object) or (array) if necessary. The worst thing is that I know more or less by heart what a function returns, but not what type (array or object), so I often run into type errors. My plan is now to always cast arrays to objects before returning from a function. Of course this implies a lot of refactoring. Is this the right way to go? What about the conversion overhead? Other ideas or tips? Edit: Kenaniah's answer suggests I should go the other way, which would mean I'd cast everything to arrays. And for all the Ajax / JSON stuff and also for CouchDB I would use $myarray = json_decode($json_data, $assoc = true); //EDIT: changed to true, which is what I really meant. Even more work to change all the CouchDB and Ajax functions, but in the end I'd have better code.
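    A minimal sketch of that arrays-everywhere convention (the function and variable names are made up for illustration):

        <?php
        // Decode everything coming from CouchDB or the Ajax layer straight into nested arrays.
        $doc = json_decode($json_data, true);   // true => associative arrays, not stdClass

        // Keep function return values in the same shape.
        function find_document($id, array $docs) {
            if (!isset($docs[$id])) {
                return array("error" => "not found", "data" => null);
            }
            return array("error" => null, "data" => $docs[$id]);
        }

        // Encoding back to JSON for the Ajax response works on arrays unchanged.
        echo json_encode(find_document("42", $doc));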

    Read the article

  • Cannot Login To phpMyAdmin

    - by Zach Dziura
    I'm running a simple LAMP server at home from which I host a personal blog. The server is running Arch Linux, with the latest-and-greatest versions of Apache, MySQL, and PHP. In order to easily maintain the databases, I installed phpMyAdmin. However, I cannot log in. If I SSH into the server and run mysql -u <user> -p <password>, no errors show up and I'm immediately placed into the MySQL prompt. No problem. However, when I try to log in with phpMyAdmin, using those exact same credentials, nothing happens. No errors, no nothing, I'm just redirected back to the login page. Did I do something wrong? Thanks in advance for any and all answers!
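    Being silently bounced back to the login page usually points at phpMyAdmin's own configuration or PHP session handling rather than the MySQL credentials. A hedged example of the config.inc.php entries worth checking (the values shown are illustrative):

        <?php
        /* config.inc.php */
        $cfg['blowfish_secret'] = 'some-random-32-character-secret';   // required for cookie auth
        $i = 1;
        $cfg['Servers'][$i]['auth_type'] = 'cookie';
        $cfg['Servers'][$i]['host']      = '127.0.0.1';   // connect over TCP rather than the unix socket
        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    If the cookie secret is missing, or PHP's session.save_path isn't writable by the web server user, the silent login loop looks exactly like this.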

    Read the article

  • sa account gets locked out frequently

    - by bala
    I have SQL Server 2008 installed. There are about 6 databases created and running on this server. I often get a message like "SA account is locked out". Is there any specific reason for this account being locked out? Is there any other place I need to check for the reasons (I checked the Event Viewer but could not get much information)? Edit: I found this information in the Event Viewer. Is this the cause? SQL Server failed to communicate with filter daemon launch service (Windows error: The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.). Full-Text filter daemon process failed to start. Full-text search functionality will not be available.
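    The full-text message is most likely a separate issue; a lockout normally comes from the login's Windows password policy plus repeated failed login attempts (often an application, a scheduled job or an attacker hammering sa). A sketch of checking and unlocking the account - the new password below is a placeholder:

        -- Is the sa login currently locked out, and when did bad-password attempts last happen?
        SELECT LOGINPROPERTY('sa', 'IsLocked')          AS is_locked,
               LOGINPROPERTY('sa', 'BadPasswordCount')  AS bad_password_count,
               LOGINPROPERTY('sa', 'BadPasswordTime')   AS last_bad_password;

        -- Unlock it (a password must be supplied when using UNLOCK) ...
        ALTER LOGIN sa WITH PASSWORD = '<new-strong-password>' UNLOCK;

        -- ... or stop the Windows password policy from locking it again.
        ALTER LOGIN sa WITH CHECK_POLICY = OFF;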

    Read the article

  • How to Script a backup for each database on an MSSQL Engine?

    - by Geo
    We need to back up 40 databases inside an MS SQL Server engine. We back up each database with the following script:

        BACKUP DATABASE [dbname1] TO DISK = N'J:\SQLBACKUPS\dbname1.bak'
            WITH NOFORMAT, INIT, NAME = N'dbname1-Full Database Backup',
            SKIP, NOREWIND, NOUNLOAD, STATS = 10
        GO
        declare @backupSetId as int
        select @backupSetId = position from msdb..backupset
            where database_name=N'dbname1'
              and backup_set_id=(select max(backup_set_id) from msdb..backupset
                                 where database_name=N'dbname1')
        if @backupSetId is null
        begin
            raiserror(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
        end
        RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak'
            WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
        GO

    We would like to add to the script the functionality of taking each database name and substituting it into the script above. Basically, a script that will create and verify a backup of each database on an engine. I am looking for something like this:

        For each database in database-list
            sp_backup(database)   -- this is the call to the script above
        End For

    Any ideas?
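    A sketch of that loop in plain T-SQL, using a cursor over sys.databases so every database gets the same backup-and-verify treatment (the backup folder is the one from the script above; adjust the exclusion list to taste):

        DECLARE @name sysname, @path nvarchar(400), @label nvarchar(400);
        DECLARE db_cursor CURSOR FOR
            SELECT name FROM sys.databases
            WHERE name NOT IN (N'tempdb')   -- add N'master', N'model', N'msdb' if system dbs are handled elsewhere
              AND state_desc = N'ONLINE';
        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @path  = N'J:\SQLBACKUPS\' + @name + N'.bak';
            SET @label = @name + N'-Full Database Backup';
            BACKUP DATABASE @name TO DISK = @path
                WITH NOFORMAT, INIT, NAME = @label, SKIP, NOREWIND, NOUNLOAD, STATS = 10;
            RESTORE VERIFYONLY FROM DISK = @path WITH NOUNLOAD, NOREWIND;
            FETCH NEXT FROM db_cursor INTO @name;
        END
        CLOSE db_cursor;
        DEALLOCATE db_cursor;

    Wrapping the body in a stored procedure (the sp_backup from the pseudocode) and scheduling it from SQL Server Agent keeps it close to the original structure.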

    Read the article

  • Login authentication vanished from MongoDB install

    - by Robert Oschler
    A few months ago I enabled password protection on my MongoDB install. Today I ran the Mongo client and forgot to use my login details. Instead of rejecting nearly everything I tried to do from the shell, like it should, it gave me complete access to all the databases and collections. Fortunately this instance is only running a few test apps, so I quickly shut down the mongod instance until I figure this out. Has anybody ever seen this kind of behavior before and know what is going on? The mongod instance is running on a Linux VM hosted by Azure. The only thing I can think of is that perhaps Azure restored an old copy of the VM, but I received no e-mails to that effect and everything else on the server seems to be in order, including new daemon processes that I added after I enabled password protection on mongod.
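    One thing worth ruling out first (an assumption, since the startup options aren't shown): in MongoDB of this vintage, the users you add only take effect while the daemon is started with authentication enabled, so a config file or init script that lost that flag reproduces exactly this symptom.

        # /etc/mongodb.conf - authentication has to be switched on explicitly
        auth = true

        # or, if the daemon is started by hand
        mongod --auth --dbpath /var/lib/mongodb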

    Read the article

  • Using sed to convert hex characters in postgresql dump file

    - by Bernt
    I am working on moving several databases from a PostgreSQL 8.3 server to a PostgreSQL 8.4 server. It has worked fine so far, but one database has given me some trouble. The database is listed as unicode-encoded on the 8.3 server, but somehow a client program has managed to inject some invalid unicode data into it. When I do a normal dump and restore using Postgres' custom format, the new server won't accept it, complaining about unicode errors. My plan is to do a plain text dump of the database, then use sed to replace the invalid characters with nothing (they are not needed). But how do you make sed work on hex/binary values in a file?
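    Two common ways of doing this, sketched with placeholder byte values (substitute whatever invalid sequences the restore actually complains about):

        # GNU sed understands \xNN escapes; LC_ALL=C makes it treat the dump as raw bytes
        LC_ALL=C sed 's/\xc3\x28//g' dump.sql > dump.clean.sql

        # Alternatively, let iconv silently drop everything that is not valid UTF-8
        iconv -f UTF-8 -t UTF-8 -c dump.sql > dump.clean.sql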

    Read the article

  • Why is my DB read-only when attached to SQL Express, but not with SQL Web?

    - by David Rubin
    I have an .mdf/.ldf pair, originally created in 2008 R2 Standard and well under 10GB, with these ACLs:

        d:\db snapshot\DB_NAME.mdf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F
        d:\db snapshot\DB_NAME_log.ldf
            SERVERNAME\SQLServerMSSQLUser$ACCOUNT$MSSQLSERVER:F
            OWNER RIGHTS:F
            BUILTIN\Administrators:F

    When I attach the database to an instance of SQL Express 2008 R2, it comes up as read-only. When exactly the same ACLs, user accounts and SQLCMD statements are set up with SQL Web 2008 R2, it comes up writable. I looked at MSDN's comparison page but nothing jumped out at me. Why on earth is this happening? Thanks! UPDATE: I just noticed that the names of the attached databases are different. On SQL Express (read-only) it matches the filename (e.g. DB_NAME); on SQL Web (writable) it matches the CUSTOM_NAME that I gave it in the attach command:

        CREATE DATABASE [CUSTOM_NAME]
            ON (FILENAME = 'PATH_TO_MDF'), (FILENAME = 'PATH_TO_LDF')
            FOR ATTACH
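    For reference, once the underlying cause is found (usually the service account's effective NTFS rights on the files), clearing the flag is a one-liner; a sketch using the database name from the attach command above:

        -- check the current state
        SELECT name, is_read_only, user_access_desc
        FROM sys.databases WHERE name = N'CUSTOM_NAME';

        -- clear the read-only flag
        ALTER DATABASE [CUSTOM_NAME] SET READ_WRITE WITH NO_WAIT;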

    Read the article

  • Looking for a good Web Server that is cheap

    - by SoLoGHoST
    I am a Project Manager, and former Lead Developer, for a software portal system that requires forum software to run. I am in need of a host that is cheap, reliable, and supports the latest PHP (5.2+), MySQL, unlimited e-mails (preferably), cPanel, and at least 3 sub-domains. Currently I am paying $34.95 USD/month (approx. $420 USD/year). This is too high for me to pay to keep the site running. I just recently became Project Manager, and being in charge of finances, I'm extremely concerned for the future of Dream Portal. With those prices I'm not sure I'll be able to keep it running for too long. Can someone please tell me about a good host that meets all of the requirements listed above and is cheaper on a yearly basis? Note: currently on a dedicated server with limited disk space at 15000 MB (15 GB), monthly bandwidth = 500000 MB, a 50 email limit, a 20 sub-domain limit, 30 FTP accounts, and 25 SQL databases.

    Read the article

  • Backup and restore MySQL database without system access

    - by Sencerd
    Hi guys, I am trying to move a database from one provider to another. The problem is that I don't have system access at either end (i.e., no SSH), so I cannot use a mysqldump. I have already tried using MySQL Administrator; the backup took about 45 minutes, but when it came to restoring it was moving at a snail's pace, estimating 12+ hours. This is a live app, so I need to keep the downtime to an absolute minimum. The database consists of 35 tables, a mixture of MyISAM and InnoDB; the whole thing comes to about 4.4GB. The source and destination databases are both running on very powerful servers. Any suggestions on a quick way of doing this will be gratefully received. Thanks
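    If both providers allow remote TCP connections to MySQL (an assumption - many shared hosts only require the client IP to be whitelisted), mysqldump doesn't actually need shell access on either server: it can run on any third machine and stream straight into the destination. Hostnames and credentials below are placeholders:

        # run from a workstation or a rented VM with good bandwidth to both providers
        mysqldump -h old-provider.example.com -u olduser -pOLDPASS \
            --single-transaction --routines --triggers mydatabase \
          | mysql -h new-provider.example.com -u newuser -pNEWPASS mydatabase

    --single-transaction keeps the InnoDB tables consistent without a global lock; the MyISAM tables are still read live, so the transfer is best run during a quiet period.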

    Read the article

  • Nginx proxy to Apache - resolve HTTP ORIGIN

    - by Fratyr
    I have a server setup with nginx serving static content and proxying all PHP/dynamic requests to Apache on 127.0.0.1. I'm building an API for my databases, and I need to allow clients by their origin (domain name) rather than just by IP, based on CORS rules. So when I send an HTTP header header("Access-Control-Allow-Origin: www.client-requesting.myapi.com"); from my API server, I have to tell it which origin I allow, otherwise client-side requests to my API won't work due to the same-origin policy. The question is: how can I know which domain name (if any) called my API? What should the nginx and Apache configuration be to pass the origin parameter? I tried to google, and all I found is a possible solution with mod_rpaf, but I wanted to be sure. Thanks!
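    A sketch of one way to handle it, assuming a standard proxy_pass setup (the upstream port and location are illustrative). Browsers only send an Origin header on cross-site XHR requests, so it can legitimately be absent:

        # nginx: pass the client's Origin header (plus the usual proxy headers) through to Apache
        location /api/ {
            proxy_pass         http://127.0.0.1:8080;
            proxy_set_header   Host       $host;
            proxy_set_header   X-Real-IP  $remote_addr;
            proxy_set_header   Origin     $http_origin;
        }

    On the PHP side the value then arrives as $_SERVER['HTTP_ORIGIN'], which can be compared against a whitelist before being echoed back in Access-Control-Allow-Origin; mod_rpaf only deals with restoring the client IP, not request headers like Origin.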

    Read the article

  • General High-Level Assessment

    - by tcarper
    Guys and Gals, I've been tasked with a doozy of an assignment. The objective is something akin to "laying of hands" on several database servers which work in concert to provide data to various Web, Client-Server and Tablet-Sync'd distributed Client-Server programs. More specifically, I've been asked to come up with a "Maintenance Plan" which includes recommendations for future work to improve these machines' performance/reliability/security/etc. Might there be some good articles on teh interwebs ya'll could point me towards which would give me some good basis to start? Articles describing "These are the top 4 overarching categories and this is how you should proceed when drilling down on each of them" sort-of-thing would be fabulous. The Databases are all SQL 2005, however the compatibility level is 80 and they were originally created with ERwin based on SQL 6.5. The OSs are all Windows Server 2003. Thanks all! Tim

    Read the article

  • MySQL Locking Up

    - by Ian
    I've got an InnoDB table that gets a lot of reads and almost no writes (roughly 1 write for every 400,000 reads). I'm running into a pretty big problem, though, when I do an INSERT into the table. MySQL completely locks up. It uses 100% CPU, and every single other table (even in other databases) has its status set to "Locked" until the INSERT is done. This is a big problem because MySQL stays locked up for up to 4 minutes. I'm using version 5.1.47 (rpm from mysql.com). Any ideas?
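    A diagnostic sketch worth running from a second connection while the stall is happening; the query-cache check is included because a large query cache being invalidated by a rare write to a heavily-read table is a classic cause of this exact pattern (a hypothesis, not a diagnosis):

        SHOW FULL PROCESSLIST;
        SHOW ENGINE INNODB STATUS\G

        -- how big is the query cache, and how heavily is it used?
        SHOW VARIABLES LIKE 'query_cache%';
        SHOW GLOBAL STATUS LIKE 'Qcache%';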

    Read the article

  • MySQL Server Not Starting on Boot

    - by Brian
    I have installed MySQL on a RHEL 5 server and I want to set it up so that the server starts on boot. I've run the "chkconfig --list mysqld" command and it's currently on for levels 3, 4 and 5. However, when I reboot the server, no mysqld process is started. I've also tried manually starting the server by executing "/usr/bin/mysqld_safe", and I get the following output:

        Starting mysqld daemon with databases from /var/lib/mysql
        STOPPING server from pid file /var/run/mysqld/mysqld.pid
        100319 10:31:30  mysqld ended

    I looked in /var/log/mysqld.log and I found the following:

        100319 10:29:01  mysqld started
        100319 10:29:02  InnoDB: Started; log sequence number 0 29752204
        100319 10:29:02 [ERROR] Can't start server : Bind on unix socket: Permission denied
        100319 10:29:02 [ERROR] Do you already have another mysqld server running on socket: /var/lib/mysql/mysql.sock ?
        100319 10:29:02 [ERROR] Aborting
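    The "Bind on unix socket: Permission denied" error usually means the mysql user can't write the socket into the data directory, rather than that a second server is actually running; a sketch of what to check, using the paths from the log above:

        # who owns the data directory, and is there a stale socket left behind?
        ls -ld /var/lib/mysql
        ls -l /var/lib/mysql/mysql.sock

        # if ownership is wrong, fix it, clear the stale socket and start again
        chown -R mysql:mysql /var/lib/mysql
        rm -f /var/lib/mysql/mysql.sock
        service mysqld start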

    Read the article

  • How to take mysql replication backup

    - by user53864
    I have a MySQL master-master replication setup with a slave for each master (only one master is used for reads/writes at a time) on Ubuntu Server. I'm wondering what would be the best way to schedule backups of the replicated databases with mysqldump. I have the following questions, because of which I could not proceed further: Is scheduling mysqldump backups on the masters safe for replication? Is connecting to the masters with GUI applications (Workbench) for database manipulation (reads, writes by developers) safe? Any inputs are welcome.
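    A hedged sketch of the usual arrangement - take the dump from a slave, so any locking or I/O load never touches the active master (the paths and schedule are illustrative):

        # /etc/cron.d/mysql-backup on one of the slaves
        # --single-transaction gives a consistent snapshot of InnoDB tables without global locks
        30 2 * * * root mysqldump --all-databases --single-transaction --routines --events | gzip > /backup/mysql-$(date +\%F).sql.gz

    mysqldump on a master is not unsafe for replication as such (it only reads), but on non-InnoDB tables it takes read locks that can stall writers, which is why a slave is the more comfortable place to run it.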

    Read the article

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases, and copies them to Amazon S3). However, twice now I've noticed that backups stopped happening. Both times, the backup script (backupscript.sh) located in my home folder was no longer there. No one else has access to this server, so nothing was manually changed on the server and no one deleted the file by mistake. The cron job (nano /etc/crontab) still references this script, but the script itself disappears. What could cause this to happen? Does Ubuntu delete the script if it runs into some sort of error?

    Read the article

  • Equivalent of phpMyAdmin for MSSQL?

    - by Tedd Hansen
    Is there any web interface for administering MSSQL similar to phpMyAdmin (for MySQL)? I want a self-service setup where developers can create a database through a web interface and upload/download backups of the database without local access. I've considered phpMSAdmin, but it hasn't had a release since 2006, so I'm not sure it's worth the effort of setting it up. If there is something else (free or not-so-free), that would be great. My question is similar to this one posted 2 years ago, but no good web interface was found back then. SQL Web Data Administrator seems interesting, but it lacks a few features - most notably creating new databases (also, not updated since 2007).

    Read the article

  • How many users are sufficient to make a heavy load for web application

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server which has an 8-core Intel CPU and 4GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application drastically decreases its performance, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down with this number of users? EDIT: I forgot to mention that I did some kind of profiling. I ran the commands top and htop, and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce the load. The administrators said that there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do deep profiling analysis/code refactoring, so I'm considering 2 options: my tables are MyISAM - I heard they use table-level locking, which is very slow; is that right? Could I change them to InnoDB without worry? And what if I take MySQL and move it to a dedicated machine with a lot of RAM?
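    On the first option, a sketch of the MyISAM-to-InnoDB check and conversion (the schema name and tables are placeholders; Drupal generally doesn't care which engine is underneath, and converting the biggest write-heavy tables first usually gives most of the benefit):

        -- which tables are still MyISAM, biggest first?
        SELECT table_name, engine,
               ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
        FROM information_schema.tables
        WHERE table_schema = 'drupal' AND engine = 'MyISAM'
        ORDER BY size_mb DESC;

        -- convert a table in place (the table is locked while it rebuilds)
        ALTER TABLE accesslog ENGINE = InnoDB;
        ALTER TABLE watchdog  ENGINE = InnoDB;

    Once most tables are InnoDB, innodb_buffer_pool_size has to be raised from its small default, otherwise the memory pressure simply moves elsewhere.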

    Read the article

  • MyController class must produce a class according to the enum type.

    - by programmerist
    GenoTipController must produce a class according to the enum type. I have 3 classes: _Company, _Muayene, _Radyoloji. I also have a CompanyView class with a GetPersonel method. If you look at GenoTipController, my code needs refactoring: a class must be produced according to the enum type - for example, case DataModelType.Radyoloji must return radyoloji = new Radyoloji(). Does everything have to be one switch/case per type?

        public class GenoTipController
        {
            public _Company GenerateCompany(DataModelType modeltype)
            {
                _Company company = null;
                switch (modeltype)
                {
                    case DataModelType.Radyoloji: break;
                    case DataModelType.Satis: break;
                    case DataModelType.Muayene: break;
                    case DataModelType.Company:
                        company = new Company();
                        break;
                    default: break;
                }
                return company;
            }

            public _Muayene GenerateMuayene(DataModelType modeltype)
            {
                _Muayene muayene = null;
                switch (modeltype)
                {
                    case DataModelType.Radyoloji: break;
                    case DataModelType.Satis: break;
                    case DataModelType.Muayene:
                        muayene = new Muayene();
                        break;
                    case DataModelType.Company: break;
                    default: break;
                }
                return muayene;
            }

            public _Radyoloji GenerateRadyoloji(DataModelType modeltype)
            {
                _Radyoloji radyoloji = null;
                switch (modeltype)
                {
                    case DataModelType.Radyoloji:
                        radyoloji = new Radyoloji();
                        break;
                    case DataModelType.Satis: break;
                    case DataModelType.Muayene: break;
                    case DataModelType.Company: break;
                    default: break;
                }
                return radyoloji;
            }
        }

        public class CompanyView
        {
            public static List GetPersonel()
            {
                GenoTipController controller = new GenoTipController();
                _Company company = controller.GenerateCompany(DataModelType.Company);
                return company.GetPersonel();
            }
        }

        public enum DataModelType
        {
            Radyoloji,
            Satis,
            Muayene,
            Company
        }
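    One possible refactoring, sketched on the assumption that Company, Muayene and Radyoloji have parameterless constructors and derive from _Company, _Muayene and _Radyoloji respectively: register each enum value once, and let a single generic method replace the three near-identical switches.

        using System;
        using System.Collections.Generic;

        public class GenoTipController
        {
            // one registration per enum value instead of one switch per return type
            private static readonly Dictionary<DataModelType, Func<object>> factories =
                new Dictionary<DataModelType, Func<object>>
                {
                    { DataModelType.Company,   () => new Company()   },
                    { DataModelType.Muayene,   () => new Muayene()   },
                    { DataModelType.Radyoloji, () => new Radyoloji() },
                };

            // T is the abstract type the caller expects (_Company, _Muayene, _Radyoloji)
            public T Generate<T>(DataModelType modelType) where T : class
            {
                Func<object> factory;
                return factories.TryGetValue(modelType, out factory) ? factory() as T : null;
            }
        }

        // usage, matching the original CompanyView.GetPersonel():
        //   _Company company = controller.Generate<_Company>(DataModelType.Company);

    DataModelType.Satis simply has no registration, so Generate returns null for it, just as the original switches did.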

    Read the article

  • How to completely remove MySQL from Windows?

    - by Nathan Long
    (I realize there are similarly-titled questions, but this one is really 'how do I reset the password', and this one doesn't specify an OS and has only gotten Linux-oriented answers.) On Windows XP, I just uninstalled MySQL and deleted the folder that it was installed in. I then rebooted and reinstalled MySQL. When it comes back up, it still has the databases it had before the uninstall. Where did it keep that database info? How do I completely remove MySQL and start over, as if I'd never installed it?

    Read the article

  • I'll be setting up a dedicated web server at work soon, my first non hobby server - What should I know?

    - by Rogue Coder
    I've been running my own dedicated server with CentOS and a LAMP stack for 2-3 years now, but it has only been hosting my own websites, which aren't super important. However, I will soon be setting up a Linux web server and a Linux database server at work, and I'm wondering what are some important things I should be doing. It's an internal server only, so only people in the company can access it. Should I get a slave server for both of my servers for backups? If I do this, how many backups should I be keeping and how often should those backups be done? Right now on my current server I run a cron job nightly to back up my MySQL databases (usually 40MB files once compressed), and bi-weekly cron jobs to back up my web root. I just store these files on my local computer via FTP. Also, for an internal server like this, should I look at using lighttpd or nginx to increase performance, or will Apache be fine?

    Read the article

< Previous Page | 112 113 114 115 116 117 118 119 120 121 122 123  | Next Page >