Search Results

Search found 14784 results on 592 pages for 'mysql replication'.


  • MySql: Is it reasonable to use a 'view', or should I denormalize my DB?

    - by Budda
    There is a 'team_sector' table with the following fields: Id, team_id, sect_id, size, level. It contains a few records for each 'team' entity (referenced via the 'team_id' field); each record represents one sector of the team's stadium (8 sectors in total). Now I need to implement a few searches: by overall stadium size (SUM(size)) and by best quality (SUM(level)/COUNT(*)). I could create a query something like this: SELECT TS.team_id, SUM(TS.size) AS OverallSize, SUM(TS.level)/COUNT(TS.Id) AS QualityLevel FROM team_sector TS GROUP BY team_id ORDER BY OverallSize DESC / ORDER BY QualityLevel DESC. But my concern is that the calculation for each team will be done every time the query runs. It is not much overhead (at least for now), but I would like to avoid performance issues later. I see 2 options here. The 1st is to add 2 fields to the 'team' table (for example) and store the OverallSize and QualityLevel values there; if the data in the 'sector' table changes, those fields are updated too (probably via triggers, as the sector table doesn't change very often). The 2nd option is to create a view that provides the required data. The 2nd option seems much easier to me, but I don't have much experience/knowledge of working with views. Q1: Which option is best from your perspective, and why? Perhaps you could suggest other options? Q2: Can I create the view in such a way that it recalculates rarely (say, once per day)? If yes, how? Q3: Is it reasonable to use triggers for this purpose (1st option)? P.S. MySQL 5.1 is used; the overall number of teams is around 1-2 thousand, and the overall number of records in the sector table is 6-8 thousand. I understand those numbers are pretty small, but I would like to follow best practice here.
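
    A view in MySQL 5.1 is not materialized, so it would not by itself avoid the per-query aggregation; a minimal sketch using the table and column names from the question (the view and summary table names are assumptions):

        CREATE VIEW team_stadium_stats AS
        SELECT team_id,
               SUM(size) AS OverallSize,
               SUM(level) / COUNT(*) AS QualityLevel
        FROM team_sector
        GROUP BY team_id;

        -- Every SELECT against the view re-runs the aggregation, so a
        -- "recalculate once per day" view is not possible out of the box.
        -- A once-per-day refresh would instead use a summary table that a
        -- scheduled job (or MySQL 5.1's event scheduler) rebuilds:
        REPLACE INTO team_stadium_summary (team_id, OverallSize, QualityLevel)
        SELECT team_id, SUM(size), SUM(level) / COUNT(*)
        FROM team_sector
        GROUP BY team_id;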

    Read the article

  • How to set a delay between update queries in MySQL so that every query gets executed successfully?

    - by OM The Eternity
    Hi all, I have to run the update query multiple times for different parameters using a for loop in my PHP script. The problem is that whenever I do, only my last query gets executed; all the previous queries seem to be skipped. The loop goes like this: foreach($cntrlflagset as $cntrlordoc => $cntrlorflag){ for($t=0;$t<count($cntrlorflag);$t++){ $userlvl = "controller"; $docflag = 1; $postfix = "created"; $createdoc->updatedocseeflags($cntrlorflag[$t],$docflag,$cntrlordoc,$postfix,$userlvl); $docflag = 2; $postfix = "midlvl"; $createdoc->updatedocseeflags($cntrlorflag[$t],$docflag,$cntrlordoc,$postfix,$userlvl); } } The called function $createdoc->updatedocseeflags($cntrlorflag[$t],$docflag,$cntrlordoc,$postfix,$userlvl) contains the update query: $query = "UPDATE tbl_docuserstatus SET"; //if($flag != ""){ $query .= " docseeflag_".$postfix." = '".$flag."'"; //} $query .= " WHERE doc_id = '".$doc_id."' AND user_id = '".$user_id."'"; if($userlvl == "midlvl"){ $query .= " AND doc_midlvluser = '1'"; }elseif($userlvl == "finallvl"){ $query .= " AND doc_finallvluser = '1'"; }elseif($userlvl == "creator"){ $query .= " AND doc_creator = '1'"; }elseif($userlvl == "controller"){ $query .= " AND doc_controller = '1'"; }elseif($docarchive == 1){ $query .= " AND doc_controller = '1'"; } So could someone tell me how to set a delay between update queries in MySQL so that every query gets executed successfully? Thanks in advance
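
    For what it's worth, queries sent one after another on the same connection execute sequentially, so a delay is normally not what's missing; a hypothetical sketch of checking each UPDATE's outcome instead (assuming a mysqli connection and the table from the question):

        <?php
        // Hypothetical: run one of the updates and surface failures instead
        // of inserting delays. Column and variable names follow the question.
        $stmt = $mysqli->prepare(
            "UPDATE tbl_docuserstatus
                SET docseeflag_created = ?
              WHERE doc_id = ? AND user_id = ? AND doc_controller = '1'");
        $stmt->bind_param('sss', $flag, $doc_id, $user_id);
        if (!$stmt->execute()) {
            error_log('UPDATE failed: ' . $stmt->error);
        }
        // affected_rows of 0 means the WHERE clause matched nothing, which is
        // the usual reason an UPDATE "seems skipped".
        ?>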

    Read the article

  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have 2 web servers behind a load balancer that both read and write to a single master database, which replicates to a slave. Ideally, I'd like each of the web servers to point to its own master DB with the data between the 2 kept synchronised, but from what I've read, any kind of master-master or ring replication is discouraged. I'm looking for a general "what do other people do" kind of answer; the database vendor isn't a concern at the moment, but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew
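
    For comparison, the widely used alternative keeps a single writable master and scales reads with replicas; a sketch of attaching an additional MySQL replica (host, credentials and log coordinates are placeholders):

        -- Run on the new replica (MySQL 5.x syntax):
        CHANGE MASTER TO
            MASTER_HOST     = 'master.example.com',
            MASTER_USER     = 'repl',
            MASTER_PASSWORD = 'secret',
            MASTER_LOG_FILE = 'mysql-bin.000042',
            MASTER_LOG_POS  = 107;
        START SLAVE;
        -- Both web servers keep writing to the single master; reads are then
        -- spread across replicas by the application or a proxy.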

    Read the article

  • Best way to replicate / mirror 100s of databases in SQL 2005

    - by mrwayne
    Hi, I currently host around 400-500 SQL 2005 databases of varying sizes (1-10 GB each). I am aware of most of the methods available and the general pros/cons of mirroring, log shipping, replication and clustering, but not of how well they perform at the scale I've described (400-500 unique databases). Does anyone have good advice on the best method for being able to fail over to another server with this sort of setup? Failover does not need to be immediate; I'm just looking for something better than taking backups every day and moving them to storage, and preferably something that also makes it easy to manage the databases in bulk (as opposed to one at a time). Thanks for your input!
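
    As a sketch of the "manage in bulk" angle, any log-shipping-style scheme can be scripted across all databases rather than configured one at a time; a hypothetical T-SQL loop (the backup path is a placeholder):

        DECLARE @db sysname, @sql nvarchar(max);
        DECLARE dbs CURSOR FOR
            SELECT name FROM sys.databases
            WHERE database_id > 4 AND recovery_model_desc = 'FULL';
        OPEN dbs;
        FETCH NEXT FROM dbs INTO @db;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @sql = N'BACKUP LOG ' + QUOTENAME(@db) +
                       N' TO DISK = ''\\standby\logship\' + @db + N'.trn''';
            EXEC sp_executesql @sql;
            FETCH NEXT FROM dbs INTO @db;
        END
        CLOSE dbs; DEALLOCATE dbs;
        -- A matching restore job on the standby replays each .trn file.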

    Read the article

  • Postfix "warning: cannot get RSA private key from file"

    - by phew
    I just followed this tutorial to set up a Postfix mailserver with Dovecot and MySQL as the backend for virtual users. Now I have most parts working: I can connect via pop3, pop3s, imap and imaps. Using echo TEST-MAIL | mail [email protected] works fine; when I log into my hotmail account it shows the email. It also works in reverse, since my MX entry for mydomain.com has finally propagated, so I am able to receive emails sent from [email protected] to [email protected] and view them in Thunderbird using STARTTLS via IMAP. Doing a bit more research after I got the error message "5.7.1 : Relay access denied" when trying to send mails to [email protected] while logged into [email protected], I figured out that my server was acting as an "open mail relay", which, of course, is a bad thing. Digging more into the optional parts of the tutorial, like workaround.org/comment/2536 and workaround.org/ispmail/squeeze/postfix-smtp-auth, I completed those steps as well to be able to send mails via [email protected] through Mozilla Thunderbird, and I no longer get the "5.7.1 : Relay access denied" message (as common mailservers reject openly relayed emails). But now I've run into an error trying to get Postfix working with SMTPS; /var/log/mail.log reads: Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: cannot get RSA private key from file /etc/ssl/certs/postfix.pem: disabling TLS support Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: TLS library problem: 20251:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: ANY PRIVATE KEY: Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: TLS library problem: 20251:error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib:ssl_rsa.c:669: That error is logged right after I try to send a mail from my newly installed mailserver using SMTP SSL/TLS via port 465 in Thunderbird; Thunderbird then tells me a timeout occurred. Google has a few results concerning this problem, yet I couldn't get it working with any of them (I would link some here, but as a new user I am only allowed two hyperlinks). My /etc/postfix/master.cf looks like: smtp inet n - - - - smtpd smtps inet n - - - - smtpd -o smtpd_tls_wrappermode=yes -o smtpd_sasl_auth_enable=yes and nmap tells me: PORT STATE SERVICE [...] 465/tcp open smtps [...]
    My /etc/postfix/main.cf looks like: smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU) biff = no append_dot_mydomain = no readme_directory = no #smtpd_tls_cert_file = /etc/ssl/certs/postfix.pem #default postfix generated #smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key #default postfix generated smtpd_tls_cert_file = /etc/ssl/certs/postfix.pem smptd_tls_key_file = /etc/ssl/private/postfix.pem smtpd_use_tls = yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache smtpd_sasl_type = dovecot smtpd_sasl_path = private/auth smptd_sasl_auth_enable = yes smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination myhostname = mydomain.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = localhost.com, localhost relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf virtual_transport = dovecot dovecot_destination_recipient_limit = 1 mailbox_command = /usr/lib/dovecot/deliver The *.pem files were created as described in the tutorial above: "To create a certificate to be used by Postfix use: openssl req -new -x509 -days 3650 -nodes -out /etc/ssl/certs/postfix.pem -keyout /etc/ssl/private/postfix.pem Do not forget to set the permissions on the private key so that no unauthorized people can read it: chmod o= /etc/ssl/private/postfix.pem You will have to tell Postfix where to find your certificate and private key because by default it will look for a dummy certificate file called "ssl-cert-snakeoil": postconf -e smtpd_tls_cert_file=/etc/ssl/certs/postfix.pem postconf -e smtpd_tls_key_file=/etc/ssl/private/postfix.pem" I think I don't need to include /etc/dovecot/dovecot.conf here, as login via imaps and pop3s works fine according to the logs. The only problem is making Postfix properly use the self-generated, self-signed certificates. Any help appreciated! EDIT: I just tried a different tutorial on generating a self-signed certificate for Postfix, and I still get the same error. I really don't know what else to test. I also checked the SSL libraries, but all seems to be fine: root@domain:~# ldd /usr/sbin/postfix linux-vdso.so.1 => (0x00007fff91b25000) libpostfix-global.so.1 => /usr/lib/libpostfix-global.so.1 (0x00007f6f8313d000) libpostfix-util.so.1 => /usr/lib/libpostfix-util.so.1 (0x00007f6f82f07000) libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00007f6f82cb1000) libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007f6f82910000) libsasl2.so.2 => /usr/lib/libsasl2.so.2 (0x00007f6f826f7000) libdb-4.8.so => /usr/lib/libdb-4.8.so (0x00007f6f8237c000) libnsl.so.1 => /lib/libnsl.so.1 (0x00007f6f82164000) libresolv.so.2 => /lib/libresolv.so.2 (0x00007f6f81f4e000) libc.so.6 => /lib/libc.so.6 (0x00007f6f81beb000) libdl.so.2 => /lib/libdl.so.2 (0x00007f6f819e7000) libz.so.1 => /usr/lib/libz.so.1 (0x00007f6f817d0000) libpthread.so.0 => /lib/libpthread.so.0 (0x00007f6f815b3000) /lib64/ld-linux-x86-64.so.2 (0x00007f6f83581000) After following Ansgar Wiechers' instructions it's finally working; postconf -n contained the lines it should.
    The certificate/key check via openssl showed that both files are valid. So it indeed was a permissions problem! I didn't know that chown'ing the /etc/ssl/*/postfix.pem files to postfix:postfix is not enough for postfix to read the files.
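
    A quick way to verify both suspects from this thread, the private key's readability and whether the certificate and key actually match (paths as used above):

        ls -l /etc/ssl/private/postfix.pem   # postfix must be able to read this
        openssl rsa -in /etc/ssl/private/postfix.pem -check -noout
        # The two digests below must be identical, or cert and key don't match:
        openssl x509 -in /etc/ssl/certs/postfix.pem  -noout -modulus | openssl md5
        openssl rsa  -in /etc/ssl/private/postfix.pem -noout -modulus | openssl md5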

    Read the article

  • SQL Server 2008 Snapshot Agent problem

    - by Dmitri
    Hi! I've just installed MSSQL 2008 SP1 x64 on Windows 7 RTM and have a problem creating snapshots. Whenever I try to launch the Snapshot Agent (e.g. to set up a transactional replication publication), it throws an error that 'the file is missing'. I have looked into c:\program files\microsoft sql server\100\com and there are no executable files at all, such as snapshot.exe! As a desperate move I copied all the files over from my MSSQL 2005 COM folder (without replacing, of course), and now it doesn't give an error but says 'starting' all the time, and nothing happens. (I have since removed those files again.) I have all of the relevant features installed. So please help me figure out what to do now! Thanks! Dmitri.

    Read the article

  • Solaris to Linux conversion: Use VxFS or GFS?

    - by w00t
    We're a Solaris shop looking at RedHat Enterprise Linux, and one of the things we're wondering is whether we should keep Veritas Volume Manager + FileSystem or go with LVM+ext3 or RedHat's preferred cluster filesystem solution, GFS. One of the things we like about Veritas is that it can use Veritas Volume Replicator to keep a remote copy of important filesystems. This functionality seems to be missing from RedHat; DRBD doesn't seem to be packaged in RHEL... So my questions are: Does anybody use VxFS/VxVM/VVR on Linux? Thoughts, experiences? Comparisons with LVM+ext3? Anybody using GFS? Thoughts, experiences? Do you do remote replication for disaster recovery, and if so, how? Is there a standard RedHat way?

    Read the article

  • Replicating/synchronizing multiple tables across different databases on the same instance

    - by Idan
    I have a few tables that need to be replicated/synchronized across several databases in our SQL Server 2008 cluster. I know it's possible to replicate between multiple instances, but I'm looking for replication or synchronization between specific tables of databases on the same instance. The replication/synchronization should happen every half-hour or so, though I don't mind it happening constantly. I can't DROP the target table and INSERT (copy) the source table, since there are many constraints. The reason for this is to avoid managing it in the application layer and writing to 2 different databases at the same time. Example: DB1 has T1, T2 and T3; these are constantly updated by the application APP1 running on DB1. DB2 needs an up-to-date copy of T1 at all times; also, a different application, APP2, runs only on DB2. Both DB1 and DB2 are located on the same instance, INST1. Would it be possible to replicate T1, T2 and T3 from DB1 to DB2? Thanks, Idan.
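
    Within one instance this can also be done without the replication stack at all, since cross-database queries work with three-part names; a hypothetical SQL Server Agent job step (key and column names are placeholders):

        -- Runs every half hour via SQL Server Agent; honors constraints on
        -- the target because it updates in place instead of DROP + INSERT.
        MERGE DB2.dbo.T1 AS target
        USING DB1.dbo.T1 AS source
            ON target.Id = source.Id
        WHEN MATCHED THEN
            UPDATE SET target.Col1 = source.Col1, target.Col2 = source.Col2
        WHEN NOT MATCHED BY TARGET THEN
            INSERT (Id, Col1, Col2) VALUES (source.Id, source.Col1, source.Col2)
        WHEN NOT MATCHED BY SOURCE THEN
            DELETE;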

    Read the article

  • PostgreSQL 9: Does Vacuuming a table on the primary replicate on the mirror?

    - by Scott Herbert
    Running PostgreSQL 9.0.1, with streaming replication keeping one read-only mirror instance up to date. Auto-vacuum is enabled on the primary, except for a few tables which are excluded from the auto-vacuum daemon in an effort to reduce business-hour IO. These tables are "materialised views". Each night at midnight, we run a vacuum across the database to clean up the tables that are excluded from auto-vacuum. I'm wondering whether that process replicates across to the mirror, or whether I need to set up vacuuming on the mirror as well?
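
    For reference, a sketch of the nightly step (table names are invented); the cleanup a vacuum performs is written to the WAL that the standby replays, and a hot standby is read-only, so it could not run its own vacuum anyway:

        -- Nightly job on the primary, covering the tables excluded from
        -- auto-vacuum; its page-level changes reach the mirror via the WAL.
        VACUUM ANALYZE matview_orders;
        VACUUM ANALYZE matview_customers;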

    Read the article

  • Implications of renaming sql server 2000 instance

    - by peg_leg
    I have a SQL Server 2000 instance whose data I want to replicate to a SQL Server 2008 instance. However, the outputs of SELECT @@SERVERNAME and SELECT SERVERPROPERTY('servername') on the 2000 server differ, and that prevents replication. There is a process to resolve this at http://support.microsoft.com/kb/818334. Has anyone done this? What are the implications of following this process? Any 'gotchas' that need to be guarded against? My 2000 server is a lone production server... very scary. Please advise.
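
    For context, the KB article's process boils down to re-registering the local server name so the two values agree (names below are placeholders):

        EXEC sp_dropserver 'OldName';
        EXEC sp_addserver 'RealName', 'local';
        -- Restart the SQL Server service, then verify the two now match:
        SELECT @@SERVERNAME, SERVERPROPERTY('servername');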

    Read the article

  • How to scale out OpenStreetMap data efficiently

    - by Pierre
    For over a year now, I've been running an in-house PostGIS server filled with OSM data, used for both Mapnik-based tile generation and Nominatim-based geocoding, and updated with daily replication diffs. This works pretty well. However, as usage is growing exponentially, I would like to achieve better reliability and performance by adding additional PostgreSQL servers, and I'm kind of lost. Since PostgreSQL doesn't seem to handle replication by itself, I would think about using a piece of middleware like PgPool-II to keep the servers in sync. But I'm afraid that would be overkill for this usage: a very high read-to-write ratio, where all writes happen at the same exact time every day. My questions are simple: What would you do to keep these servers in sync? And what is done for this at the OpenStreetMap Foundation, MapQuest, Mapbox or CloudMade? Thanks.
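
    Worth noting: PostgreSQL 9.0 and later ship built-in streaming replication, which suits a read-mostly workload like this; a minimal sketch (hostnames and the replication user are placeholders):

        # postgresql.conf on the primary
        wal_level = hot_standby
        max_wal_senders = 3

        # recovery.conf on each read-only standby
        standby_mode = 'on'
        primary_conninfo = 'host=primary.example.org user=replicator'

        # postgresql.conf on each standby, to allow read queries
        hot_standby = on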

    Read the article

  • SQL Server 2008: Table Insert and Range Check?

    - by LB .
    I'm using the table value constructor to insert a bunch of rows at a time. However, since I'm using SQL replication, I run into a range-check constraint on the publisher on my automatically managed id column. The reason is that the identity range doesn't seem to be increased during an insert of several values, meaning that the max id is reached before the actual range expansion (or the id threshold) can kick in. It looks like this problem, for which the solution is either running the Merge Agent or running the sp_adjustpublisheridentityrange stored procedure. I'm literally doing something like: INSERT INTO dbo.MyProducts (Name, ListPrice) VALUES ('Helmet', 25.50), ('Wheel', 30.00), ((SELECT Name FROM Production.Product WHERE ProductID = 720), (SELECT ListPrice FROM Production.Product WHERE ProductID = 720)); GO What are my options (if I don't want to or can't adopt either of the proposed solutions)? Expand the range? Decrease the threshold? Can I programmatically modify my request to circumvent this problem? Thanks.
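
    A sketch of the workaround the question already names, requesting a fresh identity range before the bulk insert (table and owner are taken from the sample; adjust to the real publication):

        EXEC sys.sp_adjustpublisheridentityrange
            @table_name  = 'MyProducts',
            @table_owner = 'dbo';
        -- The range and threshold themselves are set when the article is
        -- created (sp_addarticle's @pub_identity_range, @identity_range
        -- and @threshold parameters), so enlarging them is also an option.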

    Read the article

  • SHOW SLAVE STATUS not working from console or client

    - by Mr. Leinad
    Hello, I have a somewhat strange case. Whenever one of my coworkers executes this line: SHOW SLAVE STATUS; from their MySQL client, it works smoothly. But if I do it, it says: ERROR 1227 (42000): Access denied; you need the SUPER,REPLICATION CLIENT privilege for this operation. We are all connecting to the same database, and if I check privileges I can see: GRANT ALL PRIVILEGES ON *.* TO 'usermysql'@'%' IDENTIFIED BY PASSWORD 'password'. There's something wrong with my computer... but I can't pinpoint what. Thanks. EDIT: It's kind of bizarre: it goes through a VPN remotely, but if I change the internet connection, it works. If the previous internet connection is restored, it doesn't. Could we classify this among the great mysteries of the world? Or does someone have an idea?
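
    Since MySQL matches grants by the connecting host, a VPN change can land you on a different (less privileged) account entry; a sketch for checking and fixing that (the account name is from the question):

        -- Which account did the server actually match this connection to?
        SELECT CURRENT_USER();
        SHOW GRANTS FOR CURRENT_USER();

        -- Grant just what SHOW SLAVE STATUS needs, for any source host:
        GRANT REPLICATION CLIENT ON *.* TO 'usermysql'@'%';
        FLUSH PRIVILEGES;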

    Read the article

  • Git multi-master, is it possible?

    - by Fran
    Hi, is it possible to set up dual-master Git repositories? I would like to set up two different servers which I could push and commit to, with changes on either of them propagated to the other. I've googled for it, but the most similar solution I've found is Gerrit2, and that does only one-way replication (master to slave). Does anybody know if this is even possible to do with Git? If so, could you please tell me which tools to use? Thanks in advance.
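
    One commonly suggested pattern, sketched under the assumption of plain bare repositories: since Git is distributed, each server can forward pushes to its peer from a post-receive hook (hostnames and paths are placeholders):

        #!/bin/sh
        # /srv/git/project.git/hooks/post-receive on server A:
        # mirror whatever was just pushed here over to server B.
        git push --mirror git@server-b.example.com:/srv/git/project.git
        # Server B gets the same hook pointing back at server A; a guard is
        # needed so a mirrored push doesn't trigger a push back (ping-pong).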

    Read the article

  • What would cause SQL 2008 Log Reader Agent to fail with "This process could not execute 'sp_replcmds' "?

    - by Rick
    I've seen this error message in other posts, but they didn't seem to help resolve our issue. We are trying this with two SQL Server 2008 servers. I backed up my database from the source server and then restored it on our destination server, and we set up basic transactional replication. The Snapshot Agent is working fine, but the Log Reader Agent fails with the error above. Is it most likely a login issue for this job, or a QueryTimeout?
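
    One frequently reported cause after a backup/restore is that sp_replcmds may only be executed by the owner of the published database, and the restored database's owner no longer lines up; a hedged first check ('sa' and the database name are placeholders):

        USE PublishedDb;              -- placeholder database name
        EXEC sp_changedbowner 'sa';   -- realign dbo with a valid login
        -- Then restart the Log Reader Agent and watch its history.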

    Read the article

  • Which SQL Server edition?

    - by StaringSkyward
    We need a new install of Windows Server and SQL Server to replicate a couple of databases to a geographically separate location from an existing application (over a site-to-site VPN). The source database is SQL Server 2005. However, this is a temporary solution, since the client is aiming to implement a different system entirely, so we are looking for the minimum specification of both Windows Server and SQL Server that will do this. We are finding the SQL Server features per edition and the licensing a little difficult to understand, hence the question. Am I correct in thinking that we can replicate data using transactional replication from SQL Server 2005 to 2008 Web edition, and that we can install SQL Server Web edition on Windows 2008 Web edition as well? Thanks.

    Read the article

  • What is and what is not replicated in a glassfish cluster with a mod_jk load balancer?

    - by Navigateur
    I have a Glassfish (3.1.2) cluster of 2 computers as nodes, with a mod_jk load balancer. Are servlet instance variables replicated perfectly? If not, how do I make sure they are? Are all actions, including method calls and disk writes, replicated perfectly? If not, how do I make sure they are? These may seem like stupid questions, but I'm not seeking "load balancing" so much as exact replication, to enable future upgrading without any service interruption. How do I achieve this if it is not already the case?

    Read the article

  • Scaling web application with SQL Server 2008 database

    - by John
    I have a database in which 90% of the tables are read-only and 10% hold writable data. We need to scale the ASP.NET application and add more users who will not be writing to the database. We are thinking of adding another server and routing the users who need read-only access to it. Is there a way to replicate just some tables to another database server? Since 90% of the data doesn't change, we don't want to set up full database replication. Please advise.
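
    Transactional replication publishes individual tables ("articles") rather than whole databases, which matches this case; a rough sketch (publication and table names are placeholders):

        EXEC sp_addpublication
            @publication = 'ReadOnlyTables',
            @repl_freq   = 'continuous';
        EXEC sp_addarticle
            @publication   = 'ReadOnlyTables',
            @article       = 'Products',
            @source_owner  = 'dbo',
            @source_object = 'Products';
        -- Repeat sp_addarticle per read-only table, then register the new
        -- server with sp_addsubscription.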

    Read the article

  • Creating an email notification system based on polling database rows

    - by Ashish Sharma
    I have to design an email notification system based on the following requirements: Email notifications are created by polling rows of a MySQL 5.5 table for a particular 'Completed' state. A notification should be sent out no more than 5 minutes from the time the row was created (at creation time the row might not yet be 'Completed'). If the 5 minutes expire before the row reaches 'Completed', a separate notification needs to be sent (basically telling the user that the original notification will be delayed), followed by the real notification as and when the row does reach 'Completed'. The rest of the system requirements are: adding relevant checks to monitor the whole system via an MBeans interface, and making the system scalable, so that if the rate of row creation increases, the notification system can ramp up. So I'd like suggestions along the following lines: What approach should I take from a programming/design-pattern point of view? Any third-party plugin/software that could be used? What should I watch out for regarding scalability and monitoring the health of the system? Java is the language of preference, but I am open to off-the-shelf components that can be interfaced with Java or provide standard ports for communication. I currently have an in-house system (written in Java) catering to these requirements, but it's now crumbling under increased load and I want to give the problem a fresh look. Thanks in advance, Ashish
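
    A sketch of the two polling queries the requirements imply; the table and column names are invented for illustration:

        -- Rows that reached 'Completed' and still need their notification:
        SELECT id, user_email
        FROM   jobs
        WHERE  state = 'Completed' AND notified = 0;

        -- Rows past the 5-minute deadline without completing, which need
        -- the separate "your notification is delayed" email:
        SELECT id, user_email
        FROM   jobs
        WHERE  state <> 'Completed'
          AND  delay_notified = 0
          AND  created_at < NOW() - INTERVAL 5 MINUTE;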

    Read the article

  • Apache VERY high page load time

    - by Aaron Waller
    My Drupal 6 site has been running smoothly for years but recently has experienced intermittent periods of extreme slowness (10-60 second page loads): several hours of slowness followed by hours of normal (4-6 second) page loads. The pages always load with no error; they just sometimes take forever. My setup: Windows Server 2003, Apache/2.2.15 (Win32), JRun/4.0, PHP 5, MySQL 5.1, Drupal 6, ColdFusion 9, VMware virtual environment, DMZ behind a corporate firewall. Traffic: 1-3 hits/sec avg. Troubleshooting so far: no applicable errors in the Apache error log; no errors in the Drupal event log; Drupal's devel module shows 242 queries in 366.23 milliseconds, page execution time 2069.62 ms (so it looks like queries and PHP scripts are not the problem); no unusually high CPU, memory, or disk IO; ColdFusion apps and other static pages outside of Drupal also load slowly; webpagetest.org shows a very high time-to-first-byte. The problem seems to be with Apache responding to requests, but previously I've only seen this behavior under 100% CPU load, and judging solely by resource monitoring, it looks as though very little is going on. Here is the kicker: roughly half of the site's access comes from our LAN, and if I disable the firewall rule and block access from outside our network, internal (LAN) access (1000+ devices) is speedy. But as soon as outside access is restored, the site is crippled. Apache config? Crawlers/bots? Attackers? I'm at the end of my rope; where should I be looking to determine where the problem lies?

    Read the article

  • Which tools do you use for development in your company? Please be exact [closed]

    - by predrag.music
    If you are a professional php/(my/postgre/?)sql/? developer working in a professional team... I would like to know which tools you use for development in your company. I do not care which tool is better or worse, just which tools you use, if it is not TOP SECRET :) For example, these are just some of the tools I/we use (starting with those used most, in general): Pen, paper; lots of coffee, cola... let me think... mmmm... yeah, more coffee :) All kinds of books (a lot of books). OS: Win / MacOS X. Server: hosted (CentOS) / at work Mac OS X. Dev server: XAMPP / MAMP / LAMP. Editor: Notepad++. IDE: Netbeans / Zend Studio / Eclipse. Version control system: Mercurial / SVN. FTP: FileZilla mostly / ... Passwords: KeePass. js/ajax: jQuery / pure js / jQuery UI. Framework: CI / Zend / pure php. Database: MySQL / other. ORM: framework DB layer (not an ORM, I know, but...) / Doctrine (2) / no ORM. Debugging: Xdebug (PHP) / Firebug (ajax/js/html/css/...) / framework profiler / ... Dreaming: about... Thinking: not about chaos in any direction... and a zillion other things I know but can't remember, gave up on, deleted, lost, said "never again" to, never had time for, have somewhere on my computer but can't find, or told myself I'd check later and never checked again for all sorts of "perfectly justified" reasons (time, memory, wife :), whatever...). What is the reason I'm asking this? :) Looking forward to seeing a lot of answers!

    Read the article

  • Adding a forum to an existing site

    - by Andrew Heath
    I've got a site with ~500 registered members, 300 of whom are what you'd call "active". Site data is kept in a MySQL database. I'd like to add a myBB forum to the site, but this question applies to any forum really. What I very much want to avoid is requiring my users to register both on the site and on the forum, because my userbase is not technically literate and this would confuse a lot of them. However, the forum software has its own registration, login, cookie, and password management system, which naturally differs from the site's mechanics. I envision the following possibilities: install myBB into the existing database and customize the login code to unify the two systems (this would probably mean changing the site's code to use the myBB system, as that would likely be less painful to refactor and wouldn't hurt future myBB upgradability); install myBB into a separate database and write a bridging script of some sort that auto-registers existing site users with the forum if they elect to participate, and also checks new forum registrations against the site's username list to prevent newcomers from taking existing names; or run them fully separately and force users to re-register (easiest for ME, but least desirable for them). I would like a suggested course of action from those who have trodden this path before... Thank you.
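
    A rough sketch of the bridging script from the second option, seeding the forum's user table from the site's; every table and column name here is invented, and the forum's real schema and password-hash format would have to be checked first:

        INSERT INTO forum_users (username, email, password_hash, regdate)
        SELECT s.username, s.email, s.password_hash, NOW()
        FROM site_users AS s
        LEFT JOIN forum_users AS f ON f.username = s.username
        WHERE f.username IS NULL;   -- skip names already taken on the forum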

    Read the article

  • Need database selection advise

    - by jacknad
    I know this is considered a bad question since there is no single correct answer, but I need to choose a database for embedded Linux (DaVinci 368 based) hardware, and I've never had to produce a design with a database before. Each record will probably contain fewer than 1000 images with associated alphanumeric data, and the mass storage will be some kind of flash drive. Only one user needs access to the data at a time. MySQL claims to be "The world's most popular open source database", but SQLite claims to be "the most widely deployed SQL database engine in the world". Perhaps there is another that is also the best in the world? Which is easiest to use for a database newbie? Should I just flip a coin? Does it really matter which one I pick? Do I even need a database package, or should I roll my own? I won't need bells and whistles like sorting, but I'll probably need to delete the oldest records to make room for new ones if the storage fills up.
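
    The one non-trivial requirement, deleting the oldest records when storage fills, is simple in either engine; a sketch in SQLite (table and column names invented):

        CREATE TABLE records (
            id         INTEGER PRIMARY KEY,
            created_at INTEGER NOT NULL,   -- unix timestamp
            image      BLOB,
            meta       TEXT
        );

        -- When the flash fills up, drop the 100 oldest records:
        DELETE FROM records
        WHERE id IN (SELECT id FROM records
                     ORDER BY created_at LIMIT 100);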

    Read the article

  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same way as tags/subjects about the articles, but as strings like "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to the new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions outside a database, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear; I'm not entirely sure what kind of answer I'm looking for.
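
    For the date strings specifically, the conversion can be done in one pass on the MySQL side; a hypothetical cleanup (table and column names invented):

        ALTER TABLE article_tags ADD COLUMN tag_date DATE NULL;

        UPDATE article_tags
        SET    tag_date = STR_TO_DATE(LEFT(tag, 8), '%Y%m%d')
        WHERE  tag REGEXP '^[0-9]{8}d$';   -- only the date-shaped tags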

    Read the article
