Search Results

Search found 81 results on 4 pages for 'dbd'.


  • How can I install Perl's DBI on Mac OS X so Apache can find it?

    - by Russell C.
    I'm trying to set up a Perl development environment on my Mac laptop and have been having a really hard time getting it working. I thought I had everything configured correctly, but when I try to run a sample script it reports errors with the DBI module and can't access the DB. Here is what is reported in the Apache error logs: [Fri Apr 30 23:11:33 2010] [error] [client 127.0.0.1] Can't locate DBI.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at main.pm line 5. I downloaded and installed both modules manually to work with MAMP using the following commands, as specified in this forum post:

    For DBI:
    1. cd /Library/Perl/DBI-1.611
    2. sudo perl Makefile.PL
    3. sudo make
    4. sudo make install

    For DBD:
    1. cd /Library/Perl/DBD-mysql-4.014
    2. sudo perl Makefile.PL --mysql_config=/Applications/MAMP/Library/bin/mysql_config
    3. sudo make
    4. sudo make install

    What I noticed while running the above commands is that the files seem to be getting installed in the '/opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/' directory, which doesn't seem to be one of the search directories that Apache mentions in the error at the beginning of this post. Here is what I'm seeing during the install: $ sudo make install Files found in blib/arch: installing files in blib/lib into architecture dependent library tree Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/DBI.bundle Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/dbipport.h Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/DBIXS.h Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/dbixs_rev.h Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/Driver.xst Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/Driver_xst.h Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBI.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/TASKS.pod Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBD/DBM.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBD/File.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBD/Gofer.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBI/Changes.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBI/DBD.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBI/Profile.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBI/ProxyServer.pm Installing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/DBI/PurePerl.pm Installing /opt/local/share/man/man3/DBD::DBM.3pm Installing /opt/local/share/man/man3/DBD::File.3pm Installing /opt/local/share/man/man3/DBD::Gofer.3pm Installing /opt/local/share/man/man3/DBI.3pm Installing /opt/local/share/man/man3/DBI::DBD.3pm Installing /opt/local/share/man/man3/DBI::Profile.3pm Installing /opt/local/share/man/man3/DBI::ProxyServer.3pm Installing /opt/local/share/man/man3/DBI::PurePerl.3pm Installing /opt/local/share/man/man3/TASKS.3pm Installing /opt/local/bin/dbiprof Installing /opt/local/bin/dbiproxy Writing /opt/local/lib/perl5/site_perl/5.8.9/darwin-2level/auto/DBI/.packlist Appending installation info to /opt/local/lib/perl5/5.8.9/darwin-2level/perllocal.pod My question is, what am I doing wrong, and how can I either 1) get Apache to look in the right directory where the DBD & DBI modules are installed, or 2) update the way I'm installing the modules so that they install into one of the search directories? I honestly don't know which option makes more sense and could use guidance on that as well. As you can probably tell, I'm pretty lost at the moment. Please help!!! Thanks in advance.
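
    A minimal sketch of option 1, assuming the modules really did land in the MacPorts tree shown in the install log above (the path is copied from that log and may differ on your machine): point the perl that Apache runs at that directory with use lib before DBI is loaded.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Add the directory `make install` actually used (see the log above)
        # to @INC before attempting to load DBI.
        use lib '/opt/local/lib/perl5/site_perl/5.8.9/darwin-2level';

        use DBI;
        print "DBI $DBI::VERSION loaded from $INC{'DBI.pm'}\n";

    Setting PERL5LIB to the same path in Apache's environment achieves the same thing without editing every script. That said, option 2 is usually the cleaner long-term fix: rerun Makefile.PL with the perl binary Apache actually uses (here, the system /usr/bin/perl), so the modules install into one of the @INC directories listed in the error message.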

    Read the article

  • Why do I get "Error 6060" when I try to use DBD::Advantage with a 64-bit perl on Linux?

    - by WarheadsSE
    I realize that I am attempting to go beyond the "supported" behavior of the manufacturer's released drivers for Perl; after all, they have only released it packaged with x86 .so's. However, since I cannot use their package with x64 Perl on a RHEL 5.4 x86_64 box, and don't want to maintain a separate install of x86 Perl just for this one package, I have made an attempt to get this puppy working thanks to the released 64-bit .so's that accompany other driver packages for Advantage. What I have done to this point:

    - downloaded the beta 10 DBI drivers, in 32-bit
    - downloaded the beta 10 PHP extension (it contains 32-bit and x86_64)
    - copied the required shared libraries into the ads-lib location (e.g. /usr/local/ads/lib64)
    - compiled the Perl DBI driver with the path to the lib64 .so's

    Good compilation, good install, good use. The problem is that I always get: failed: [iAnywhere Solutions][Advantage SQL][ASA] Error 6060: Advantage Database Server not available on specified server. axServerConnect (SQL-HY000)(DBD: db_login/SQLConnect err=-1) Does anyone have any ideas? EDIT: fixed package name in post title EDIT: Updated title. It appears that it's not just the x64 perl, but the RHEL 5.4 underneath that may be interfering. As commented below, I managed to shoe-horn an x86 perl onto the system and compile DBD::Advantage 9.99, later replacing that with 9.10, and none of these x86 builds would connect either. Neither library (9.99 or 9.10), in either bitness, will connect from this x86_64 server to the Windows server's UNC path. I have successfully mounted this share without problems, but still I cannot seem to connect to the 9.1. I have tried \\hostname\PATH, \\FQDN\PATH and \\IP\PATH, and all of these variations with the (default) port 6262 included. My Windows machine connects fine, with both 9.1 and 9.99, from Strawberry Perl.

    Read the article

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns: Reading ./localconfig... Checking for DBD-mysql (v4.00) ok: found v4.005 Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225. Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225. Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225. Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225. There was an error connecting to MySQL: Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225. MySQL Version: [root@bugzilla-core TMP]# mysql --version mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1 And mysql_config: [root@bugzilla-core TMP]# mysql_config Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS] Options: --cflags [-I/data01/mysql-5.0.60/include -g] --include [-I/data01/mysql-5.0.60/include] --libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc] --libs_r [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc] --socket [/tmp/mysql.sock] --port [0] --version [5.0.60sp1] --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc] Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped. I'm not sure where to go from here. Scouring the web hasn't returned anything fruitful. Any ideas?
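
    The "Undefined subroutine &DBD::mysql::db::_login" error typically means the compiled (XS) part of DBD::mysql didn't load cleanly — often because it was built against a different DBI or MySQL client library than the one found at runtime. A small sketch to test the DBI/DBD::mysql pairing outside Bugzilla (the connection details are placeholders):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use DBI;

        # Show exactly which DBI and DBD::mysql files get loaded.
        print "DBI $DBI::VERSION from $INC{'DBI.pm'}\n";
        require DBD::mysql;
        print "DBD::mysql $DBD::mysql::VERSION from $INC{'DBD/mysql.pm'}\n";

        # Placeholder credentials: point this at any test database.
        my $dbh = DBI->connect('dbi:mysql:database=test;host=localhost',
                               'user', 'password', { RaiseError => 1 });
        print "Connected OK\n";

    If this fails the same way, rebuilding DBD::mysql against the mysql_config shown above (perl Makefile.PL --mysql_config=/data01/mysql-5.0.60/bin/mysql_config) would be the usual next step.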

    Read the article

  • How do I insert null fields with Perl's DBD::Pg?

    - by User1
    I have a Perl script inserting data into Postgres according to a pipe-delimited text file. Sometimes, a field is null (as expected). However, Perl makes this field into an empty string and the Postgres insert statement fails. Here's a snippet of code:

    use DBI;

    #Connect to the database.
    $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'mydb', 'mydb',
                        { AutoCommit => 1, RaiseError => 1, PrintError => 1 });

    #Prepare an insert.
    $sth = $dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?");

    while (<>) {
        #Remove the whitespace
        chomp;

        #Parse the fields.
        @field = split(/\|/, $_);
        print "$_\n";

        #Do the insert.
        $sth->execute($field[0], $field[1]);
    }

    And if the input is:

    a|1
    b|
    c|3

    EDIT: Use this input instead.

    a|1|x
    b||x
    c|3|x

    It will fail at b|. DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: "" I just want it to insert a null on field1 instead. Any ideas? EDIT: I simplified the input at the last minute. The old input actually made it work for some reason. So now I changed the input to something that will make the program fail. Also note that field1 is a nullable integer datatype.
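
    A sketch of the usual fix (not the poster's final code): DBI sends undef as SQL NULL, so converting empty fields to undef before the execute makes DBD::Pg insert NULL. Note also that split drops trailing empty fields unless given a negative limit, which matters for lines like b|.

        #Parse the fields; the -1 limit preserves trailing empty fields.
        @field = split(/\|/, $_, -1);

        #Convert empty strings to undef so DBD::Pg inserts NULL.
        my @row = map { defined($_) && $_ ne '' ? $_ : undef } @field[0, 1];

        #Do the insert.
        $sth->execute(@row);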

    Read the article

  • How to generate the right password format for Apache2 authentication in use with DBD and MySQL 5.1?

    - by Walkman
    I want to authenticate users for a folder from a MySQL 5.1 database with AuthType Basic. The passwords are stored in plain text (they are not really passwords, so it doesn't matter). The password format for Apache, however, only allows SHA1 and MD5 on Linux systems, as described here. How could I generate the right format with an SQL query? It seems the Apache format is based on a binary digest with a length of 20 bytes, but the MySQL SHA1 function returns a 40-character hex string. My SQL query is something like this: SELECT CONCAT('{SHA}', BASE64_ENCODE(SHA1(access_key))) FROM user_access_keys INNER JOIN users ON user_access_keys.user_id = users.id WHERE name = %s where base64_encode is a stored function (MySQL 5.1 doesn't have TO_BASE64 yet). This query returns a 61-byte BLOB, which is not the same format that Apache uses. How could I generate the same format? You can suggest another method for this too. The point is that I want to authenticate users from a MySQL 5.1 database using plain text as the password.
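
    The 61-byte result is the giveaway: SHA1() returns 40 hex characters, so base64-encoding that text produces a longer string than Apache expects. The digest has to be converted back to its raw 20 bytes before encoding — in SQL terms, something like BASE64_ENCODE(UNHEX(SHA1(access_key))). A Perl sketch of the target format, useful for verifying what the query should produce (the sample password is arbitrary):

        use strict;
        use warnings;
        use Digest::SHA qw(sha1);           # sha1() returns the raw 20-byte digest
        use MIME::Base64 qw(encode_base64);

        # Produce the '{SHA}...' form that htpasswd -s generates.
        sub apache_sha1 {
            my ($password) = @_;
            return '{SHA}' . encode_base64(sha1($password), '');  # '' = no trailing newline
        }

        print apache_sha1('secret'), "\n";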

    Read the article

  • Mac OS X 10.6 Setup for Apache/MySQL/Perl

    - by Russell C.
    I just got a new Mac and have been trying to set up a local development environment for my Perl applications for a few days now with no luck. I'm getting nowhere fast, so I hope someone else who has done this successfully could help. I started by installing MAMP, which I thought would take care of everything for me, but unfortunately it doesn't take care of some important Perl modules. I used CPAN to install all our required modules, except that it seems DBD::mysql doesn't install correctly through CPAN. After reading a lot online, lots of people reported problems with this and recommended using MacPorts to install the module, which I have tried doing with no luck using the following command: sudo port install p5-dbd-mysql After what seems like a successful install of DBD::mysql, Apache continues to report the following error when trying to run any of our Perl scripts: [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /Library/Perl/Updates/5.10.0/darwin-thread-multi-2level /Library/Perl/Updates/5.10.0 /System/Library/Perl/5.10.0/darwin-thread-multi-2level /System/Library/Perl/5.10.0 /Library/Perl/5.10.0/darwin-thread-multi-2level /Library/Perl/5.10.0 /Network/Library/Perl/5.10.0/darwin-thread-multi-2level /Network/Library/Perl/5.10.0 /Network/Library/Perl /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level /System/Library/Perl/Extras/5.10.0 .) at (eval 1835) line 3. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Perhaps the DBD::mysql perl module hasn't been fully installed, [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] or perhaps the capitalisation of 'mysql' isn't right. [Fri Apr 30 18:51:07 2010] [error] [client 127.0.0.1] Available drivers: DBM, ExampleP, File, Gofer, Proxy, SQLite, Sponge. I'm not sure where to go from here, but my Mac isn't much of a development environment if Perl isn't able to talk to the database. I'd really appreciate any help and advice you might be able to provide in getting my system set up successfully. Thanks in advance!
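
    The error log points at the likely mismatch: Apache is running the system perl (the 5.10.0 paths in @INC), while MacPorts' p5-dbd-mysql installs into MacPorts' own perl tree under /opt/local. A quick diagnostic sketch — run it once with /usr/bin/perl and once with /opt/local/bin/perl to see which interpreter can actually see the module:

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Report the running interpreter and whether DBD::mysql is loadable.
        print "interpreter: $^X\n";
        if (eval { require DBD::mysql; 1 }) {
            print "DBD::mysql $DBD::mysql::VERSION at $INC{'DBD/mysql.pm'}\n";
        }
        else {
            print "DBD::mysql not found; \@INC is:\n";
            print "  $_\n" for @INC;
        }

    If the module only loads under the MacPorts perl, either install DBD::mysql with the system perl (so it lands in Apache's @INC) or add the MacPorts site_perl directory to Apache's PERL5LIB.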

    Read the article

  • Error while installing Xtrabackup

    - by Olin
    I'd like to install XtraBackup (rpm -i percona-xtrabackup-2.1.9-744.rhel6.x86_64.rpm). During the rpm install it told me that it was missing dependencies: error: Failed dependencies: perl(DBD::mysql) is needed by percona-xtrabackup-2.1.9-744.rhel6.x86_64 perl(Time::HiRes) is needed by percona-xtrabackup-2.1.9-744.rhel6.x86_64 Then I ran yum install perl-Time-HiRes and yum install perl-DBD-MySQL. The perl-Time-HiRes install was successful, but the perl-DBD-MySQL one was not: Error: file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-libs-5.1.73-3.el6_5.x86_64 conflicts with file from package MySQL-server-5.6.10-1.el6.x86_64. I also tried to install:

    yum install cpan
    cpan DBI
    cpan DBD::mysql

    But I still get the same error. So I hope someone can explain to me what the right fix is to get XtraBackup running on MySQL.

    Read the article

  • In DBD::CSV what does a /r in the f_ext attribute mean?

    - by sid_com
    Why does only the second example append the extension to the filename, and what is the "/r" in ".csv/r" for?

    #!/usr/bin/env perl
    use warnings;
    use strict;
    use 5.012;
    use DBI;

    my $dbh = DBI->connect( "DBI:CSV:f_dir=/home/mm", { RaiseError => 1, f_ext => ".csv/r"} );
    my $table = 'new_1';
    $dbh->do( "DROP TABLE IF EXISTS $table" );
    $dbh->do( "CREATE TABLE $table ( id INT, name CHAR, city CHAR )" );
    my $sth_new = $dbh->prepare( "INSERT INTO $table( id, name, city ) VALUES ( ?, ?, ? )" );
    $sth_new->execute( 1, 'Smith', 'Greenville' );
    $dbh->disconnect();

    # --------------------------------------------------------

    $dbh = DBI->connect( "DBI:CSV:f_dir=/home/mm", { RaiseError => 1 } );
    $dbh->{f_ext} = ".csv/r";
    $table = 'new_2';
    $dbh->do( "DROP TABLE IF EXISTS $table" );
    $dbh->do( "CREATE TABLE $table ( id INT, name CHAR, city CHAR )" );
    $sth_new = $dbh->prepare( "INSERT INTO $table( id, name, city ) VALUES ( ?, ?, ? )" );
    $sth_new->execute( 1, 'Smith', 'Greenville' );
    $dbh->disconnect();
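
    A note on what is probably happening, based on DBI's documented connect signature of (DSN, user, password, attribute hash): in the first example the attribute hash sits in the username slot, so f_ext never reaches the driver, while the second example sets f_ext directly on the handle, where it does take effect. And as the DBD::File documentation describes f_ext, the "/r" flag marks the extension as required, so only files actually ending in .csv are treated as tables. A sketch with the attributes in the argument DBI expects:

        use DBI;

        # Attributes belong in the fourth argument of connect();
        # ".csv/r" = append .csv to table names, and require that extension.
        my $dbh = DBI->connect(
            'DBI:CSV:f_dir=/home/mm',
            undef, undef,
            { RaiseError => 1, f_ext => '.csv/r' },
        );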

    Read the article

  • Perl module error on solaris-10

    - by ramesh.mimit
    I have installed Perl and the pm_dbdmysql Perl module on Solaris 10. I have a Perl script which makes a MySQL DB connection to a different server, runs some queries and returns the results. It's working fine on Linux (Red Hat), but when I run the script on Solaris 10 it gives me the error below: 2010-12-14 00:00:00 and 2010-12-14 23:59:59DAILY INSIDE : 2010-12-14 00:00:00 -- 2010-12-14 23:59:59 install_driver(mysql) failed: Can't locate DBD/mysql.pm in @INC (@INC contains: /usr/local/lib/perl5/5.10.1/i86pc-solaris /usr/local/lib/perl5/5.10.1 /usr/local/lib/perl5/site_perl/5.10.1/i86pc-solaris /usr/local/lib/perl5/site_perl/5.10.1 .) at (eval 15) line 3. Perhaps the DBD::mysql perl module hasn't been fully installed, or perhaps the capitalisation of 'mysql' isn't right. Available drivers: DBM, ExampleP, File, Gofer, Multiplex, Proxy, Sponge, Sybase. at cerberus_report.pl line 114 The dbd-mysql Perl module is already installed, though: PKGINST: CSWpmdbdmysql NAME: pm_dbdmysql - MySQL driver for the Perl5 Database Interface (DBI) Is it something related to the path variables, or do I need some other Perl module dependency?

    Read the article

  • BugZilla XML RPC Interface

    - by Damo
    I am attempting to set up BugZilla to receive bug reports from another system using the XML-RPC interface. BugZilla works fine on its own with its own interface. When I attempt to test the XML-RPC functionality by accessing "xmlrpc.cgi" in my browser, I get the error: The XML-RPC Interface feature is not available in this Bugzilla at C:\BugZilla\xmlrpc.cgi line 27 main::BEGIN(...) called at C:\BugZilla\xmlrpc.cgi line 29 eval {...} called at C:\BugZilla\xmlrpc.cgi line 29 Following this, I installed the Test-Taint package from the default Perl repository; this installs version 1.04. Re-running "xmlrpc.cgi" gives me an IIS error: 502 - Web server received an invalid response while acting as a gateway or proxy server. There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server. So I ran checksetup.pl, which informs me that: Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. Installing Test-Taint from CPAN is the same. I assume XML-RPC relies on Test-Taint, but Test-Taint doesn't seem to run correctly. If I ignore this error and attempt to invoke "bz_webservice_demo.pl" to add an entry, the script times out. How can I get the XML-RPC / Test-Taint function working? Current Setup: IIS7.5 on Windows Server 2008 Bugzilla 4.2.2 Perl 5.14.2 C:\BugZilla>perl checksetup.pl Set up gcc environment - 3.4.5 (mingw-vista special r3) * This is Bugzilla 4.2.2 on perl 5.14.2 * Running on Win2008 Build 6002 (Service Pack 2) Checking perl modules... Checking for CGI.pm (v3.51) ok: found v3.59 Checking for Digest-SHA (any) ok: found v5.62 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.76 Checking for DateTime-TimeZone (v0.79) ok: found v1.48 Checking for DBI (v1.614) ok: found v1.622 Checking for Template-Toolkit (v2.22) ok: found v2.24 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.904) ok: found v1.911 Checking for URI (v1.37) ok: found v1.59 Checking for List-MoreUtils (v0.22) ok: found v0.33 Checking for Math-Random-ISAAC (v1.0.1) ok: found v1.004 Checking for Win32 (v0.35) ok: found v0.44 Checking for Win32-API (v0.55) ok: found v0.64 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.18.1 Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for DBD-SQLite (v1.29) ok: found v1.33 Checking for DBD-Oracle (v1.19) ok: found v1.30 The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.46 Checking for Chart (v2.1) ok: found v2.4.5 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for MIME-tools (v5.406) ok: found v5.503 Checking for libwww-perl (any) ok: found v6.02 Checking for XML-Twig (any) ok: found v3.41 Checking for PatchReader (v0.9.6) ok: found v0.9.6 Checking for perl-ldap (any) ok: found v0.44 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.20 Checking for SOAP-Lite (v0.712) ok: found v0.715 Checking for JSON-RPC (any) ok: found v0.96 Checking for JSON-XS (v2.0) ok: found v2.32 Use of uninitialized value in open at C:/Perl/site/lib/Test/Taint.pm line 334, <DATA> line 558. 
Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.67) ok: found v3.68 Checking for HTML-Scrubber (any) ok: found v0.09 Checking for Encode (v2.21) ok: found v2.44 Checking for Encode-Detect (any) not found Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found Checking for Apache-SizeLimit (v0.96) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * Encode-Detect * Automatic charset detection for text attachments * * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * * Apache-SizeLimit * mod_perl * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: Encode-Detect: ppm install Encode-Detect TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Apache-SizeLimit: ppm install Apache-SizeLimit Reading ./localconfig... OPTIONAL NOTE: If you want to be able to use the 'difference between two patches' feature of Bugzilla (which requires the PatchReader Perl module as well), you should install patchutils from: http://cyberelk.net/tim/patchutils/ Checking for DBD-mysql (v4.001) ok: found v4.021 Checking for MySQL (v5.0.15) ok: found v5.5.27 WARNING: You need to set the max_allowed_packet parameter in your MySQL configuration to at least 3276750. Currently it is set to 3275776. You can set this parameter in the [mysqld] section of your MySQL configuration file. Removing existing compiled templates... Precompiling templates...done. checksetup.pl complete.
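
    One way to probe the endpoint outside the browser is with XMLRPC::Lite, which ships with the SOAP-Lite distribution that checksetup.pl already reports as installed. A minimal sketch (the URL is an assumption for this setup); Bugzilla.version is the simplest WebService method and takes no arguments:

        use strict;
        use warnings;
        use XMLRPC::Lite;

        my $rpc  = XMLRPC::Lite->proxy('http://localhost/bugzilla/xmlrpc.cgi');
        my $call = $rpc->call('Bugzilla.version');

        # A fault here means the CGI ran but rejected the call;
        # a transport error or a hang points at IIS/CGI wiring instead.
        die 'Fault: ' . $call->faultstring if $call->fault;
        print 'Bugzilla version: ', $call->result->{version}, "\n";

    If this also times out, the problem is more likely between IIS and the CGI than in Test::Taint itself.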

    Read the article

  • Bugzilla : No SASL mechanism found

    - by niteshsinha
    I am using Bugzilla on Windows 7. I am using the unofficial Bugzilla installer. I followed the steps accordingly and gave valid credentials wherever required. I open Bugzilla and try to create a new account, but I get the following error. Software error: No SASL mechanism found at C:/Program Files/Bugzilla/perl/perl/site/lib/Authen/SASL.pm line 77 at C:/Program Files/Bugzilla/perl/perl/lib/Net/SMTP.pm line 143 I ran checksetup.pl and found that Authen::SASL and SMTP are both available on my machine. The output of checksetup.pl is as follows. * This is Bugzilla 3.6.3 on perl 5.10.1 * Running on Win7 Build 7600 Checking perl modules... Checking for CGI.pm (v3.33) ok: found v3.49 Checking for Digest-SHA (any) ok: found v5.48 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.53 Checking for DateTime-TimeZone (v0.79) ok: found v1.10 Checking for DBI (v1.41) ok: found v1.609 Checking for Template-Toolkit (v2.22) ok: found v2.22 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.861) ok: found v1.903 Checking for Email-MIME-Encodings (v1.313) ok: found v1.313 Checking for Email-MIME-Modifier (v1.442) ok: found v1.903 Checking for URI (any) ok: found v1.52 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.16.1 Checking for DBD-mysql (v4.00) ok: found v4.012 Checking for DBD-Oracle (v1.19) not found The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.44 Checking for Chart (v2.1) ok: found v2.4.1 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for XML-Twig (any) ok: found v3.34 Checking for MIME-tools (v5.406) ok: found v5.427 Checking for libwww-perl (any) ok: found v5.834 Checking for PatchReader (v0.9.4) ok: found v0.9.5 Checking for perl-ldap (any) ok: found v0.39 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.17 Checking for SOAP-Lite (v0.710.06) ok: found v0.710.10 Checking for JSON-RPC (any) ok: found v0.95 Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.40) ok: found v3.64 Checking for HTML-Scrubber (any) ok: found v0.08 Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. 
    * *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * *********************************************************************** * Note For Windows Users * *********************************************************************** * In order to install the modules listed below, you first have to run * * the following command as an Administrator: * * * * ppm repo add theory58S http://cpan.uwinnipeg.ca/PPMPackages/10xx/ * * * Then you have to do (also as an Administrator): * * * * ppm repo up theory58S * * * * Do that last command over and over until you see "theory58S" at the * * top of the displayed list. * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Reading ./localconfig... Checking for DBD-mysql (v4.00) ok: found v4.012 Checking for MySQL (v4.1.2) ok: found v5.1.44-community-log Removing existing compiled templates... Precompiling templates...done. Now that you have installed Bugzilla, you should visit the 'Parameters' page (linked in the footer of the Administrator account) to ensure it is set up as you wish - this includes setting the 'urlbase' option to the correct URL. Press any key to continue . . . Please tell me what I should do. Please note: I am running behind a corporate proxy; SSL/TLS is not used internally, but I am supplying the smtpUser and smtpPass as well.
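
    "No SASL mechanism found" generally means Authen::SASL could not find a client module for any AUTH mechanism the SMTP server advertised (or the server advertised none at all, which a proxy in between can cause). A quick way to test SMTP authentication outside Bugzilla — a sketch, with the host and credentials as placeholders; Debug => 1 prints the server dialogue, including the AUTH line:

        use strict;
        use warnings;
        use Net::SMTP;

        # Placeholders: substitute your real mail relay and credentials.
        my $smtp = Net::SMTP->new('mail.example.com', Timeout => 30, Debug => 1)
            or die "Cannot connect to SMTP server: $!";
        $smtp->auth('smtpUser', 'smtpPass')
            or die 'AUTH failed: ', $smtp->message;
        print "Authenticated OK\n";
        $smtp->quit;

    If no AUTH line appears in the dialogue, the server (or the proxy) is not offering authentication, and there is no mechanism for Authen::SASL to match.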

    Read the article

  • Run sql script from Ruby

    - by Chaker Nakhli
    Hi all, Using DBI::DatabaseHandle#execute or DBI::DatabaseHandle#prepare it's not possible to run an SQL script (with multiple SQL statements). It fails with the following error: ERROR: cannot insert multiple commands into a prepared statement I tried to use the "unprepared" way using DBI::DatabaseHandle#do (the doc says it "goes straight to the DBD's implementation"), but it keeps throwing the same error. Code snippet:

    require 'dbd/pg'
    require 'dbi'

    DBI.connect("dbi:pg:database=dbname", db_user, db_password, db_params) do |dbh|
      schema = IO::read(schema_file)
      dbh.do(schema)
    end

    I'm using ruby 1.8.6 (2007-09-24 patchlevel 111) [i386-mswin32], dbi-0.4.3, dbd-pg-0.3.9, pg-0.9.0-x86-mswin32. Thank you!

    Read the article

  • Perl TDS character sets

    - by skiphoppy
    I'm using the FreeTDS driver with DBD::Sybase, connecting to an MS SQL Server. When I query certain values of certain records, I get this error: DBD::Sybase::st fetchrow_arrayref failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (9) NUMBER = (99) Server , database Message String: WARNING! Some character(s) could not be converted into client's character set. Unconverted bytes were changed to question marks ('?'). This seems to happen for records that contain special Windows character-set characters, such as curly quotes, copied and pasted from people's Outlook and Word messages. Unfortunately, I do not have any control of this database; sanitizing the input on the way in is obviously the way to go, but is not available to me. What FreeTDS settings do I need to change to be able to successfully query these records? Additional information: The query works fine from tsql. I only get this error through Perl's DBD::Sybase interface. (Should I test through something else? I don't have the expertise yet to install PHP or Python. I've got jTDS and can use it, but I think that's a completely different implementation, not an interface to FreeTDS.) Adding client charset = UTF-8 to my freetds.conf file results in "Out of memory!" printed to STDERR.
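
    One thing that may be worth trying from the Perl side (a sketch; the server name and credentials are placeholders): DBD::Sybase documents a charset attribute in the DSN, which, when going through FreeTDS, influences the client-side character set for that connection without editing freetds.conf.

        use strict;
        use warnings;
        use DBI;

        # 'charset' is DBD::Sybase's documented DSN attribute; whether a given
        # value works depends on the iconv support in your FreeTDS build.
        my $dbh = DBI->connect(
            'dbi:Sybase:server=MSSQL_SERVER;charset=utf8',
            'username', 'password',
            { RaiseError => 1 },
        );

    The "Out of memory!" from the freetds.conf change suggests the installed FreeTDS build's iconv handling may be the underlying problem, so upgrading FreeTDS could be necessary regardless.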

    Read the article

  • Apache and mod_authn_dbd DBDPrepareSQL error

    - by David Brown
    I am running Apache 2.2.17 on Suse Linux 11. I have installed the mod_authn_dbd module to allow authentication using a MySQL database. Simple digest authentication works absolutely fine using the following directive: AuthDBDUserRealmQuery "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s" However, I would like to both generalize this statement and prepare it for performance and security reasons. Thus, I used the following directives instead: DBDPrepareSQL "SELECT loginPassword FROM login WHERE loginUser = %s and loginRealm = %s" digestLogin AuthDBDUserRealmQuery digestLogin These directives generate the following errors: [...] [error] (20014)Internal error: DBD: failed to prepare SQL statements: [...] [error] (20014)Internal error: DBD: failed to initialise Why does my actual SQL statement work when used directly, but not when I try to prepare it? (Note I have tried using '?' and '%%' in place of '%', to no effect.)

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs are working on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual, through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?" This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "Looking for a new role" pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between of course. Because of the vast number of customers using so many different approaches we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store, looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at trying to classify our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician, George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful" We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting. There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are not nearly as far down the road of mature database change management as they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology but they'll certainly be making sure their code gets managed in a repo somewhere with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations. 
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But, what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases. Sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control such as manually adding upgrade scripts to a version control system
    - Attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing in process

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - Production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - Production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    If you've been fortunate enough to get to the stage where you've implemented some sort of continuous integration process for your database updates, then hopefully you're seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it's going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you've managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn't going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There's a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there's a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: "Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users" There's another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you're convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or "release management") process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can't I just install one of the many release management tools available and hey presto, I'm ready! If only it were that simple. Below I list some of the areas that it's worth spending a little time on, where a little planning and prep could go a long way. 
    It's also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you've got a CI mechanism in place, you're certainly a long way down that path. Nevertheless, we'd recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we'll look at the following areas for your checklist:

    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team It's a cliché in the DevOps community that "It's not all about processes and tools, really it's all about a culture". As stated in this DevOps report from Puppet Labs: "DevOps processes and tooling contribute to high performance, but these practices alone aren't enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn't understood outside of a specific group". Like most clichés, there's truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it's an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:

    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)

    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you're ever struggling to convince someone of the value I'd strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I've spoken to many customers who have implemented database CI who describe their deployment process as "The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that's finished we revert to manual." This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA "We're changing everything you do and your toolset next week, to automate most of your role – that's okay isn't it?" isn't likely to go down well. 
    There's some work here to bring him/her onside – to explain what you're doing, why there will still be control of the deployment process and so on. Or of course, if you're the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you'd like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager's manager too. As mentioned, unless there's buy-in "from the top", you're going to hit problems when the implementation starts to get rocky (and what tool/process implementations don't get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress.

    Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you're planning to do or, if you're the DBA yourself, get the dev team up-to-speed with your plans.
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has "Production", but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I've seen every setup under the sun, and there is often a big difference between "What we want, to do continuous delivery properly" and "What we're currently stuck with". Some of these differences are:

    - What we want: Each developer with their own dedicated database environment. What we've got: A single shared "development" environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we've got: In fact if you have a CI process running, you're likely to have some sort of integration server running (even if you don't call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: Separate QA environment used explicitly for manual testing prior to release. What we've got: "We just test on the dev environments, or maybe pre-production."
    - What we want: A proper pre-production (or "staging") box that matches production as closely as possible. What we've got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we've got: A production box which has drifted significantly from anything in source control.

    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you're going to create and where they'll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you're working on a new, greenfield project, or trying to update an existing, brownfield application. 
    There's a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:

    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production. The environment you use to test the production release process,
    - Production.

    * A note on the use of the word "automatic" – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it's not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user.

    Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it's not a "free-for-all" with all environments to be accessed, played with and changed by development).

    The Deployment Process As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers "How do your database changes get live? How does your manual process work?"

    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring into pre-prod,
    2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. User (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all working, run the script on production.*

    * this assumes there's no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you're interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can't automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend. 
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don't match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks "Okay, I'm happy for that to go live", the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we'd strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?" NB: Most of what we're discussing here is about production deployments. It's important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

    Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – "What do we do now?", "What do we actually want?", "What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?"
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery If only every deployment went according to plan! Unfortunately they don't – and when things go wrong, you need a rollback or recovery plan for what you're going to do in that situation. Once you move into a more automated database deployment process, you're far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we'll explore in subsequent articles, things like:

    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a "roll-forward" script – there's not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.

    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism.

    Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application. 
There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter, it’s much easier to instigate best practice from the start.

Actions: For your business, work out how far down the path you want to go in amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group – no one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly “getting the team ready for the changes that are coming” and “planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from and where you want to get to.

This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • Dependency issue installing PostGIS on CentOS 6.3

    - by Nyxynyx
    I am new to Linux and am trying to install PostGIS 2 after successfully installing PostgreSQL 9.1. The machine is running CentOS 6.3 and has cPanel installed.

    Problem: When I try installing PostGIS using yum (yum install postgis2_91 postgis2_91-utils), I get the dependency errors below. How should I solve this dependency problem and install PostGIS? Thank you so much!

        --> Finished Dependency Resolution
        Error: Package: postgis2_91-utils-2.0.1-1.rhel6.i686 (pgdg91) Requires: perl-DBD-Pg
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libdapserver.so.7
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libdap.so.11
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libgeotiff.so.1.2
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libnetcdf.so.6
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libdapclient.so.3
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libhdf5.so.6
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: librx.so.0
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libogdi.so.3
        Error: Package: gdal-1.7.2-1.el6.i686 (pgdg91) Requires: libcfitsio.so.0
        You could try using --skip-broken to work around the problem
        ** Found 6 pre-existing rpmdb problem(s), 'yum check' output follows:
        bandmin-1.6.1-5.noarch has missing requires of perl(bandmin.conf)
        bandmin-1.6.1-5.noarch has missing requires of perl(bmversion.pl)
        bandmin-1.6.1-5.noarch has missing requires of perl(services.conf)
        exim-4.77-1.i386 has missing requires of perl(SafeFile)
        frontpage-2002-SR1.2.i386 has missing requires of libexpat.so.0
        sendmail-cf-8.14.4-8.el6.noarch has missing requires of sendmail = ('0', '8.14.4', '8.el6')

    Update: An error still remains:

        Error: Package: postgis2_91-utils-2.0.1-1.rhel6.i686 (pgdg91) Requires: perl-DBD-Pg
        You could try using --skip-broken to work around the problem
        ** Found 6 pre-existing rpmdb problem(s), 'yum check' output follows:
        [the same six pre-existing rpmdb problems as above]

    Read the article

  • Working around MySQL error "Deadlock found when trying to get lock; try restarting transaction"

    - by Anon Guy
    Hi all: I have a MySQL table with about 5,000,000 rows that are being constantly updated in small ways by parallel Perl processes connecting via DBI. The table has about 10 columns and several indexes. One fairly common operation sometimes gives rise to the following error:

        DBD::mysql::st execute failed: Deadlock found when trying to get lock; try restarting transaction at Db.pm line 276.

    The SQL statement that triggers the error is something like this:

        UPDATE file_table SET a_lock = 'process-1234' WHERE param1 = 'X' AND param2 = 'Y' AND param3 = 'Z' LIMIT 47

    The error is triggered only sometimes – I'd estimate in 1% of calls or less. However, it never happened with a small table, and it has become more common as the database has grown. Note that I am using the a_lock field in file_table to ensure that the four near-identical processes I am running do not try to work on the same row. The LIMIT is designed to break their work into small chunks.

    I haven't done much tuning on MySQL or DBD::mysql. MySQL is a standard Solaris deployment, and the database connection is set up as follows:

        my $dsn = "DBI:mysql:database=" . $DbConfig::database . ";host=${DbConfig::hostname};port=${DbConfig::port}";
        my $dbh = DBI->connect($dsn, $DbConfig::username, $DbConfig::password, { RaiseError => 1, AutoCommit => 1 }) or die $DBI::errstr;

    I have seen online that several other people have reported similar errors and that this may be a genuine deadlock situation. I have two questions: What exactly about my situation is causing the error above? Is there a simple way to work around it or lessen its frequency? For example, how exactly do I go about "restarting transaction at Db.pm line 276"? Thanks in advance.
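    On the second of those questions: with AutoCommit => 1, each statement is its own transaction, so "restarting the transaction" simply means re-issuing the failed statement. A minimal retry sketch (the wrapper name is made up, 1213 is MySQL's ER_LOCK_DEADLOCK error code, and $dbh is assumed to be connected as in the question):

        use strict;
        use warnings;
        use DBI;

        # Retry a statement a few times if, and only if, it failed with a deadlock.
        sub do_with_deadlock_retry {
            my ($dbh, $sql, @bind) = @_;
            my $max_tries = 5;
            for my $try (1 .. $max_tries) {
                my $rows = eval { $dbh->do($sql, undef, @bind) };
                return $rows if defined $rows;              # success
                die $@ unless ($dbh->err || 0) == 1213      # not a deadlock
                           && $try < $max_tries;            # or out of tries
                select(undef, undef, undef, 0.25 * $try);   # brief backoff
            }
        }

        # $dbh: connected as in the question (RaiseError => 1, AutoCommit => 1).
        do_with_deadlock_retry($dbh,
            q{UPDATE file_table SET a_lock = ?
              WHERE param1 = ? AND param2 = ? AND param3 = ? LIMIT 47},
            'process-1234', 'X', 'Y', 'Z');

    Smaller chunks (a lower LIMIT) and a WHERE clause served by a single index also tend to shrink the window in which two processes hold overlapping locks, which lessens the frequency of deadlocks in the first place.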

    Read the article

  • How to connect from Ruby to MS SQL Server

    - by apetrov
    Hi Crowd! I'm trying to connect to a SQL Server 2005 database from a *NIX machine. I have the following configuration:

        Linux 64-bit
        ruby -v: ruby 1.8.6 (2007-09-24 patchlevel 111) [x86_64-linux]
        important gems: dbd-odbc (0.2.4), dbi (0.4.1), ActiveRecord SQL Server adapter (as a plugin), ruby-odbc 0.9996 (installed without any options)
        unixODBC is installed
        FreeTDS is installed

    cat /etc/odbcinst.ini:

        [FreeTDS]
        Description = TDS driver (Sybase/MS SQL)
        Driver = /usr/lib/libtdsodbc.so
        Setup = /usr/lib/odbc/libtdsS.so
        CPTimeout =
        CPReuse =
        FileUsage = 1

    DSN:

        DRIVER=FreeTDS;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;"
    or
        DRIVER=/usr/lib/libtdsodbc.so;TDS_Version=8.0;SERVER=XXXX;DATABASE=XXX;Port=1433;uid=XXX;pwd=XXXX;"

    I receive the following error:

        >> ActiveRecord::Base.sqlserver_connection({"mode"=>"ODBC", "adapter"=>"sqlserver", "dsn"=>my_dns)
        DBI::DatabaseError: IM002 (0) [unixODBC][Driver Manager]Data source name not found, and no default driver specified
        from /usr/lib/ruby/1.8/DBD/ODBC/ODBC.rb:95:in `connect'
        from /usr/lib/ruby/1.8/dbi.rb:424:in `connect'
        from /usr/lib/ruby/1.8/dbi.rb:215:in `connect'
        from /opt/ublip/rails/current/vendor/plugins/activerecord-sqlserver-adapter/lib/active_record/connection_adapters/sqlserver_adapter.rb:47:in `sqlserver_connection'

    It looks like ODBC is unable to find the appropriate ODBC driver, but I have no idea why. I had a problem with /usr/lib/libtdsodbc.so, which is empty in the default Debian free-tds dev package, but I solved that by removing the broken package and installing from source. Will appreciate any thoughts! Thanks & regards.

    Note: I'm able to connect using the same steps on Mac OS X 10.5.

    Read the article

  • In mysql, is "explain ..." always safe?

    - by tye
    If I allow a group of users to submit "explain $whatever" to MySQL (via Perl's DBI using DBD::mysql), is there anything that a user could put into $whatever that would make any database changes, leak non-trivial information, or even cause significant database load? If so, how?

    I know that via "explain $whatever" one can figure out what tables/columns exist (you have to guess names, though), and roughly how many records are in a table or how many records have a particular value for an indexed field. I don't expect one to be able to get any information about the contents of unindexed fields. DBD::mysql should not allow multiple statements, so I don't expect it to be possible to run any query (just explain one query). Even subqueries should not be executed, just explained. But I'm not a MySQL expert, and there are surely features of MySQL that I'm not even aware of.

    In trying to come up with a query plan, might the optimizer actually execute an expression in order to come up with the value that an indexed field is going to be compared against? Consider:

        explain select * from atable where class = somefunction(...)

    where atable.class is indexed and not unique, and class='unused' would find no records but class='common' would find a million records. Might EXPLAIN evaluate somefunction(...)? And then could somefunction(...) be written such that it modifies data?
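    This does not settle those edge cases, but one defence-in-depth measure is to run the user-supplied EXPLAINs on a dedicated connection whose MySQL account holds only the SELECT privilege, so that even if the optimizer did evaluate a data-modifying expression, the server should refuse the write (a stored function declared with SQL SECURITY DEFINER could still complicate this). A sketch, with hypothetical account and connection details:

        use strict;
        use warnings;
        use DBI;

        # Hypothetical locked-down account, created with something like:
        #   GRANT SELECT ON mydb.* TO 'explain_ro'@'%';
        my $dbh = DBI->connect('DBI:mysql:database=mydb;host=dbhost',
                               'explain_ro', 'secret',
                               { RaiseError => 1, PrintError => 0 });

        my $whatever = q{SELECT * FROM atable WHERE class = 'common'};  # stands in for user input
        my $plan = $dbh->selectall_arrayref("EXPLAIN $whatever");
        for my $row (@$plan) {
            print join("\t", map { defined $_ ? $_ : 'NULL' } @$row), "\n";
        }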

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an Apache2 + Ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash Ruby on the database connect line when run as a CGI from Apache2. Ruby CGIs that don't try to access the ODBC data source work fine. And (again) Ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use Perl instead of Ruby. So the issue seems to be with the environment provided for Ruby (Perl) by Apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these CGI scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail on any aspect if that will help.

    Details:

        Mac OS X Server 10.5.8, Xserve 2 x 2.66 Dual-Core Intel Xeon (12 GB)
        Apache 2.2.13
        ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
        ruby-odbc 0.9997, dbd-odbc (0.2.5), dbi (0.4.3), mod_ruby 1.3.0
        Perl 5.8.8, DBI 1.609, DBD::ODBC 1.23
        ODBC driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
        ODBC data source: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash in Apache but run successfully from a shell:

        #!/usr/bin/ruby
        require 'cgi'
        require 'odbc'
        cgi = CGI.new("html3")
        aConnection = ODBC::connect('DBFile', "username", 'password')
        aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
        aQuery.execute
        aRecord = aQuery.fetch_hash.inspect
        aQuery.drop
        aConnection.disconnect
        # aRecord = '{"zzz_kP_ID"=>81044.0}'
        cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }

    Example of running this from a shell:

        gamma% ./minimal.rb
        (offline mode: enter name=value pairs on standard input)
        Content-Type: text/html
        Content-Length: 134

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>

    Typical crash log lines:

        Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
        Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
        Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
        Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
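    Since the same scripts succeed from a shell, the first thing worth ruling out is the environment Apache hands to CGIs: ODBC stacks typically locate their configuration through variables such as ODBCINI and ODBCSYSINI, and drivers may depend on library-path variables that a login shell sets but Apache does not. A minimal diagnostic sketch (in Perl, since the failure reproduces there too):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Run once from the shell (./envdump.cgi > shell.txt) and once via
        # Apache, then diff the two outputs to see what the CGI is missing.
        print "Content-Type: text/plain\n\n";
        print "$_=", (defined $ENV{$_} ? $ENV{$_} : ''), "\n" for sort keys %ENV;

    If variables do turn out to be missing, they can be supplied in the Apache configuration with mod_env's SetEnv directive (e.g. SetEnv ODBCINI /Library/ODBC/odbc.ini – path hypothetical) rather than exported in a shell profile that Apache never reads.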

    Read the article

  • SQL SERVER – 3 Simple Puzzles – Need Your Suggestions

    - by pinaldave
    Last month, I posted three simple puzzles and got a very good response. I think there can be many interesting answers, and I would like to request all of you to take part in the puzzles and provide your answers. I plan to consolidate the answers and publish all the valid ones on this blog with due credit.

        SQL SERVER – Challenge – Puzzle – Usage of FAST Hint
        SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal
        SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists

    I am also thinking that, after such a long time, we should have the term Database Developer (DBD) in our dictionary, just like Database Administrator (DBA). I have also created a poll where I discuss this subject: SQL SERVER – Are you a Database Administrator or a Database Developer?

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What is the safe way to use fork with Apache::DBI under mod_perl2?

    - by codeholic
    I have a problem when I use Apache::DBI in child processes. The problem is that Apache::DBI provides a single handle for all processes which use it, so I get:

        DBD::mysql::db selectall_arrayref failed: Commands out of sync; you can't run this command now at /usr/local/www/apache22/data/test-fork.cgi line 20.

    Reconnecting doesn't help, since Apache::DBI reconnects in all processes, as I understood from the following error:

        The server encountered an internal error and was unable to complete your request. Error message: DBD driver has not implemented the AutoCommit attribute at /usr/local/lib/perl5/site_perl/5.8.9/Apache/DBI.pm line 283.

    Here's the original code:

        use Data::Dumper 'Dumper';
        use DBI ();

        my $dbh = DBI->connect($dsn, $username, $password, {
            RaiseError => 1,
            PrintError => 0,
        });

        my $file = "/tmp/test-fork.tmp";

        my $pid = fork;
        defined $pid or die "fork: $!";

        if ($pid) {
            my $rows = eval { $dbh->selectall_arrayref('SELECT SLEEP(1)') };
            print "Content-Type: text/plain\n\n";
            print $rows ? "parent: " . Dumper($rows) : $@;
        } else {
            my $rows = eval { $dbh->selectall_arrayref('SELECT SLEEP(1)') };
            open FH, '>', $file or die "$file: $!";
            print FH $rows ? "child: " . Dumper($rows) : $@;
            close FH;
        }

    The code I used for reconnection:

        ...
        else {
            $dbh->disconnect;
            $dbh = DBI->connect($dsn, $username, $password, $attrs);
            my $rows = eval { $dbh->selectall_arrayref('SELECT SLEEP(1)') };
            open FH, '>', $file or die "$file: $!";
            print FH $rows ? "child: " . Dumper($rows) : $@;
            close FH;
        }

    Is there a safe way to use Apache::DBI with forking? Is there a way to make it create a new connection, perhaps?
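    One approach worth trying – a sketch, not a tested fix – is to leave the parent's (Apache::DBI-cached) handle alone and give the child its own private connection. DBI's InactiveDestroy attribute stops the child's copy of the handle from closing the parent's socket when it is destroyed, and the dbi_connect_method attribute asks DBI to use the plain driver connect for that one call, bypassing whatever $DBI::connect_via (here, Apache::DBI) has installed. The child branch of the original script would become something like:

        else {
            # Child: never reuse the inherited handle -- just make sure that
            # destroying it cannot tear down the parent's connection.
            $dbh->{InactiveDestroy} = 1;

            # A fresh, private connection; dbi_connect_method => 'connect'
            # bypasses Apache::DBI's cache, so this handle is the child's only.
            my $child_dbh = DBI->connect($dsn, $username, $password, {
                RaiseError         => 1,
                PrintError         => 0,
                dbi_connect_method => 'connect',
            });

            my $rows = eval { $child_dbh->selectall_arrayref('SELECT SLEEP(1)') };
            open FH, '>', $file or die "$file: $!";
            print FH $rows ? "child: " . Dumper($rows) : $@;
            close FH;

            $child_dbh->disconnect;
            exit 0;   # don't fall through into parent-only code
        }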

    Read the article

< Previous Page | 1 2 3 4  | Next Page >