Search Results

Search found 31902 results on 1277 pages for 'sql backup'.


  • Delphi: how to create Firebird database programmatically

    - by Brad
    I'm using D2K9, Zeos 7Alpha, and Firebird 2.1 I had this working before I added the autoinc field. Although I'm not sure I was doing it 100% correctly. I don' know what order to do the SQL code, with the triggers, Generators, etc.. I've tried several combinations, I'm guessing I'm doing something wrong other than just that for this not to work. SQL File From IB Expert : /********************************************/ /* Generated by IBExpert 5/4/2010 3:59:48 PM / /*********************************************/ /********************************************/ /* Following SET SQL DIALECT is just for the Database Comparer / /*********************************************/ SET SQL DIALECT 3; /********************************************/ /* Tables / /*********************************************/ CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID; CREATE TABLE EMAIL_ACCOUNTS ( ID INTEGER NOT NULL, FNAME VARCHAR(35), LNAME VARCHAR(35), ADDRESS VARCHAR(100), CITY VARCHAR(35), STATE VARCHAR(35), ZIPCODE VARCHAR(20), BDAY DATE, PHONE VARCHAR(20), UNAME VARCHAR(255), PASS VARCHAR(20), EMAIL VARCHAR(255), CREATEDDATE DATE, "ACTIVE" BOOLEAN DEFAULT 0 NOT NULL /* BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) /, BANNED BOOLEAN DEFAULT 0 NOT NULL / BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) /, "PUBLIC" BOOLEAN DEFAULT 0 NOT NULL / BOOLEAN = SMALLINT CHECK (value is null or value in (0, 1)) */, NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024 ); /********************************************/ /* Primary Keys / /*********************************************/ ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID); /********************************************/ /* Triggers / /*********************************************/ SET TERM ^ ; /********************************************/ /* Triggers for tables / /*********************************************/ /* Trigger: EMAIL_ACCOUNTS_BI */ CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS ACTIVE BEFORE INSERT POSITION 0 AS BEGIN IF (NEW.ID IS NULL) THEN NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1); END ^ SET TERM ; ^ /********************************************/ /* Privileges / /*********************************************/ Triggers: /********************************************/ /* Following SET SQL DIALECT is just for the Database Comparer / /*********************************************/ SET SQL DIALECT 3; CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID; SET TERM ^ ; CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS ACTIVE BEFORE INSERT POSITION 0 AS BEGIN IF (NEW.ID IS NULL) THEN NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1); END ^ SET TERM ; ^ Generators: CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID; ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 2; /* Old syntax is: CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID; SET GENERATOR GEN_EMAIL_ACCOUNTS_ID TO 2; */ My Code: procedure TForm2.New1Click(Sender: TObject); var query:string; begin if JvOpenDialog1.Execute then begin ZConnection1.Disconnect; ZConnection1.Database:= jvOpenDialog1.FileName; if not FileExists(ZConnection1.database) then begin ZConnection1.Properties.Add('createnewdatabase=create database '''+ZConnection1.Database+''' user ''sysdba'' password ''masterkey'' page_size 4096 default character set iso8859_2;'); try ZConnection1.Connect; except ShowMessage('Error Connection To Database File'); application.Terminate; end; end else begin ShowMessage('Database File Already Exists.'); exit; end; end; query := 'CREATE DOMAIN BOOLEAN AS SMALLINT CHECK (value is null or value in (0, 1))'; 
Zconnection1.ExecuteDirect(query); query:='CREATE TABLE EMAIL_ACCOUNTS (ID INTEGER NOT NULL,FNAME VARCHAR(35),LNAME VARCHAR(35),'+ 'ADDRESS VARCHAR(100), CITY VARCHAR(35), STATE VARCHAR(35), ZIPCODE VARCHAR(20),' + 'BDAY DATE, PHONE VARCHAR(20), UNAME VARCHAR(255), PASS VARCHAR(20),' + 'EMAIL VARCHAR(255),CREATEDDATE DATE , '+ '"ACTIVE" BOOLEAN DEFAULT 0 NOT NULL,'+ 'BANNED BOOLEAN DEFAULT 0 NOT NULL,'+ '"PUBLIC" BOOLEAN DEFAULT 0 NOT NULL,' + 'NOTES BLOB SUB_TYPE 0 SEGMENT SIZE 1024)'; //ZConnection.ExecuteDirect('CREATE TABLE NOTES (noteTitle TEXT PRIMARY KEY,noteDate DATE,noteNote TEXT)'); Zconnection1.ExecuteDirect(query); { } query := 'CREATE SEQUENCE GEN_EMAIL_ACCOUNTS_ID;'+ 'ALTER SEQUENCE GEN_EMAIL_ACCOUNTS_ID RESTART WITH 1'; Zconnection1.ExecuteDirect(query); query := 'ALTER TABLE EMAIL_ACCOUNTS ADD PRIMARY KEY (ID)'; Zconnection1.ExecuteDirect(query); query := 'SET TERM ^'; Zconnection1.ExecuteDirect(query); query := 'CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS'+ 'ACTIVE BEFORE INSERT POSITION 0'+ 'AS'+ 'BEGIN'+ 'IF (NEW.ID IS NULL) THEN'+ 'NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID,1);'+ 'END'+ '^'+ 'SET TERM ; ^'; Zconnection1.ExecuteDirect(query); ZTable1.Active:=true; end;
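    A note on the likely sticking point: SET TERM is a client-side directive understood by isql and IBExpert, not by the Firebird server itself, so sending it through ExecuteDirect generally fails; the concatenated trigger string above is also missing spaces between keywords. A common approach (a sketch only, not verified against Zeos) is to send each DDL statement in its own ExecuteDirect call, with the whole trigger body as a single statement and no SET TERM:

      -- One ExecuteDirect call:
      CREATE GENERATOR GEN_EMAIL_ACCOUNTS_ID;

      -- A separate ExecuteDirect call; no terminator switching is needed
      -- because the whole trigger is passed as one statement:
      CREATE OR ALTER TRIGGER EMAIL_ACCOUNTS_BI FOR EMAIL_ACCOUNTS
      ACTIVE BEFORE INSERT POSITION 0
      AS
      BEGIN
        IF (NEW.ID IS NULL) THEN
          NEW.ID = GEN_ID(GEN_EMAIL_ACCOUNTS_ID, 1);
      END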

    Read the article

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt. C:\Documents and Settings\bob> C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt -pxxxx -u adobe -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql -B adobe --port=3306 --host=localhost mysqldump: Out of memory (Needed 10380928 bytes) mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server As you can see, I am using --quick and --skip-opt too; I cannot figure out what is causing the issue. The server log has the following messages: 100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for InnoDB: a BLOB with malloc! Total allocated memory InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2 InnoDB: Check if you should increase the swap file or InnoDB: ulimits of your operating system. InnoDB: On FreeBSD check you have compiled the OS with InnoDB: a big enough maximum process size. 100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes to retrieve InnoDB: a big column. Table name adobe/tb_form_data Any help on this is highly appreciated. P.S.: The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working. Thanks, Nishaz Salam
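    Not an answer from the original thread, but since the failure involves a large BLOB column and both client and server report memory exhaustion, one commonly suggested first step is to review the server's memory-related settings before rerunning the dump (the values below are placeholders, not recommendations):

      -- Inspect the current limits from any MySQL client:
      SHOW VARIABLES LIKE 'max_allowed_packet';
      SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

      -- Example only: raise the packet limit so large BLOB rows fit in one packet.
      SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;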

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now, I have a bunch of git repositories for various projects, several of which are on GitHub. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives as backup. The problem is that it's a private repository, and git doesn't allow checking out a specific folder (one that I could push to GitHub as a separate project, but have the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so they're useless for backup). Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/ and ~/code_projects/proj2/.git/), and after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then I do git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?

    Read the article

  • Modifying File while in use using Java

    - by Marquinio
    Hi all, I have a recurring Java JAR task that tries to modify a file every 60 seconds. The problem is that if a user is viewing the file, the Java program will not be able to modify it, and I get the typical IOException. Does anyone know if there is a way in Java to modify a file currently in use? Or what would be the best way to solve this problem? I was thinking of using the File canRead() and canWrite() methods to check whether the file is in use. If the file is in use, I'm thinking of making a backup copy of the data that could not be written. Then, after 60 seconds, add some logic to check whether the backup file is empty or not. If the backup file is not empty, add its contents to the main file; if it is empty, just add the new data to the main file. Of course, the first thing I will always do is check if the file is in use. Thanks for all your ideas.

    Read the article

  • Cannot reach jQuery (in parent document) from IFRAME

    - by Michael Joyner
    I have written a backup program for SugarCRM. My program sets a iframe to src=BACKUP.PHP My backup program sends updates to parent window with: echo "<script type='text/javascript'>parent.document.getElementById('file_size').value='".fileSize2human(filesize($_SESSION['archive_file_name']))."';parent.document.getElementById('file_count').value=".$_SESSION['archive_file_count'].";parent.document.getElementById('description').innerHTML += '".$log_entry."\\r\\n';parent.document.getElementById('description').scrollTop = parent.document.getElementById('description').scrollHeight;</script>"; echo str_repeat( ' ', 4096); flush(); ob_flush(); I have added a JQUERY UI PROGRESS BAR and I need to know how I update the progress bar on the parent window. I tried this: $percent_complete = $_SESSION['archive_file_count'] / $_SESSION['archive_total_files']; echo "<script type='text/javascript'>parent.document.jquery('#progressbar').animate_progressbar($percent_complete); </script>"; ......... and get this error in browser. Uncaught TypeError: Object [object HTMLDocument] has no method 'jquery' HOW CAN I UPDATE THE PROGRESS BAR IN PARENT DOCUMENT FROM THE IFRAME?

    Read the article

  • SoapUI & populating the database before tests

    - by mada
    SoapUI & database data needed. Hi, in a J2EE project (Spring WS + Hibernate + PostgreSQL + Maven 2 + Tomcat), we have created web services with full CRUD operations. We are currently testing them with SoapUI (not yet integrated into the Maven integration-test phase), and we need Tomcat to test them. Currently, before the test phase, a Maven plugin injects some data into the database once, before all tests are launched, through several SQL files. The problem is that to test some web services I need some data rows and their IDs (primary keys). A - What is the best way to create them? 1 - Populate one of the existing .sql test files. But then we have a big dependency: I have to be careful to use the same ID in the SQL insert and in SoapUI, and if any developer drops the value, the test will break. 2 - Create, among the test .sql files, a soapui.sql where ALL the data needed for the SoapUI tests is concentrated. The advantage is that maintenance will be much easier. 3 - Because I have web services available for all CRUD operations, I can create, before any test suite, a preliminary test suite called setup-testsuiteX. In it, I would use those web services to inject data into the database and save the IDs in variables thanks to the "property transfer" feature. 4 - Launch the SoapUI test XML file from a JUnit test and, in the setup method (we are using Spring Test), populate the database with DbUnit. Drawback: there is already data in the database and some conflicts may appear due to constraints, and I then have a dependency between the DbUnit XML file (with the primary key values) and the SoapUI test where those primary key values will be used. Advantage: an automatic rollback is possible for DbUnit data, but I don't know how it will react given that the data has already been used through the web services. 5 - Your suggestion? A better idea? Testing those web services creates data in the DB; what is the best way to remove it at the end? (In our DAO tests we were using an automatic rollback with Spring Test to keep the database clean.) Thanks in advance for your help, and sorry for my English, it is not my mother tongue. Regards.
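    As an illustration of option 2, a dedicated soapui.sql can pin the IDs the SoapUI tests rely on, with a matching cleanup script run afterwards; every table, column and ID below is made up for the example:

      -- soapui.sql: deterministic data reserved for the SoapUI test suite.
      INSERT INTO customer (id, name) VALUES (9001, 'soapui-customer');
      INSERT INTO customer_order (id, customer_id, status) VALUES (9002, 9001, 'NEW');

      -- soapui-cleanup.sql: remove exactly what the test data created.
      DELETE FROM customer_order WHERE id = 9002;
      DELETE FROM customer WHERE id = 9001;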

    Read the article

  • Executing procedural queries in Perl

    - by KalpaG
    I have a query like this set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id as project, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11; it executes fine in heidi (MySQL client) but when it comes to perl, it gives me this error DBD::mysql::st execute failed: You have an error in your SQL syntax; check the m anual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 at D:\Mx\scripts\test.pl line 35. Can't execute SQL statement: You have an error in your SQL syntax; check the man ual that corresponds to your MySQL server version for the right syntax to use ne ar ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 The problem seem to be in the set @valid_total := 0; line. I am fairly new to Perl. Can anyone help? this is the complete perl code #!/usr/bin/perl #use lib '/x01/home/kalpag/libs'; use DBI; use strict; use warnings; my $sid = 'issues'; my $user = 'root'; my $passwd = 'kalpa'; my $connection = "DBI:mysql:database=$sid;host=localhost"; my $dbhh = DBI->connect( $connection, $user, $passwd) || die "Database connection not made: $DBI::errstr"; my $sql_query = 'set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11'; my $sth = $dbhh->prepare($sql_query) or die "Can't prepare SQL statement: $DBI::errstr\n"; $sth->execute() or die "Can't execute SQL statement: $DBI::errstr\n"; while ( my @memory = $sth->fetchrow() ) { print "@memory \n"; }
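    The usual explanation is that DBD::mysql sends one statement per prepare/do call (unless multi-statement support is explicitly enabled), so the string above is rejected at the first semicolon. Splitting it into three separate calls, each carrying one of these statements, is the common fix:

      -- Statement 1 (its own $dbhh->do call):
      SET @valid_total := 0;

      -- Statement 2 (its own $dbhh->do call):
      SET @invalid_total := 0;

      -- Statement 3 (prepare/execute as before):
      SELECT week AS weekno, measured_week, project_id AS project,
             role_category_id AS role_category,
             valid_count, valid_tickets,
             (@valid_total := @valid_total + valid_count) AS valid_total,
             invalid_count, invalid_tickets,
             (@invalid_total := @invalid_total + invalid_count) AS invalid_total
      FROM   metric_fault_bug_project
      WHERE  measured_week = YEARWEEK(CURDATE())
        AND  role_category_id = 1
        AND  project_id = 11;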

    Read the article

  • Can't Set Crystal Report Selection Formula Programmatically

    - by eidylon
    Hello all, first, I can't stand Crystal! Okay, that's off my chest... Now, we have an old VB6 app we maintain for a client, which uses the Crystal Automation library to programatically change the record selection formulas in a bunch of Crystal Reports 8.5 reports. There are two reports which are ALMOST identical. I had to change them recently to add another field from another table. When I added the table in to the reports though, while it added it in the visual designer, it did not add it in the FROM clause of the SQL statement. So, I manually edited the SQL statement to add in the additional join. KO, works great. If i run the reports in Crystal preview mode, they work exactly as expected. Now, the users went to test the changes from within the VB app. One of the reports works fine and dandy. The other report however, is failing to set the selection formula as expected. The code sets the selection formulas using the function PESetSelectionFormula. I verified that the string being passed in to the function as the new selection formula is correct via a step-through examination of the variables. The call to PESetSelectionFormula seems to be working okay, and is returning a value of 1, which as near as I can find anywhere indicates success. (The other report, which is working fine from code is also returning 1.) However, the report is failing with an error: Error Code: 534 - Error detected by database DLL. The code, for debugging purposes dumps out the SQL string currently being used by the report. The SQL coming out of the report is: SELECT ... FROM ... WHERE ORDER BY ... As you can see, the WHERE clause is blank, which I would imagine is why the database DLL is upchucking on this statement. I don't understand why the automation library is not setting the WHERE clause even though the call to PESetSelectionFormula is being passed a valid string and is returning success. I thought perhaps it was because I had manually edited the SQL in the report to add the table it wasn't adding, but I did the same thing in the other nearly identical report, and that one is working fine. Anyone have any ideas why PESetSelectionFormula might report success but not actually do anything? P.S. I have already tried doing a Database Verify Database from the menus, and that said the report was all up to date and did not help at all.

    Read the article

  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read there is still some debate going on on this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now by introducing it into the application business layer. I have used MS for many years, but one gripe I have with them is that they change their direction a lot. By a lot I mean new releases of .NET, Silverlight, etc. are more than 30% different from the previous version, so by the time you become productive a new release is on the way. As things stand now, a web developer using .NET would need to know either VB.NET or C#, plus XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language. Are we looking at a programming revolution or a marketing campaign? As far as I know, other languages like COBOL have stayed the same for years; a COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes be needed if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same? Regards, Josef

    Read the article

  • Count number of queries executed by NHibernate in a unit test

    - by Bittercoder
    In some unit/integration tests of the code we wish to check that correct usage of the second level cache is being employed by our code. Based on the code presented by Ayende here: http://ayende.com/Blog/archive/2006/09/07/MeasuringNHibernatesQueriesPerPage.aspx I wrote a simple class for doing just that: public class QueryCounter : IDisposable { CountToContextItemsAppender _appender; public int QueryCount { get { return _appender.Count; } } public void Dispose() { var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger; logger.RemoveAppender(_appender); } public static QueryCounter Start() { var logger = (Logger) LogManager.GetLogger("NHibernate.SQL").Logger; lock (logger) { foreach (IAppender existingAppender in logger.Appenders) { if (existingAppender is CountToContextItemsAppender) { var countAppender = (CountToContextItemsAppender) existingAppender; countAppender.Reset(); return new QueryCounter {_appender = (CountToContextItemsAppender) existingAppender}; } } var newAppender = new CountToContextItemsAppender(); logger.AddAppender(newAppender); logger.Level = Level.Debug; logger.Additivity = false; return new QueryCounter {_appender = newAppender}; } } public class CountToContextItemsAppender : IAppender { int _count; public int Count { get { return _count; } } public void Close() { } public void DoAppend(LoggingEvent loggingEvent) { if (string.Empty.Equals(loggingEvent.MessageObject)) return; _count++; } public string Name { get; set; } public void Reset() { _count = 0; } } } With intended usage: using (var counter = QueryCounter.Start()) { // ... do something Assert.Equal(1, counter.QueryCount); // check the query count matches our expectations } But it always returns 0 for Query count. No sql statements are being logged. However if I make use of Nhibernate Profiler and invoke this in my test case: NHibernateProfiler.Intialize() Where NHProf uses a similar approach to capture logging output from NHibernate for analysis via log4net etc. then my QueryCounter starts working. It looks like I'm missing something in my code to get log4net configured correctly for logging nhibernate sql ... does anyone have any pointers on what else I need to do to get sql logging output from Nhibernate?

    Read the article

  • Importing a large delimited file to a MySQL table

    - by Tom
    I have this large (and oddly formatted txt file) from the USDA's website. It is the NUT_DATA.txt file. But the problem is that it is almost 27mb! I was successful in importing the a few other smaller files, but my method was using file_get_contents which it makes sense why an error would be thrown if I try to snag 27+ mb of RAM. So how can I import this massive file to my MySQL DB without running into a timeout and RAM issue? I've tried just getting one line at a time from the file, but this ran into timeout issue. Using PHP 5.2.0. Here is the old script (the fields in the DB are just numbers because I could not figure out what number represented what nutrient, I found this data very poorly document. Sorry about the ugliness of the code): <? $file = "NUT_DATA.txt"; $data = split("\n", file_get_contents($file)); // split each line $link = mysql_connect("localhost", "username", "password"); mysql_select_db("database", $link); for($i = 0, $e = sizeof($data); $i < $e; $i++) { $sql = "INSERT INTO `USDA` (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17) VALUES("; $row = split("\^", trim($data[$i])); // split each line by carrot for ($j = 0, $k = sizeof($row); $j < $k; $j++) { $val = trim($row[$j], '~'); $val = (empty($val)) ? 0 : $val; $sql .= ((empty($val)) ? 0 : $val) . ','; // this gets rid of those tildas and replaces empty strings with 0s } $sql = rtrim($sql, ',') . ");"; mysql_query($sql) or die(mysql_error()); // query the db } echo "Finished inserting data into database.\n"; mysql_close($link); ?>
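    An alternative worth considering (assuming the MySQL account has the relevant FILE/LOCAL privileges and the table's columns line up with the file): let the server import the caret-delimited, tilde-quoted file directly instead of reading 27 MB into PHP. The empty-string-to-0 substitution from the script would then need separate handling, e.g. an UPDATE afterwards.

      -- Sketch only: server-side import of the USDA file.
      LOAD DATA LOCAL INFILE 'NUT_DATA.txt'
      INTO TABLE USDA
      FIELDS TERMINATED BY '^' OPTIONALLY ENCLOSED BY '~'
      LINES TERMINATED BY '\n';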

    Read the article

  • What's wrong with this PHP mysql_real_escape_string?

    - by skyhigh
    Hi Atomic Number Latin English Abbreviation * check the variables for content */ /*** a list of filters ***/ $filters = array( 'searchtext' => array( 'filter' => FILTER_CALLBACK, 'options' => 'mysql_real_escape_string'), 'fieldname' => array( 'filter' => FILTER_CALLBACK, 'options' => 'mysql_real_escape_string') ); /*** escape all POST variables ***/ $input = filter_input_array(INPUT_POST, $filters); /*** check the values are not empty ***/ if(empty($input['fieldname']) || empty($input['searchtext'])) { echo 'Invalid search'; } else { /*** mysql hostname ***/ $hostname = 'localhost'; /*** mysql username ***/ $username = 'username'; /*** mysql password ***/ $password = 'password'; /*** mysql database name ***/ $dbname = 'periodic_table'; /*** connect to the database ***/ $link = @mysql_connect($hostname, $username, $password); /*** check if the link is a valid resource ***/ if(is_resource($link)) { /*** select the database we wish to use ***/ if(mysql_select_db($dbname, $link) === TRUE) { /*** sql to SELECT information***/ $sql = sprintf("SELECT * FROM elements WHERE %s = '%s'", $input['fieldname'], $input['searchtext']); /*** echo the sql query ***/ echo '<h3>'.$sql.'</h3>'; /*** run the query ***/ $result = mysql_query($sql); /*** check if the result is a valid resource ***/ if(is_resource($result)) { /*** check if we have more than zero rows ***/ if(mysql_num_rows($result) !== 0) { echo '<table>'; while($row=mysql_fetch_array($result)) { echo '<tr> <td>'.$row['atomicnumber'].'</td> <td>'.$row['latin'].'</td> <td>'.$row['english'].'</td> <td>'.$row['abbr'].'</td> </tr>'; } echo '</table>'; } else { /*** if zero results are found.. ***/ echo 'Zero results found'; } } else { /*** if the resource is not valid ***/ 'No valid resource found'; } } /*** if we are unable to select the database show an error ****/ else { echo 'Unable to select database '.$dbname; } /*** close the connection ***/ mysql_close($link); } else { /*** if we fail to connect ***/ echo 'Unable to connect'; } } } else { echo 'Please Choose An Element'; } ? I got this code from phppro.org tutorials site and i tried to run it. It gives Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: A link to the server could not be established. .... Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user 'ODBC'@'localhost' (using password: NO).... I went to php.net and look it up "Note: A MySQL connection is required before using mysql_real_escape_string() otherwise an error of level E_WARNING is generated, and FALSE is returned. If link_identifier isn't defined, the last MySQL connection is used." My questions are: 1-why they put single quotation around mysql_real_escape_string ? 2-They should establish a connection first, then use the $filter array statement with mysql_real_escape_string ?

    Read the article

  • Converting this code from ASP to PHP

    - by jethomas
    I'll admit I'm a novice programmer and really the only experience I have is in classic ASP. I'm looking for a way to convert this asp code to PHP. For a customer who only has access to a linux box but also as a learning tool for me. Thanks in advance for the help: Recordset and Function: Function pd(n, totalDigits) if totalDigits > len(n) then pd = String(totalDigits-len(n),"0") & n else pd = n end if End Function 'declare the variables Dim Connection Dim Recordset Dim SQL Dim SQLDate SQLDate = Year(Date)& "-" & pd(Month(Date()),2)& "-" & pd(Day(Date()),2) 'declare the SQL statement that will query the database SQL = "SELECT * FROM tblXYZ WHERE element_8 = 2 AND element_9 > '" & SQLDate &"'" 'create an instance of the ADO connection and recordset objects Set Connection = Server.CreateObject("ADODB.Connection") Set Recordset = Server.CreateObject("ADODB.Recordset") 'open the connection to the database Connection.Open "PROVIDER=MSDASQL;DRIVER={MySQL ODBC 5.1 Driver};SERVER=localhost;UID=xxxxx;PWD=xxxxx;database=xxxxx;Option=3;" 'Open the recordset object executing the SQL statement and return records Recordset.Open SQL,Connection Display page/loop: Dim counter counter = 0 While Not Recordset.EOF counter = counter + 1 response.write("<div><td width='200' valign='top' align='center'><a href='" & Recordset("element_6") & "' style='text-decoration: none;'><div id='ad_header'>" & Recordset("element_3") & "</div><div id='store_name' valign='bottom'>" & Recordset("element_5") & "</div><img id='photo-small-img' src='http://xyz.com/files/" & Recordset("element_7") & "' /><br /><div id='ad_details'>"& Recordset("element_4") & "</div></a></td></div>") Recordset.MoveNext If counter = 3 Then Response.Write("</tr><tr>") counter = 0 End If Wend

    Read the article

  • PHP uploads and checking

    - by user147685
    Hi all, I want to update/insert file by using upload box to the database but before that it will check for the file type thats only pdf can be upload. Somthing wrong with the codes and i dont know what..Please help here are part of my updates codes: $id=$rs['id']; $qry = "SELECT a.faillampiran FROM {$CFG->prefix}ptk_lampiran a, {$CFG->prefix}ptk b WHERE a.ptkid=$id and b.id=$id"; $sql = get_records_sql($qry); if($_POST['check']){ $ext = pathinfo($faillampiran,PATHINFO_EXTENSION); $err = "Upload Only PDF File. "; if ($ext =='pdf'){ $qry="UPDATE {$CFG->prefix}ptk_lampiran SET faillampiran='".$faillampiran."' WHERE ptkid='".$_GET['id']."'"; $sql=mysql_query($qry); $qry = "SELECT a.faillampiran FROM {$CFG->prefix}ptk_lampiran a, {$CFG->prefix}ptk b WHERE b.id = '".$rs[id]."' AND a.ptkid = '".$rs[id]."' "; $sql = get_records_sql($qry); foreach($sql as $rs){ ?> <?=basename($rs->faillampiran); ?><br> <? } ?> <tr><td></td><td></td><td> <input type="file" size="50" name="faillampiran" alt="faillampiran" value= "<?=$faillampiran;?>" /> <input type="submit" name="edit" value="Muat naik fail ini" /><br /> </td></tr> } else { echo "<script>alert('$err')</script>"; } } else { foreach($sql as $rs){ ?> <?=basename($rs->faillampiran); ?><br> <? } ?> <input type="file" size="50" name="faillampiran" alt="faillampiran" value= "<?=$faillampiran;?>" /> <input type="submit" name="edit" value="Muat naik fail ini" /><br /> <?} Sori bout the messiness of the codes, im still a beginner in this languages. thx

    Read the article

  • SSIS - user variable used in derived column transform is not available - in some cases

    - by soo
    Unfortunately I don't have a repro for my issue, but I thought I would try to describe it in case it sounds familiar to someone... I am using SSIS 2005, SP2. My package has a package-scope user variable - let's call it user_var first step in the control flow is an Execute SQL task which runs a stored procedure. All that SP does is insert a record in a SQL table (with an identity column) and then go back and get the max ID value. The Execute SQL task saves this output into user_var the control flow then has a Data Flow Task - it goes and gets some source data, has a derived column which sets a column called run_id to user_var - and saves the data to a SQL destination In most cases (this template is used for many packages, running every day) this all works great. All of the destination records created get set with a correct run_id. However, in some cases, there is a set of the destination data that does not get run_id equal to user_var, but instead gets a value of 0 (0 is the default value for user_var). I have 2 instances where this has happened, but I can't make it happen. In both cases, it was just less that 10,000 records that have run_id = 0. Since SSIS writes data out in 10,000 record blocks, this really makes me think that, for the first set of data written out, user_var was not yet set. Then, after that first block, for the rest of the data, run_id is set to a correct value. But control passed on to my data flow from the Execute SQL task - it would have seemed reasonable to me that it wouldn't go on until the SP has completed and user_var is set. Maybe it just runs the SP, but doesn't wait for it to complete? In both cases where this has happened there seemed to be a few packages hitting the table to get a new user_var at about the same time. And in both cases lots of data was written (40 million rows, 60 million rows) - my thinking is that that means the writes were happening for a while. Sorry to be both long-winded AND vague. A winning combination! Does this sound familiar to anyone? Thanks.
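    For reference, a minimal sketch of the kind of stored procedure described above, returning the new identity so the Execute SQL task can map the single-row result to user_var (table and column names are assumptions, and SCOPE_IDENTITY() stands in for "go back and get the max ID value"):

      -- Hypothetical run-control table; names are illustrative only.
      CREATE TABLE dbo.RunControl
      (
          run_id     int IDENTITY(1,1) PRIMARY KEY,
          started_at datetime NOT NULL DEFAULT GETDATE()
      );
      GO
      CREATE PROCEDURE dbo.GetNewRunId
      AS
      BEGIN
          SET NOCOUNT ON;
          INSERT dbo.RunControl DEFAULT VALUES;
          -- Single-row result set that the Execute SQL task binds to user_var.
          SELECT CAST(SCOPE_IDENTITY() AS int) AS run_id;
      END;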

    Read the article

  • MERGE Bug with Filtered Indexes

    - by Paul White
    A MERGE statement can fail, and incorrectly report a unique key violation when: The target table uses a unique filtered index; and No key column of the filtered index is updated; and A column from the filtering condition is updated; and Transient key violations are possible Example Tables Say we have two tables, one that is the target of a MERGE statement, and another that contains updates to be applied to the target.  The target table contains three columns, an integer primary key, a single character alternate key, and a status code column.  A filtered unique index exists on the alternate key, but is only enforced where the status code is ‘a’: CREATE TABLE #Target ( pk integer NOT NULL, ak character(1) NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) );   CREATE UNIQUE INDEX uq1 ON #Target (ak) INCLUDE (status_code) WHERE status_code = 'a'; The changes table contains just an integer primary key (to identify the target row to change) and the new status code: CREATE TABLE #Changes ( pk integer NOT NULL, status_code character(1) NOT NULL,   PRIMARY KEY (pk) ); Sample Data The sample data for the example is: INSERT #Target (pk, ak, status_code) VALUES (1, 'A', 'a'), (2, 'B', 'a'), (3, 'C', 'a'), (4, 'A', 'd');   INSERT #Changes (pk, status_code) VALUES (1, 'd'), (4, 'a');          Target                     Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ a           ¦    ¦  1 ¦ d           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ d           ¦ +-----------------------+ The target table’s alternate key (ak) column is unique, for rows where status_code = ‘a’.  Applying the changes to the target will change row 1 from status ‘a’ to status ‘d’, and row 4 from status ‘d’ to status ‘a’.  The result of applying all the changes will still satisfy the filtered unique index, because the ‘A’ in row 1 will be deleted from the index and the ‘A’ in row 4 will be added. Merge Test One Let’s now execute a MERGE statement to apply the changes: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; The MERGE changes the two target rows as expected.  The updated target table now contains: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦ <—changed from ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦ <—changed from ‘d’ +-----------------------+ Merge Test Two Now let’s repopulate the changes table to reverse the updates we just performed: TRUNCATE TABLE #Changes;   INSERT #Changes (pk, status_code) VALUES (1, 'a'), (4, 'd'); This will change row 1 back to status ‘a’ and row 4 back to status ‘d’.  
As a reminder, the current state of the tables is:          Target                        Changes +-----------------------+    +------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ d           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+ ¦  4 ¦ A  ¦ a           ¦ +-----------------------+ We execute the same MERGE statement: MERGE #Target AS t USING #Changes AS c ON c.pk = t.pk WHEN MATCHED AND c.status_code <> t.status_code THEN UPDATE SET status_code = c.status_code; However this time we receive the following message: Msg 2601, Level 14, State 1, Line 1 Cannot insert duplicate key row in object 'dbo.#Target' with unique index 'uq1'. The duplicate key value is (A). The statement has been terminated. Applying the changes using UPDATE Let’s now rewrite the MERGE to use UPDATE instead: UPDATE t SET status_code = c.status_code FROM #Target AS t JOIN #Changes AS c ON t.pk = c.pk WHERE c.status_code <> t.status_code; This query succeeds where the MERGE failed.  The two rows are updated as expected: +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦ ¦  1 ¦ A  ¦ a           ¦ <—changed back to ‘a’ ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ d           ¦ <—changed back to ‘d’ +-----------------------+ What went wrong with the MERGE? In this test, the MERGE query execution happens to apply the changes in the order of the ‘pk’ column. In test one, this was not a problem: row 1 is removed from the unique filtered index by changing status_code from ‘a’ to ‘d’ before row 4 is added.  At no point does the table contain two rows where ak = ‘A’ and status_code = ‘a’. In test two, however, the first change was to change row 1 from status ‘d’ to status ‘a’.  This change means there would be two rows in the filtered unique index where ak = ‘A’ (both row 1 and row 4 meet the index filtering criteria ‘status_code = a’). The storage engine does not allow the query processor to violate a unique key (unless IGNORE_DUP_KEY is ON, but that is a different story, and doesn’t apply to MERGE in any case).  This strict rule applies regardless of the fact that if all changes were applied, there would be no unique key violation (row 4 would eventually be changed from ‘a’ to ‘d’, removing it from the filtered unique index, and resolving the key violation). Why it went wrong The query optimizer usually detects when this sort of temporary uniqueness violation could occur, and builds a plan that avoids the issue.  I wrote about this a couple of years ago in my post Beware Sneaky Reads with Unique Indexes (you can read more about the details on pages 495-497 of Microsoft SQL Server 2008 Internals or in Craig Freedman’s blog post on maintaining unique indexes).  To summarize though, the optimizer introduces Split, Filter, Sort, and Collapse operators into the query plan to: Split each row update into delete followed by an inserts Filter out rows that would not change the index (due to the filter on the index, or a non-updating update) Sort the resulting stream by index key, with deletes before inserts Collapse delete/insert pairs on the same index key back into an update The effect of all this is that only net changes are applied to an index (as one or more insert, update, and/or delete operations).  
In this case, the net effect is a single update of the filtered unique index: changing the row for ak = ‘A’ from pk = 4 to pk = 1.  In case that is less than 100% clear, let’s look at the operation in test two again:          Target                     Changes                   Result +-----------------------+    +------------------+    +-----------------------+ ¦ pk ¦ ak ¦ status_code ¦    ¦ pk ¦ status_code ¦    ¦ pk ¦ ak ¦ status_code ¦ ¦----+----+-------------¦    ¦----+-------------¦    ¦----+----+-------------¦ ¦  1 ¦ A  ¦ d           ¦    ¦  1 ¦ d           ¦    ¦  1 ¦ A  ¦ a           ¦ ¦  2 ¦ B  ¦ a           ¦    ¦  4 ¦ a           ¦    ¦  2 ¦ B  ¦ a           ¦ ¦  3 ¦ C  ¦ a           ¦    +------------------+    ¦  3 ¦ C  ¦ a           ¦ ¦  4 ¦ A  ¦ a           ¦                            ¦  4 ¦ A  ¦ d           ¦ +-----------------------+                            +-----------------------+ From the filtered index’s point of view (filtered for status_code = ‘a’ and shown in nonclustered index key order) the overall effect of the query is:   Before           After +---------+    +---------+ ¦ pk ¦ ak ¦    ¦ pk ¦ ak ¦ ¦----+----¦    ¦----+----¦ ¦  4 ¦ A  ¦    ¦  1 ¦ A  ¦ ¦  2 ¦ B  ¦    ¦  2 ¦ B  ¦ ¦  3 ¦ C  ¦    ¦  3 ¦ C  ¦ +---------+    +---------+ The single net change there is a change of pk from 4 to 1 for the nonclustered index entry ak = ‘A’.  This is the magic performed by the split, sort, and collapse.  Notice in particular how the original changes to the index key (on the ‘ak’ column) have been transformed into an update of a non-key column (pk is included in the nonclustered index).  By not updating any nonclustered index keys, we are guaranteed to avoid transient key violations. The Execution Plans The estimated MERGE execution plan that produces the incorrect key-violation error looks like this (click to enlarge in a new window): The successful UPDATE execution plan is (click to enlarge in a new window): The MERGE execution plan is a narrow (per-row) update.  The single Clustered Index Merge operator maintains both the clustered index and the filtered nonclustered index.  The UPDATE plan is a wide (per-index) update.  The clustered index is maintained first, then the Split, Filter, Sort, Collapse sequence is applied before the nonclustered index is separately maintained. There is always a wide update plan for any query that modifies the database. The narrow form is a performance optimization where the number of rows is expected to be relatively small, and is not available for all operations.  One of the operations that should disallow a narrow plan is maintaining a unique index where intermediate key violations could occur. Workarounds The MERGE can be made to work (producing a wide update plan with split, sort, and collapse) by: Adding all columns referenced in the filtered index’s WHERE clause to the index key (INCLUDE is not sufficient); or Executing the query with trace flag 8790 set e.g. OPTION (QUERYTRACEON 8790). Undocumented trace flag 8790 forces a wide update plan for any data-changing query (remember that a wide update plan is always possible).  Either change will produce a successfully-executing wide update plan for the MERGE that failed previously. Conclusion The optimizer fails to spot the possibility of transient unique key violations with MERGE under the conditions listed at the start of this post.  
It incorrectly chooses a narrow plan for the MERGE, which cannot provide the protection of a split/sort/collapse sequence for the nonclustered index maintenance. The MERGE plan may fail at execution time depending on the order in which rows are processed, and the distribution of data in the database.  Worse, a previously solid MERGE query may suddenly start to fail unpredictably if a filtered unique index is added to the merge target table at any point. Connect bug filed here Tests performed on SQL Server 2012 SP1 CUI (build 11.0.3321) x64 Developer Edition © 2012 Paul White – All Rights Reserved Twitter: @SQL_Kiwi Email: [email protected]
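    For reference, the second workaround above applied to the failing statement looks roughly like this; QUERYTRACEON 8790 is undocumented, so treat it as a diagnostic sketch rather than a production fix:

      MERGE #Target AS t
      USING #Changes AS c
          ON c.pk = t.pk
      WHEN MATCHED AND c.status_code <> t.status_code THEN
          UPDATE SET status_code = c.status_code
      OPTION (QUERYTRACEON 8790);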

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VM's, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless applications deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site, the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service top route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. 
Storage is inexpensive; and that second copy can be used for not only HA but DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) You have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have a massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • Windows Azure Learning Plan - Security

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with Security for  Windows Azure.   General Security Information Overview and general  information about Windows Azure Security - what it is, how it works, and where you can learn more. General Security Whitepaper – answers most questions http://blogs.msdn.com/b/usisvde/archive/2010/08/10/security-white-paper-on-windows-azure-answers-many-faq.aspx Windows Azure Security Notes from the Patterns and Practices site http://blogs.msdn.com/b/jmeier/archive/2010/08/03/now-available-azure-security-notes-pdf.aspx Overview of Azure Security http://www.windowsecurity.com/articles/Microsoft-Azure-Security-Cloud.html Azure Security Resources http://reddevnews.com/articles/2010/08/19/microsoft-releases-windows-azure-security-resources.aspx Cloud Computing Security Considerations http://www.microsoft.com/downloads/en/details.aspx?FamilyID=68fedf9c-1c27-4642-aa5b-0a34472303ea&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center Security in Cloud Computing – a Microsoft Perspective http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7c8507e8-50ca-4693-aa5a-34b7c24f4579&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+MicrosoftDownloadCenter+%28Microsoft+Download+Center Physical Security for Microsoft’s Online Computing Information on the Infrastructure and Locations for Azure Physical Security. The Global Foundation Services Group at Microsoft handles physical security http://www.globalfoundationservices.com/security/index.html Microsoft’s Security Response Center http://www.microsoft.com/security/msrc/ Software Security for Microsoft’s Online Computing Steps we take as a company to develop secure software Windows Azure is developed using the Trustworthy Computing Initiative http://www.microsoft.com/about/twc/en/us/default.aspx and  http://msdn.microsoft.com/en-us/library/ms995349.aspx Identity and Access in the Cloud http://blogs.msdn.com/b/technology_titbits_by_rajesh_makhija/archive/2010/10/29/identity-and-access-in-the-cloud.aspx Security Steps you should take While Microsoft takes great pains to secure the infrastructure, platform and code for Windows Azure, you have a responsibility to write secure code. These pointers can help you do that. Securing your cloud architecture, step-by-step http://technet.microsoft.com/en-us/magazine/gg296364.aspx Security Guidelines for Windows Azure http://redmondmag.com/articles/2010/06/15/microsoft-issues-security-guidelines-for-windows-azure.aspx  Best Practices for Windows Azure Security http://blogs.msdn.com/b/vbertocci/archive/2010/06/14/security-best-practices-for-developing-windows-azure-applications.aspx Active Directory and Windows Azure http://blogs.msdn.com/b/plankytronixx/archive/2010/10/22/projecting-your-active-directory-identity-to-the-azure-cloud.aspx Understanding Encryption (great overview and tutorial) http://blogs.msdn.com/b/plankytronixx/archive/2010/10/23/crypto-primer-understanding-encryption-public-private-key-signatures-and-certificates.aspx Securing your Connection Strings (SQL Azure) http://blogs.msdn.com/b/sqlazure/archive/2010/09/07/10058942.aspx Getting started with Windows Identity Foundation (WIF) quickly http://blogs.msdn.com/b/alikl/archive/2010/10/26/windows-identity-foundation-wif-fast-track.aspx

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting, that will allow full control over data management, and deliver significant gains in rendering performance and memory consumption. Some of the engines new features will be: Data source parameters - those parameters will be used to limit data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side, however only queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on queried data only, rather than on all data. As a result, only the queried data needs to be stored in the memory vs. the whole dataset, which was the case with the old approach Support for stored procedures - they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored Procedures will not only save development time, but they will also improve performance, because each stored procedure is compiled on the data base server once, and then is reutilized. In Telerik Reporting, the stored procedure will also be parameterized, where elements of the SQL statement will be bound to parameters. These parameterized SQL queries will be handled through the data source parameters, and are evaluated at run time. Using parameterized SQL queries will improve the performance and decrease the memory footprint of your application, because they will be applied directly on the database server and only the necessary data will be downloaded on the middle tier or client machine; Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports; Improved performance and optimized in-memory OLAP engine - the new data source will come with several improvements in how aggregates are calculated, and memory is managed. As a result, you may experience between 30% (for simpler reports) and 400% (for calculation-intensive reports) in rendering performance, and about 50% decrease in memory consumption. Full design time support through wizards - Declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reportings essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design-time. More features will be revealed on the product's what's new page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th that will be dedicated to the updates in Telerik Reporting Q1 2010. 
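    To make the stored-procedure point concrete, a parameterized procedure of the kind a report data source could bind its parameters to might look like the sketch below; it is purely illustrative and not taken from Telerik's documentation:

      -- Illustrative only: filtering happens on the database server,
      -- so only the rows the report needs are fetched to the reporting engine.
      CREATE PROCEDURE dbo.GetOrdersByRegion
          @Region   nvarchar(50),
          @FromDate date
      AS
      BEGIN
          SET NOCOUNT ON;
          SELECT OrderId, Region, OrderDate, Total
          FROM   dbo.Orders
          WHERE  Region = @Region
                 AND OrderDate >= @FromDate;
      END;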

    Read the article

  • Windows Azure Use Case: Fast Acquisitions

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Many organizations absorb, take over or merge with other organizations. In these cases, one of the most difficult parts of the process is the merging or changing of the IT systems that the employees use to do their work, process payments, and even get paid. Normally this means that the two companies have disparate systems, and several approaches can be used to have the two organizations use technology between them. An organization may choose to retain both systems, and manage them separately. The advantage here is speed, and keeping the profit/loss sheets separate. Another choice is to slowly “sunset” or stop using one organization’s system, and cutting to the other system immediately or at a later date. Although a popular choice, one of the most difficult methods is to extract data and processes from one system and import it into the other. Employees at the transitioning system have to be trained on the new one, the data must be examined and cleansed, and there is inevitable disruption when this happens. Still another option is to integrate the systems. This may prove to be as much work as a transitional strategy, but may have less impact on the users or the balance sheet. Implementation: A distributed computing paradigm can be a good strategic solution to most of these strategies. Retaining both systems is made more simple by allowing the users at the second organization immediate access to the new system, because security accounts can be created quickly inside an application. There is no need to set up a VPN or any other connections than just to the Internet. Having the users stop using one system and start with the other is also simple in Windows Azure for the same reason. Extracting data to Azure holds the same limitations as an on-premise system, and may even be more problematic because of the large data transfers that might be required. In a distributed environment, you pay for the data transfer, so a mixed migration strategy is not recommended. However, if the data is slowly migrated over time with a defined cutover, this can be an effective strategy. If done properly, an integration strategy works very well for a distributed computing environment like Windows Azure. If the Azure code is architected as a series of services, then endpoints can expose the service into and out of not only the Azure platform, but internally as well. This is a form of the Hybrid Application use-case documented here. References: Designing for Cloud Optimized Architecture: http://blogs.msdn.com/b/dachou/archive/2011/01/23/designing-for-cloud-optimized-architecture.aspx 5 Enterprise steps for adopting a Platform as a Service: http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0

    Read the article

  • How many people will be with you during 24HOP?

    - by Rob Farley
    In less than a week, SQLPASS hosts another 24 Hours of PASS event, this time with an array of 24 female speakers (in honour of this month being Women’s History Month). Interestingly, the committee has had a few people ask if there are rules about how the event can be viewed, such as “How many people from any one organisation can watch it?” or “Does it matter if a few people are crowded around the same screen?” From a licensing and marketing perspective, there is value in knowing how many people are watching the event, but there are no restrictions on how it is viewed. In fact – if you’re planning to watch any of these events, I want to suggest an idea: Book a meeting room in your office with a projector, and watch 24HOP in there. If you’re planning to have it streaming in the background while you work, obviously this makes life a bit trickier. But if you’re planning to treat it as a training event (a 2-day conference if you like) and block out a bit of time for it (as well you should – there’s going to be some great stuff in there), then why not do it in a way that lets other people see that you’re watching it, and potentially join you. When an event like this runs, we can see how many different ‘people’ are attending each LiveMeeting session. What we can’t tell is how many actual people are represented. Jessica Moss spoke to the Adelaide SQL Server User Group a few weeks ago via LiveMeeting, and LiveMeeting told us there were fewer than a dozen people attending. Really there were at least three times that number, because the people in the room with me weren’t included. I’d love to imagine that every LiveMeeting attendee represented a crowd in a room, watching a shared screen. So there’s my challenge – don’t let your LiveMeeting session represent just you. Find a way of involving other people. At the very least, you’ll be able to discuss it with them afterwards. Now stick a comment on this post to let me know how many people are going to be joining you. :) If you’re not registered for the event yet, get yourself over to the SQLPASS site and make it happen.

    Read the article

  • #MIX Day 2 Keynote: Put the Phone Down and Listen

    - by andrewbrust
    MIX day 1’s keynote was all about Windows Phone 7 (WP7). MIX day 2’s was a reminder that Microsoft has much more going on than a new mobile platform. Steven Sinofsky, Scott Guthrie, Doug Purdy and others showed us lots of other good things coming from Microsoft, mostly in the developer stack, that we certainly shouldn’t overlook. These included the forthcoming IE9, its new JavaScript compiling engine and support for HTML 5 that takes full advantage of the local PC resources, including the Graphics Processing Unit. The announcements also included important additions to ASP.NET (and one subtraction, in the form of lighter-weight ViewState technology) including almost-obsessive jQuery support. That support is so good that John Resig, creator of the jQuery project, came on stage to tell us so. Then Scott Guthrie told us that Microsoft would be contributing code to the open source jQuery project. This is not your father’s Microsoft, it would seem. But to me, the crown jewels in today’s keynote were the numerous announcements around the Open Data Protocol (OData). OData is nothing more than the protocol side of “Astoria” (now known as WCF Data Services, and until recently called ADO.NET Data Services) separated out and opened up as a platform-neutral standard. The 2009 Professional Developers Conference (PDC) was Microsoft’s vehicle for first announcing OData, as well as project “Dallas,” an Azure-based cloud platform for publishing commercial OData feeds. And we had already known about “bridges” for Astoria (and thus OData) for PHP and Java. We also knew that PowerPivot, Microsoft’s forthcoming self-service BI plug-in for Excel 2010, will consume OData feeds and then facilitate drill-down analysis of their data. And we recently found out that SQL Reporting Services reports (in the forthcoming SQL Server 2008 R2) and SharePoint 2010 lists will be consumable in OData format as well. So what was left to announce? How about OData clients for Palm webOS and Apple iPhone/Objective C? How about the release to open source of .NET’s OData client? Or the ability to publish any SQL Azure database as an OData service by simply checking a checkbox at deployment? Maybe even a Silverlight tool (code-named “Houston”) to create SQL Azure databases (and then publish them as OData) right in the browser? And what if you could get at Netflix’s entire catalog in OData format? You can – just go to http://odata.netflix.com/Catalog/ and see for yourself. Douglas Purdy, who made these announcements, said “we want OData to work on as many devices and platforms as possible.” After all the cross-platform OData announcements made in about a half year’s time, it’s hard to dispute this. When Microsoft plays the data card, and plays it well, watch out, because data programmability is the company’s heritage. I’ll be discussing OData at length in my April Redmond Review column. I wrote that column two weeks ago, and was convinced then that OData was a big deal. Today upped the ante even more. And following the Windows Phone 7 euphoria of yesterday was, I think, smart timing. The phone, if it’s successful, will be because it’s a good developer platform play. And developer platforms (as well as their creators) are most successful when they have a good data strategy. OData is very Silverlight-friendly, and that means it’s WP7-friendly too. Phone plus service-oriented data is a one-two punch. A phone platform without data would have been a phone with no signal.
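    As a rough illustration of the protocol side (the Titles entity set and the ReleaseYear property below are assumptions for the example, and the Netflix feed may no longer be live), an OData feed is queried with plain HTTP URIs and system query options such as $filter and $top:

    http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear%20ge%202010&$top=5

    The same URI works from Silverlight, PHP, Objective-C or a plain browser, which is the cross-platform point made above.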

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MATLAB work and other situations where large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical of the way that many mainframe computing use cases work. You can use commodity-based computers to create an HPC cluster, such as a Linux Beowulf cluster, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster, which you can read more about here. The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”. You can then purchase time for adding machines, and stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that node count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, including Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase in the data that mankind is producing each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic moving to smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it within the boundaries of relational database engines. The amount of data makes it nearly impossible to process in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which became one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that a lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it is beneficial in the end to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third fact I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs. BuzzNet Tags: Hadoop, Big Data, Enterprise IT

    Read the article

  • Lies, damned lies, and statistics Part 2

    - by Maria Colgan
    There was huge interest in our OOW session last year on Managing Optimizer Statistics. It seems statistics and the maintenance of them continue to baffle people. In order to help dispel the mysteries surrounding statistics management, we have created a two-part white paper series on Optimizer statistics. Part one of this series was released in November last year and describes in detail, with worked examples, the different concepts of Optimizer statistics. Today we have published part two of the series, which focuses on the best practices for gathering statistics and examines specific use cases, including the fears that surround histograms and the management of statistics for volatile tables like Global Temporary Tables. Here is a quick look at the Introduction and the start of the paper. You can find the full paper here. Happy Reading!
Introduction: The Oracle Optimizer examines all of the possible plans for a SQL statement and picks the one with the lowest cost, where cost represents the estimated resource usage for a given plan. In order for the Optimizer to accurately determine the cost for an execution plan, it must have information about all of the objects (tables and indexes) accessed in the SQL statement, as well as information about the system on which the SQL statement will be run. This necessary information is commonly referred to as Optimizer statistics. Understanding and managing Optimizer statistics is key to optimal SQL execution. Knowing when and how to gather statistics in a timely manner is critical to maintaining acceptable performance. This whitepaper is the second of a two-part series on Optimizer statistics. The first part of this series, Understanding Optimizer Statistics, focuses on the concepts of statistics and will be referenced several times in this paper as a source of additional information. This paper will discuss in detail when and how to gather statistics for the most common scenarios seen in an Oracle Database. The topics are: how to gather statistics, when to gather statistics, improving the efficiency of gathering statistics, when not to gather statistics, and gathering other types of statistics.
How to gather statistics: The preferred method for gathering statistics in Oracle is to use the supplied automatic statistics-gathering job.
Automatic statistics gathering job: The job collects statistics for all database objects which are missing statistics or have stale statistics, by running an Oracle AutoTask task during a predefined maintenance window. Oracle internally prioritizes the database objects that require statistics, so that the objects which most need updated statistics are processed first. The automatic statistics-gathering job uses the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure, which uses the same default parameter values as the other DBMS_STATS.GATHER_*_STATS procedures. The defaults are sufficient in most cases. However, it is occasionally necessary to change the default value of one of the statistics gathering parameters, which can be accomplished by using the DBMS_STATS.SET_*_PREF procedures (a minimal sketch follows below). Parameter values should be changed at the smallest scope possible, ideally on a per-object basis. You can find the full paper here. Happy Reading! +Maria Colgan
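    As a minimal sketch of that per-object advice (the SH.SALES table and the DEGREE value are illustrative assumptions, not taken from the paper), a preference can be set for a single table and statistics then gathered for just that table:

    -- Illustrative only: SH.SALES stands in for whichever table needs a non-default setting
    BEGIN
      DBMS_STATS.SET_TABLE_PREFS(ownname => 'SH',
                                 tabname => 'SALES',
                                 pname   => 'DEGREE',
                                 pvalue  => '4');
      -- Later gathers for this table pick up the preference automatically
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SH', tabname => 'SALES');
    END;
    /

    Wider scopes are available through DBMS_STATS.SET_SCHEMA_PREFS and DBMS_STATS.SET_DATABASE_PREFS, but, as the paper advises, the smallest scope that solves the problem is preferred.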

    Read the article

< Previous Page | 833 834 835 836 837 838 839 840 841 842 843 844  | Next Page >