Search Results


  • ModalPopupExtender.Show() won't fire

    - by Peter Lea
    I'm pretty new to developing for the web, so bear with me. I have a company page with multiple locations, and emails etc. at each of these addresses. The idea is to have a single modal popup to edit each type of data (one for emails, one for URLs, one for addresses, etc.). I link the ModalPopupExtender to a hidden button and then call an edit function from various places where I can populate some hidden fields and textboxes in the panel before showing it. The code executes, but it just won't show the popup; I just see a flash and can't figure out if it's my panel, my CSS, or something I don't understand about AJAX and postbacks. Things I've tried after reading various threads:

    - Disabled smart navigation in web.config
    - Moved the ToolkitScriptManager up to the master page and used a proxy in the content page
    - Set the hidden button to use style="display:none"
    - Tried links etc. instead of a hidden button

    Here's my code.

    CSS:

        .modalBackground {
            position: absolute;
            z-index: 100;
            top: 0px;
            left: 0px;
            background-color: #000;
            filter: alpha(opacity=60);
            -moz-opacity: 0.6;
            opacity: 0.6;
        }
        .modalPopup {
            background-color: #FFD;
            border-width: 3px;
            border-style: solid;
            border-color: gray;
            padding: 3px;
        }

    ASP/HTML:

        <ajaxToolkit:ModalPopupExtender runat="server" ID="mpe_email"
            BackgroundCssClass="modalBackground" PopupControlID="modal_email"
            CancelControlID="btn_cancel_email" TargetControlID="fake_btn_email" />
        <asp:Button ID="fake_btn_email" runat="server" Text="email" style="display:none;" />
        <asp:Panel ID="modal_email" runat="server" class="modalPopup" Width="500px" Height="500px">
            <asp:HiddenField ID="hf_modal_email_location_id" runat="server" Value="" />
            <asp:HiddenField ID="hf_modal_email_contact_id" runat="server" Value="" />
            <asp:HiddenField ID="hf_modal_email_comms_id" runat="server" Value="" />
            <table width="100%">
                <tr>
                    <td><asp:Label ID="lbl_mpe_email_title" runat="server" Text="Edit Email Address" /></td>
                </tr>
                <tr>
                    <td>
                        <table width="100%">
                            <tr>
                                <td width="40px"><img src="../images/email.png" height="30px" width="30px" /></td>
                                <td>
                                    <table width="100px">
                                        <tr>
                                            <td><span>Quick Ref: <asp:TextBox ID="txb_mpe_email_qref" runat="server" Text="" /></span></td>
                                        </tr>
                                        <tr>
                                            <td><span>Email Address: <asp:TextBox ID="txb_mpe_email_address_full" runat="server" Text="" /></span></td>
                                        </tr>
                                    </table>
                                </td>
                            </tr>
                        </table>
                    </td>
                </tr>
                <tr>
                    <td width="40px" align="left"><asp:Button ID="btn_cancel_email" runat="server" Text="Cancel" /></td>
                    <td align="right"><asp:Button ID="btn_save_email" runat="server" Text="Save" OnCommand="save_modal_email" /></td>
                </tr>
                <tr>
                    <td colspan="2" align="right"><asp:Label ID="lbl_mpe_email_err" runat="server" Text="" /></td>
                </tr>
            </table>
        </asp:Panel>

    C#:

        public void oloc_ocon_email_edit(object sender, RepeaterCommandEventArgs e)
        {
            switch (e.CommandName)
            {
                case "edit":
                    hf_modal_email_location_id.Value = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_location_id")).Value;
                    hf_modal_email_contact_id.Value = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_contact_id")).Value;
                    hf_modal_email_comms_id.Value = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_comms_id")).Value;
                    lbl_mpe_email_title.Text = "Edit Email Address";
                    txb_mpe_email_qref.Text = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_qref")).Value;
                    txb_mpe_email_address_full.Text = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_email_full")).Value;
                    lbl_mpe_email_err.Text = "";
                    mpe_email.Show();
                    break;
                case "new":
                    hf_modal_email_location_id.Value = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_location_id_p")).Value;
                    hf_modal_email_contact_id.Value = ((HiddenField)e.Item.FindControl("hf_oloc_ocon_emails_contact_id_p")).Value;
                    hf_modal_email_comms_id.Value = "0";
                    lbl_mpe_email_title.Text = "New Email Address";
                    txb_mpe_email_qref.Text = "";
                    txb_mpe_email_address_full.Text = "";
                    lbl_mpe_email_err.Text = "";
                    mpe_email.Show();
                    break;
            }
        }

    Stuff makes so much more sense in a desktop environment. I hope someone can point me in the right direction. Thanks.

  • SQL error C# - Parameter already defined

    - by jakesankey
    Hey there. I have a C# application that parses txt files and imports the data from them into a SQL db. I was using SQLite and am now working on porting it to SQL Server. It was working fine with SQLite, but now with SQL Server I am getting an error when it processes the files: it adds the first row of data to the db and then says "parameter @PartNumber has already been declared. Variable names must be unique within a batch or stored procedure". Here is my whole code and SQL table layout; the error comes at the last insertCommand.ExecuteNonQuery() instance at the end of the code.

    SQL table:

        CREATE TABLE Import
        (
            RowId int PRIMARY KEY IDENTITY,
            PartNumber text,
            CMMNumber text,
            Date text,
            FeatType text,
            FeatName text,
            Value text,
            Actual text,
            Nominal text,
            Dev text,
            TolMin text,
            TolPlus text,
            OutOfTol text,
            FileName text
        );

    Code:

        using System;
        using System.Data;
        using System.Data.SQLite;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Threading;
        using System.Collections.Generic;
        using System.Linq;
        using System.Data.SqlClient;

        namespace JohnDeereCMMDataParser
        {
            internal class Program
            {
                public static List<string> GetImportedFileList()
                {
                    List<string> ImportedFiles = new List<string>();
                    using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
                    {
                        connect.Open();
                        using (SqlCommand fmd = connect.CreateCommand())
                        {
                            fmd.CommandText = @"IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import')) BEGIN SELECT DISTINCT FileName FROM Import; END";
                            fmd.CommandType = CommandType.Text;
                            SqlDataReader r = fmd.ExecuteReader();
                            while (r.Read())
                            {
                                ImportedFiles.Add(Convert.ToString(r["FileName"]));
                            }
                        }
                    }
                    return ImportedFiles;
                }

                private static void Main(string[] args)
                {
                    Console.Title = "John Deere CMM Data Parser";
                    Console.WriteLine("Preparing CMM Data Parser... done");
                    Console.WriteLine("Scanning for new CMM data... done");
                    Console.ForegroundColor = ConsoleColor.Gray;
                    using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES"))
                    {
                        con.Open();
                        using (SqlCommand insertCommand = con.CreateCommand())
                        {
                            SqlCommand cmdd = con.CreateCommand();
                            string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\", "R303717*.txt*", SearchOption.AllDirectories);
                            List<string> ImportedFiles = GetImportedFileList();
                            foreach (string file in files.Except(ImportedFiles))
                            {
                                string FileNameExt1 = Path.GetFileName(file);
                                cmdd.CommandText = @"
                                    IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                                    WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'Import'))
                                    BEGIN
                                        SELECT COUNT(*) FROM Import WHERE FileName = @FileExt;
                                    END";
                                cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1));
                                int count = Convert.ToInt32(cmdd.ExecuteScalar());
                                con.Close();
                                con.Open();
                                if (count == 0)
                                {
                                    Console.WriteLine("Parsing CMM data for SQL database... Please wait.");
                                    insertCommand.CommandText = @"
                                        INSERT INTO Import (FeatType, FeatName, Value, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName)
                                        VALUES (@FeatType, @FeatName, @Value, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);";
                                    insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Value", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal));
                                    insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal));
                                    string FileNameExt = Path.GetFullPath(file);
                                    string RNumber = Path.GetFileNameWithoutExtension(file);
                                    string RNumberE = RNumber.Split('_')[0];
                                    string RNumberD = RNumber.Split('_')[1];
                                    string RNumberDate = RNumber.Split('_')[2];
                                    DateTime dateTime = DateTime.ParseExact(RNumberDate, "yyyyMMdd", Thread.CurrentThread.CurrentCulture);
                                    string cmmDate = dateTime.ToString("dd-MMM-yyyy");
                                    string[] lines = File.ReadAllLines(file);
                                    bool parse = false;
                                    foreach (string tmpLine in lines)
                                    {
                                        string line = tmpLine.Trim();
                                        if (!parse && line.StartsWith("Feat. Type,"))
                                        {
                                            parse = true;
                                            continue;
                                        }
                                        if (!parse || string.IsNullOrEmpty(line))
                                        {
                                            continue;
                                        }
                                        Console.WriteLine(tmpLine);
                                        foreach (SqlParameter parameter in insertCommand.Parameters)
                                        {
                                            parameter.Value = null;
                                        }
                                        string[] values = line.Split(new[] { ',' });
                                        for (int i = 0; i < values.Length - 1; i++)
                                        {
                                            SqlParameter param = insertCommand.Parameters[i];
                                            if (param.DbType == DbType.Decimal)
                                            {
                                                decimal value;
                                                param.Value = decimal.TryParse(values[i], out value) ? value : 0;
                                            }
                                            else
                                            {
                                                param.Value = values[i];
                                            }
                                        }
                                        insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE));
                                        insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD));
                                        insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
                                        insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));
                                        // insertCommand.ExecuteNonQuery();
                                    }
                                }
                            }
                            Console.WriteLine("CMM data successfully imported to SQL database...");
                        }
                        con.Close();
                    }
                }
            }
        }

  • MySQL Group By and bracketing according to age.

    - by text
    Is it possible for MySQL to handle grouping of data according to age brackets? In my users table, the age value is the user's actual age. I want to group users according to age bracket, for example: ages below 1 year as age1, 1-4 yrs as age2, 5-9 yrs as age3, and so on.
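
    A minimal sketch of one way to do this with a CASE expression (assuming a users table with an age column; the table and column names are placeholders for the real schema):

        SELECT CASE
                   WHEN age < 1 THEN 'age1'
                   WHEN age BETWEEN 1 AND 4 THEN 'age2'
                   WHEN age BETWEEN 5 AND 9 THEN 'age3'
                   ELSE 'age4'
               END AS age_bracket,        -- bracket label derived from the raw age
               COUNT(*) AS total          -- number of users in each bracket
        FROM users
        GROUP BY age_bracket
        ORDER BY age_bracket;

    MySQL lets GROUP BY reference a select-list alias, so the CASE expression only needs to be written once.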

  • How do I make a div stretch vertically downward to align with the footer?

    - by text
    I am new to web design using tableless layouts and I'm having problems positioning some elements on my page. Here's the sample HTML: http://christianruado.comuf.com/sample.html As you can see from the screenshots, I want my right div to be stretched vertically down to the same level as my footer, and I want to position my bottom element at the lowest part of the right container.

        #right {
            float: right;
            width: 19%;
            background: #FF3300;
            margin-left: 2px;
            height: 100%;
        }
        #right .bottom {
            bottom: 0;
            display: block;
            background-color: #FFCCFF;
            height: 30px;
        }

  • CSS: aligning a side area down to the position of the footer

    - by text
    Hi, I started designing a website using a tableless layout. I was able to create it and am happy with the result, but at some point I ran into problems positioning some elements. See the sample illustration here: http://christianruado.comuf.com/images/demo.jpg The right side, which is green, contains two elements, a top one and a bottom one. My problem is how to set the height of the green container so that it always aligns with the footer. I want the bottom part to be aligned with the footer regardless of the content of the center container.

  • MySQL subquery and bracketing

    - by text
    Here are my tables (field : sample value):

        respondents
            respondentid : 1
            age          : 2
            gender       : male

        survey_questions
            id       : 1
            question : Q1
            answer   : sample answer

        answers
            respondentid : 1
            question     : Q1
            answer       : 1   -- id of survey question

    I want to display all respondents who answered a certain survey, display all answers, total all the answers, and group them according to age bracket. I tried using this query:

        SELECT res.Age, res.Gender, answer.id, answer.respondentid,
               SUM(CASE WHEN res.Gender='Male' THEN 1 ELSE 0 END) AS males,
               SUM(CASE WHEN res.Gender='Female' THEN 1 ELSE 0 END) AS females,
               CASE
                   WHEN res.Age < 1 THEN 'age1'
                   WHEN res.Age BETWEEN 1 AND 4 THEN 'age2'
                   WHEN res.Age BETWEEN 4 AND 9 THEN 'age3'
                   WHEN res.Age BETWEEN 10 AND 14 THEN 'age4'
                   WHEN res.Age BETWEEN 15 AND 19 THEN 'age5'
                   WHEN res.Age BETWEEN 20 AND 29 THEN 'age6'
                   WHEN res.Age BETWEEN 30 AND 39 THEN 'age7'
                   WHEN res.Age BETWEEN 40 AND 49 THEN 'age8'
                   ELSE 'age9'
               END AS ageband
        FROM Respondents AS res
        INNER JOIN Answers AS answer ON answer.respondentid = res.respondentid
        INNER JOIN Questions AS question ON answer.Answer = question.id
        WHERE answer.Question = 'Q1'
        GROUP BY ageband
        ORDER BY res.Age ASC

    I was able to get the data, but the listing of all answers is not present. Do I have to nest a SELECT subquery inside my current SELECT statement to show the answers? I want to produce something like this:

        ex: # of respondents is 3, ages 2, 3 and 6
        Question: what are your favorite subjects?

        Ages 1-4:
            subject 1: 1
            subject 2: 2
            subject 3: 2
            total respondents for ages 1-4: 2

        Ages 5-10:
            subject 1: 1
            subject 2: 1
            subject 3: 0
            total respondents for ages 5-10: 1
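
    One possible direction, sketched against the schema above and untested: group by the answer as well as the age band, so each answer keeps its own row within a bracket.

        SELECT CASE
                   WHEN res.Age < 1 THEN 'age1'
                   WHEN res.Age BETWEEN 1 AND 4 THEN 'age2'
                   WHEN res.Age BETWEEN 5 AND 9 THEN 'age3'
                   ELSE 'age4'
               END AS ageband,
               answer.Answer AS answer_id,                       -- id of the survey question chosen
               COUNT(DISTINCT res.respondentid) AS respondents   -- how many respondents picked it
        FROM Respondents AS res
        INNER JOIN Answers AS answer ON answer.respondentid = res.respondentid
        WHERE answer.Question = 'Q1'
        GROUP BY ageband, answer_id
        ORDER BY ageband, answer_id;

    Grouping by ageband alone collapses all of a bracket's answers into one row, which is why the per-answer listing disappears; adding the answer to the GROUP BY keeps one row per answer per bracket.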

  • Appending images centered inside a div using jQuery

    - by text
    I am using the crossSlide jQuery plugin for my slideshow. My container is bigger than the images inside it, so the images tend to be positioned to the left. Is there a way to position them centered in the container? Here's a sample site using crossSlide, since I don't have a website to upload my sample page to: http://www.hashbangcode.com/blog/crossslide-jquery-plugin-test-348.html

  • Referencing ASP.NET textbox data in JavaScript

    - by GoldenEarring
    I'm interested in making an interactive 3D pie chart using JavaScript and ASP.NET controls for a webpage. Essentially, I want to make an interactive version of the chart here: https://google-developers.appspot.com/chart/interactive/docs/gallery/piechart#3D I want to have 5 ASP.NET textboxes where the user enters data and then submits it, and the chart adjusts according to what the user enters. I understand using ASP.NET controls with JS is probably not the most effective way to go about it, but I would really appreciate it if someone could share how this would be possible. I really don't know where to begin. Thanks for any help!

        <%@ Page Language="C#" %>

        <!DOCTYPE html>

        <script runat="server">
            void btn1_Click(object sender, EventArgs e)
            {
                double s = 0.0;
                double b = 0;
                double g = 0.0f;
                double c = 0.0f;
                double h = 0.0f;
                s = double.Parse(txtWork.Text);
                b = double.Parse(txtEat.Text);
                g = double.Parse(txtCommute.Text);
                c = double.Parse(txtWatchTV.Text);
                h = double.Parse(txtSleep.Text);
                double total = s + b + g + c + h;
                if (total != 24)
                {
                    lblError.Text = "Warning! A day has 24 hours";
                }
                if (total == 24)
                {
                    lblError.Text = string.Empty;
                }
            }
        </script>

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title></title>
            <script type="text/javascript" src="https://www.google.com/jsapi"></script>
            <script type="text/javascript">
                google.load("visualization", "1", { packages: ["corechart"] });
                google.setOnLoadCallback(drawChart);
                function drawChart() {
                    var data = google.visualization.arrayToDataTable([
                        ['Task', 'Hours per Day'],
                        ['Work', 11],
                        ['Eat', 2],
                        ['Commute', 2],
                        ['Watch TV', 2],
                        ['Sleep', 7]
                    ]);
                    var options = {
                        title: 'My Daily Activities',
                        is3D: true
                    };
                    var chart = new google.visualization.PieChart(document.getElementById('piechart_3d'));
                    chart.draw(data, options);
                }

                var data = new google.visualization.DataTable();
                var txtWork = document.getElementById('<%=txtWork.ClientID%>'),
                    txtEat = document.getElementById('<%=txtEat.ClientID%>'),
                    txtCommute = document.getElementById('<%=txtCommute.ClientID%>'),
                    txtWatchTV = document.getElementById('<%=txtWatchTV.ClientID%>'),
                    txtSleep = document.getElementById('<%=txtSleep.ClientID%>');
                var workvalue = parseInt(txtWork, 10),
                    eatvalue = parseInt(txtEat, 10),
                    commutevalue = parseInt(txtCommute, 10),
                    watchtvvalue = parseInt(txtWatchTV, 10),
                    sleepvalue = parseInt(txtSleep, 10);
                // Declare columns
                data.addColumn('string', 'Task');
                data.addColumn('Number', 'Hours per day');
                // Add data.
                data.addRows([
                    ['Work', workvalue],
                    ['Eat', eatvalue],
                    ['Commute', commutevalue],
                    ['Watch TV', watchtvvalue],
                    ['Sleep', sleepvalue]
                ]);
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
                <div id="piechart_3d" style="width: 900px; height: 500px;"></div>
                <asp:Label ID="lblError" runat="server" Font-Size="X-Large" Font-Bold="true" />
                <table>
                    <tr><td>Work:</td><td><asp:TextBox ID="txtWork" Text="11" runat="server" /></td></tr>
                    <tr><td>Eat:</td><td><asp:TextBox ID="txtEat" Text="2" runat="server" /></td></tr>
                    <tr><td>Commute:</td><td><asp:TextBox ID="txtCommute" Text="2" runat="server" /></td></tr>
                    <tr><td>Watch TV:</td><td><asp:TextBox ID="txtWatchTV" Text="2" runat="server" /></td></tr>
                    <tr><td>Sleep:</td><td><asp:TextBox ID="txtSleep" Text="7" runat="server" /></td></tr>
                </table>
                <br /><br />
                <asp:Button ID="btn1" Text="Draw 3D PieChart" runat="server" OnClick="btn1_Click" />
            </form>
        </body>
        </html>

  • How are grouping and totaling done across three tables using JOIN?

    - by text
    Here are my tables (field : sample value):

        respondents
            respondentid : 1
            age          : 2
            gender       : male

        survey_questions
            id       : 1
            question : Q1
            answer   : sample answer

        answers
            respondentid : 1
            question     : Q1
            answer       : 1   -- id of survey question

    I want to display all respondents who answered a certain survey, display all answers, total all the answers, and group them according to age bracket. I tried using this query:

        $sql = "SELECT res.Age, res.Gender, answer.id, answer.respondentid,
                SUM(CASE WHEN res.Gender='Male' THEN 1 else 0 END) AS males,
                SUM(CASE WHEN res.Gender='Female' THEN 1 else 0 END) AS females,
                CASE
                    WHEN res.Age < 1 THEN 'age1'
                    WHEN res.Age BETWEEN 1 AND 4 THEN 'age2'
                    WHEN res.Age BETWEEN 4 AND 9 THEN 'age3'
                    WHEN res.Age BETWEEN 10 AND 14 THEN 'age4'
                    WHEN res.Age BETWEEN 15 AND 19 THEN 'age5'
                    WHEN res.Age BETWEEN 20 AND 29 THEN 'age6'
                    WHEN res.Age BETWEEN 30 AND 39 THEN 'age7'
                    WHEN res.Age BETWEEN 40 AND 49 THEN 'age8'
                    ELSE 'age9'
                END AS ageband
                FROM Respondents AS res
                INNER JOIN Answers AS answer ON answer.respondentid = res.respondentid
                INNER JOIN Questions AS question ON answer.Answer = question.id
                WHERE answer.Question = 'Q1'
                GROUP BY ageband
                ORDER BY res.Age ASC";

    I was able to get the data, but the listing of all answers is not present. What's wrong with my query? I want to produce something like this:

        ex: # of respondents is 3, ages 2, 3 and 6
        Question: what are your favorite subjects?

        Ages 1-4:
            subject 1: 1
            subject 2: 2
            subject 3: 2
            total respondents for ages 1-4: 2

        Ages 5-10:
            subject 1: 1
            subject 2: 1
            subject 3: 0
            total respondents for ages 5-10: 1

  • How do I update a MySQL database when posting a form without using hidden inputs?

    - by user1322707
    I have a "members" table in MySQL which has approximately 200 field names. Each user is given up to 7 website templates with 26 different values they can insert unique data into for each template. Each time they create a template, they post the form with the 26 associated values. These 26 field names are the same for each template, but are differentiated by an integer at the end, i.e. _1, _2, ..., _7. In the form submitting the template, I have a variable called $pid_sum which is inserted at the end of each field name to identify which template they are creating. For instance:

        <form method='post' action='create.template.php'>
            <input type='hidden' name='address_1' value='address_1'>
            <input type='hidden' name='city_1' value='city_1'>
            <input type='hidden' name='state_1' value='state_1'>
            etc...
            <input type='hidden' name='address_1' value='address_2'>
            <input type='hidden' name='city_1' value='city_2'>
            <input type='hidden' name='state_1' value='state_2'>
            etc...
            <input type='hidden' name='address_2' value='address_3'>
            <input type='hidden' name='city_2' value='city_3'>
            <input type='hidden' name='state_2' value='state_3'>
            etc...
            <input type='hidden' name='address_2' value='address_4'>
            <input type='hidden' name='city_2' value='city_4'>
            <input type='hidden' name='state_2' value='state_4'>
            etc...
            <input type='hidden' name='address_2' value='address_5'>
            <input type='hidden' name='city_2' value='city_5'>
            <input type='hidden' name='state_2' value='state_5'>
            etc...
            <input type='hidden' name='address_2' value='address_6'>
            <input type='hidden' name='city_2' value='city_6'>
            <input type='hidden' name='state_2' value='state_6'>
            etc...
            <input type='hidden' name='address_2' value='address_7'>
            <input type='hidden' name='city_2' value='city_7'>
            <input type='hidden' name='state_2' value='state_7'>
            etc...
            <!-- Visible form the user fills out when creating their template
                 ($pid_sum converts into an integer 1-7, depending on which template they are filling out) -->
            <input type='' name='address_$pid_sum'>
            <input type='' name='city_$pid_sum'>
            <input type='' name='state_$pid_sum'>
            etc...
            <input type='submit' name='save_button' id='save_button' value='Save Settings'>
        </form>

    Each of these needs to be updated in a hidden input tag with each form post, or the values in the database table (which aren't submitted with the form) get deleted. So I am forced to insert approximately 175 hidden input tags with every creation of 26 new values for one of the 7 templates. Is there a PHP function or command that would enable me to update all these values without inserting 175 hidden input tags within each form post?

    Here is the create.template.php file which the form action calls:

        <?php
        $q = new Cdb;
        $t->set_file("content", "create_template.html");
        $q2 = new CDB;
        $query = "SELECT menu_category FROM menus WHERE link='create.template.ag.php'";
        $q2->query($query);
        $toall = 0;
        if ($q2->nf() < 1) {
            $toall = 1;
        }
        while ($q2->next_record()) {
            if ($q2->f('menu_category') == "main") {
                $toall = 1;
            }
        }
        if ($toall == 0) {
            get_logged_info();
            $q2 = new CDB;
            $query = "SELECT id FROM menus WHERE link='create_template.php'";
            $q2->query($query);
            $q2->next_record();
            $query = "SELECT membership_id FROM menu_permissions WHERE menu_item='" . $q2->f("id") . "'";
            $q2->query($query);
            while ($q2->next_record()) {
                $permissions[] = $q2->f("membership_id");
            }
            if (count($permissions) > 0) {
                $error = '<center><font color="red"><b>You do not have access to this area!<br><br>Upgrade your membership level!</b></font></center>';
                foreach ($permissions as $value) {
                    if ($value == $q->f("membership_id")) {
                        $error = '';
                        break;
                    }
                }
                if ($error != "") {
                    die("$error");
                }
            }
        }
        $member_id = $q->f("id");
        $pid = $q->f("pid");
        $pid_sum = $pid + 1;
        $first_name = $q->f("first_name");
        $last_name = $q->f("last_name");
        $email = $q->f("email");
        echo " // THIS IS WHERE THE HTML FORM GOES ";
        replace_tags_t($q->f("id"), $t);
        ?>
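
    For what it's worth, a SQL UPDATE only touches the columns named in its SET clause, so one hedged alternative to echoing every stored value back through hidden inputs is to update just the 26 columns that belong to the template being posted. A sketch only: the table and column names follow the question, the values and id are placeholders, and in real code the integer suffix would be whitelisted server-side and the values bound through prepared statements rather than interpolated.

        UPDATE members
        SET address_3 = 'new address',   -- only the columns for template 3 are listed
            city_3    = 'new city',
            state_3   = 'new state'
            -- ... the remaining columns for template 3 ...
        WHERE id = 42;                   -- placeholder member id

    Columns not listed in the SET clause keep their existing values, so nothing has to be re-submitted just to preserve it.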

  • MySQL Joining three tables

    - by text
    I am doing a query with three tables; the problem is that one table has many occurrences of the id of another. Sample data:

        users
            id : 1

        answers
            id : 1, user_answer : 1
            id : 1, user_answer : 2
            id : 1, user_answer : 3

        questions
            id : 1, answers : answer description
            id : 2, answers : answer description
            id : 3, answers : answer description

    How can I get all the user information and all the answers with their descriptions? I used GROUP BY user.id, but it only returns one answer. I want to return something like this (listing all of a user's answers):

        Name       Q1          Q2
        USERNAME   ans1,ans2   ans1,ans2

    i.e. a comma-separated description of the answers.
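
    A hedged sketch using MySQL's GROUP_CONCAT, which folds the joined answer descriptions into one comma-separated string per user (the table and column names follow the sample data above and may need adjusting):

        SELECT u.id,
               GROUP_CONCAT(q.answers ORDER BY q.id SEPARATOR ', ') AS all_answers
        FROM users AS u
        INNER JOIN answers AS a ON a.id = u.id            -- many answer rows per user
        INNER JOIN questions AS q ON q.id = a.user_answer -- description of each answer
        GROUP BY u.id;

    GROUP BY u.id keeps one row per user, while GROUP_CONCAT preserves the per-answer detail that a plain GROUP BY hides.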

  • Can this query be corrected, or is a different table structure needed? (database dumps provided)

    - by sandeepan
    This is a bit lengthy, but I have provided sufficient details and kept things very clear. Please see if you can help. (I will surely accept an answer if it solves my problem.) I am sure a person experienced with this can help, or can suggest how to decide the table structure.

    About the system:

    - There are tutors who create classes.
    - A tags-based search approach is being followed.
    - Tag relations are created/edited when new tutors register/edit profile data and when tutors create classes (this makes tutors and classes searchable). For simplicity, let us consider only tutor name and class name as the fields which are matched against search keywords.

    In this example, I am considering:

    - tutor "Sandeepan Nath" has created a class called "first class"
    - tutor "Bob Cratchit" has created a class called "new class"

    Desired search results: AND logic is to be applied on the search keywords and matched against class and tutor data (class name + tutor name). In other words, all those classes should be shown such that all the search terms are present in the class name or its tutor name. Example to be clear:

    - Searching "first class" returns the class with id_wc = 1. Working.
    - Searching "Sandeepan class" should also return the class with id_wc = 1. Not working in System 2.

    The problem with profile editing and searching, in one sentence: I am facing a conflict between the ease of profile edition (edition of tag relations when tutor profiles are edited) and the ease of the search logic. In the beginning, we had one table structure and search was easy, but the tag edition logic was very clumsy and unmaintainable (check System 1 in the section below). So we created separate tag relations tables to make profile edition simpler, but search has become difficult. Please dump the tables so that you can run the search query I have given below and see the results.

    System 1 (previous system - search easy, profile edition difficult): only one table, called all_tag_relations, had all the tag relations. The tags table below is common to both systems 1 and 2.

        CREATE TABLE IF NOT EXISTS `all_tag_relations` (
            `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
            `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
            `id_tutor` int(10) DEFAULT NULL,
            `id_wc` int(10) unsigned DEFAULT NULL,
            PRIMARY KEY (`id_tag_rel`),
            KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
            KEY `id_wc` (`id_wc`),
            KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `all_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 1, 1, NULL), (2, 2, 1, NULL), (3, 1, 1, 1), (4, 2, 1, 1),
        (5, 3, 1, 1), (6, 4, 1, 1), (7, 6, 2, NULL), (8, 7, 2, NULL),
        (9, 6, 2, 2), (10, 7, 2, 2), (11, 5, 2, 2), (12, 4, 2, 2);

        CREATE TABLE IF NOT EXISTS `tags` (
            `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `tag` varchar(255) DEFAULT NULL,
            PRIMARY KEY (`id_tag`),
            UNIQUE KEY `tag` (`tag`),
            KEY `id_tag` (`id_tag`),
            KEY `tag_2` (`tag`),
            KEY `tag_3` (`tag`),
            KEY `tag_4` (`tag`),
            FULLTEXT KEY `tag_5` (`tag`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8;

        INSERT INTO `tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

    Please note that for every class, the tag relations of its tutor have to be duplicated. For example, for the class with id_wc = 1, the tag relation records with id_tag_rel = 3 and 4 are actually extras if you compare them with the tag relation records with id_tag_rel = 1 and 2.

    System 2 (present system - profile edition easy, search difficult): two separate tables, tutors_tag_relations and webclasses_tag_relations, have the corresponding tag relations data (please dump into a separate database):

        CREATE TABLE IF NOT EXISTS `tutors_tag_relations` (
            `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
            `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
            `id_tutor` int(10) DEFAULT NULL,
            PRIMARY KEY (`id_tag_rel`),
            KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
            KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `tutors_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`) VALUES
        (1, 1, 1), (2, 2, 1), (3, 6, 2), (4, 7, 2);

        CREATE TABLE IF NOT EXISTS `webclasses_tag_relations` (
            `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
            `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
            `id_tutor` int(10) DEFAULT NULL,
            `id_wc` int(10) DEFAULT NULL,
            PRIMARY KEY (`id_tag_rel`),
            KEY `webclasses_Tag_Relations_FKIndex1` (`id_tag`),
            KEY `id_wc` (`id_wc`),
            KEY `id_tag` (`id_tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `webclasses_tag_relations` (`id_tag_rel`, `id_tag`, `id_tutor`, `id_wc`) VALUES
        (1, 3, 1, 1), (2, 4, 1, 1), (3, 5, 2, 2), (4, 4, 2, 2);

        CREATE TABLE IF NOT EXISTS `tags` (
            `id_tag` int(10) unsigned NOT NULL AUTO_INCREMENT,
            `tag` varchar(255) DEFAULT NULL,
            PRIMARY KEY (`id_tag`),
            UNIQUE KEY `tag` (`tag`),
            KEY `id_tag` (`id_tag`),
            KEY `tag_2` (`tag`),
            KEY `tag_3` (`tag`),
            KEY `tag_4` (`tag`),
            FULLTEXT KEY `tag_5` (`tag`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8;

        INSERT INTO `tags` (`id_tag`, `tag`) VALUES
        (1, 'Sandeepan'), (2, 'Nath'), (3, 'first'), (4, 'class'),
        (5, 'new'), (6, 'Bob'), (7, 'Cratchit');

        CREATE TABLE IF NOT EXISTS `all_tag_relations` (
            `id_tag_rel` int(10) NOT NULL AUTO_INCREMENT,
            `id_tag` int(10) unsigned NOT NULL DEFAULT '0',
            `id_tutor` int(10) DEFAULT NULL,
            `id_wc` int(10) unsigned DEFAULT NULL,
            PRIMARY KEY (`id_tag_rel`),
            KEY `All_Tag_Relations_FKIndex1` (`id_tag`),
            KEY `id_wc` (`id_wc`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        insert into All_Tag_Relations select NULL, id_tag, id_tutor, NULL from Tutors_Tag_Relations;
        insert into All_Tag_Relations select NULL, id_tag, id_tutor, id_wc from Webclasses_Tag_Relations;

    Here you can see how easily a tutor's first name can be edited, in only one place. But search has become really difficult, so, on being advised to use a temporary table, I am creating one at every search request, dumping all the necessary data into it and then searching from it. I am creating this all_tag_relations table at search run time, just dumping all the data from the two tables tutors_tag_relations and webclasses_tag_relations. But I am still not able to get classes if I search with the tutor name.

    This is the query which searches "first class". Running it on both systems shows correct results (returns the class with id_wc = 1):

        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 3)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 3 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    But searching for "Sandeepan class" works only with the 1st system. Here is the query which searches "Sandeepan class":

        SELECT wtagrels.id_wc,
               SUM(DISTINCT(wtagrels.id_tag = 1)) AS key_1_total_matches,
               SUM(DISTINCT(wtagrels.id_tag = 4)) AS key_2_total_matches
        FROM all_tag_relations AS wtagrels
        WHERE (wtagrels.id_tag = 1 OR wtagrels.id_tag = 4)
        GROUP BY wtagrels.id_wc
        HAVING key_1_total_matches = 1 AND key_2_total_matches = 1
        LIMIT 0, 20

    Can anybody alter this query and somehow do a proper join or something to get correct results? That would solve my problem in a nice way. As you can figure out, the reason why it does not work in System 2 is that in System 1, for every class, one additional tag relation linking class and tutor name is present (e.g. for class "first class", the records with id_tag_rel 3 and 4), which returns the class on searching with the tutor name. So you see the trade-off between the search and profile edition difficulty with the two systems. How do I overcome both? I have to reach a conclusion soon.

    So far my reasoning is that it is definitely not good from a code maintainability point of view to follow the single tag relations table structure of System 1, because in a real system, while editing a field like "tutor qualifications", there can be as many records in the tag relations table as there are words in the qualification of a tutor (one word in a field = one tag relation). Now suppose a tutor has 100 classes. When he edits his qualification, all the tag relation rows corresponding to him are deleted, and then as many copies are to be created (as per the new qualification data) as there are classes. This becomes particularly difficult if more searchable fields are added later. The code cannot be robust.

    Is the best solution to follow System 2 (edition has to be in one table - no extra work for each and every class) and somehow re-create the all_tag_relations table like System 1 (from the tables tutors_tag_relations and webclasses_tag_relations), creating the extra tutor tag relations for each and every class by a tutor (which is currently missing in System 2's temporary all_tag_relations table)? That would be a time-consuming logic script. I doubt that the table can be recreated without resorting to a PHP script (MySQL alone cannot do that). But the problem is that running all this at search time will definitely make search slow. So how do such systems work? How are such situations handled? I thought about running a cron which initiates that PHP script, say, every 1 minute, and replaces the existing all_tag_relations table as per the new tag relations from tutors_tag_relations and webclasses_tag_relations (replaces means: creates a new table, deletes the original and renames the new one as all_tag_relations, otherwise search won't work during that period - or is there any better way to do that?). Anyway, the result would be that any changes by tutors would reflect in search in the next 1 minute and not immediately. An alternative would be to initiate that PHP script every time a tutor edits his profile. But here again, since many users may edit their profiles concurrently, will the creation of so many tables be a burden, and can MySQL make the server slow?

    Any help would be appreciated, and a working solution will be accepted as the answer. Thanks, Sandeepan
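
    One possible direction, sketched against the System 2 dumps above (untested, and offered as a starting point rather than the accepted answer): the rows System 1 has and System 2's temporary table lacks are exactly one copy of each tutor tag relation per class owned by that tutor, and a join can generate those missing rows at rebuild time without any PHP loop:

        INSERT INTO all_tag_relations (id_tag_rel, id_tag, id_tutor, id_wc)
        SELECT NULL, ttr.id_tag, ttr.id_tutor, wc.id_wc
        FROM tutors_tag_relations AS ttr
        INNER JOIN (
            SELECT DISTINCT id_tutor, id_wc      -- one row per (tutor, class) pair
            FROM webclasses_tag_relations
        ) AS wc ON wc.id_tutor = ttr.id_tutor;

    Run after the two existing INSERT ... SELECT statements, this adds each tutor's name tags against every id_wc that tutor owns, which is what the "Sandeepan class" query needs in order to match on System 2's temporary table as well.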

  • Translating with Google Translate without API and C# Code

    - by Rick Strahl
    Some time back I created a database-driven ASP.NET Resource Provider along with some tools that make it easy to edit ASP.NET resources interactively in a Web application. One of the small helper features of the interactive resource admin tool is the ability to do simple translations using both Google Translate and Babelfish. Here's what this looks like in the resource administration form: when a resource is displayed, the user can click a Translate button and it will show the current resource text and then let you set the source and target languages to translate. The Go button fires the translation for both Google and Babelfish and displays them; pressing Use then changes the language of the resource to the target language and sets the resource value to the newly translated value. It's a nice and quick way to get a translation going.

    Ch… Ch… Changes

    Originally, both implementations basically did some screen scraping of the interactive Web sites and retrieved translated text out of the result HTML. Screen scraping is always kind of an iffy proposition, as content can be changed easily, but surprisingly that code worked for many years without fail. Recently, however, Google at least changed their input pages to use AJAX callbacks, and the page updates no longer worked the same way. End result: the Google translate code was broken.

    Now, Google does have an official API that you can access, but the API is being deprecated and you actually need to have an API key. Since I have public samples that people can download, the API key is an issue if I want people to have the samples work out of the box - the only way I could even do this is by sharing my API key (not allowed). However, after a bit of spelunking and playing around with the public site, I found that Google's interactive translate page actually makes callbacks using plain public access without an API key. By intercepting some of those AJAX calls and calling them directly from code, I was able to get translation back up and working with minimal fuss, by parsing out the JSON these AJAX calls return.

    Warning: this is hacky code, but after a fair bit of testing I found it to work very well with all sorts of languages and accented and escaped text etc., as long as you stick to small blocks of translated text. I thought I'd share it in case anybody else had been relying on a screen scraping mechanism like I did and needed a non-API based replacement. Here's the code:

        /// <summary>
        /// Translates a string into another language using Google's translate API JSON calls.
        /// <seealso>Class TranslationServices</seealso>
        /// </summary>
        /// <param name="text">Text to translate. Should be a single word or sentence.</param>
        /// <param name="fromCulture">
        /// Two letter culture (en of en-us, fr of fr-ca, de of de-ch)
        /// </param>
        /// <param name="toCulture">
        /// Two letter culture (as for fromCulture)
        /// </param>
        public string TranslateGoogle(string text, string fromCulture, string toCulture)
        {
            fromCulture = fromCulture.ToLower();
            toCulture = toCulture.ToLower();

            // normalize the culture in case something like en-us was passed
            // retrieve only en since Google doesn't support sub-locales
            string[] tokens = fromCulture.Split('-');
            if (tokens.Length > 1)
                fromCulture = tokens[0];

            // normalize toCulture
            tokens = toCulture.Split('-');
            if (tokens.Length > 1)
                toCulture = tokens[0];

            string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",
                                       HttpUtility.UrlEncode(text), fromCulture, toCulture);

            // Retrieve translation with HTTP GET call
            string html = null;
            try
            {
                WebClient web = new WebClient();

                // MUST add a known browser user agent or else response encoding doesn't return UTF-8 (WTF Google?)
                web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
                web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");

                // Make sure we have response encoding set to UTF-8
                web.Encoding = Encoding.UTF8;
                html = web.DownloadString(url);
            }
            catch (Exception ex)
            {
                this.ErrorMessage = Westwind.Globalization.Resources.Resources.ConnectionFailed + ": " +
                                    ex.GetBaseException().Message;
                return null;
            }

            // Extract out trans":"...[Extracted]...","from the JSON string
            string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;

            if (string.IsNullOrEmpty(result))
            {
                this.ErrorMessage = Westwind.Globalization.Resources.Resources.InvalidSearchResult;
                return null;
            }

            //return WebUtils.DecodeJsString(result);

            // Result is a JavaScript string so we need to deserialize it properly
            JavaScriptSerializer ser = new JavaScriptSerializer();
            return ser.Deserialize(result, typeof(string)) as string;
        }

    To use the code is straightforward enough - simply provide a string to translate and a pair of two letter source and target languages:

        string result = service.TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "de");
        TestContext.WriteLine(result);

    How it works

    The code to translate is fairly straightforward. It basically uses the URL I snagged from the Google Translate Web page, slightly changed to return a JSON result (&client=j) instead of the funky nested PHP-style JSON array that the default returns. The JSON result returned looks like this:

        {"sentences":[{"trans":"Das Leben ist großartig und man wird verwöhnt, wenn es weiter und weiter und weiter geht","orig":"Life is great and one is spoiled when it goes on and on and on","translit":"","src_translit":""}],"src":"en","server_time":24}

    I use WebClient to make an HTTP GET call to retrieve the JSON data and strip out the part of the full JSON response that contains the actual translated text. Since this is a JSON response, I need to deserialize the JSON string in case it's encoded (for upper/lower ASCII chars or quotes etc.).

    A couple of odd things to note in this code: first, a valid user agent string must be passed (or at least one starting with a common browser identification - I use Mozilla/5.0). Without this, Google doesn't encode the result with UTF-8, but instead uses an ISO encoding that .NET can't easily decode. Google seems to ignore the character set header and use the user agent instead, which is odd to say the least. The other is that the code returns a full JSON response. Rather than use the full response and decode it into a custom type that matches Google's result object, I just strip out the translated text. Yeah, I know that's hacky, but it avoids an extra type and firing up the JavaScript deserializer. My internal version uses a small DecodeJsString() method to decode JavaScript without the overhead of a full JSON parser. It's obviously not rocket science, but as mentioned above, what's nice about it is that it works without a Google API key. I can't vouch for how many translates you can do before there are cut-offs, but in my limited testing, running a few stress tests on a Web server under load, I didn't run into any problems.

    Limitations

    There are some restrictions with this: it only works on single words or single sentences - multiple sentences (delimited by .) are cut off at the ".". There is also a length limitation, which appears to kick in at around 220 characters or so. While that may not sound like much, for typical word or phrase translations this is plenty of length. Use with a grain of salt - Google seems to be trying to limit their exposure to usage of the Translate APIs, so this code might break in the future, but for now at least it works. FWIW, I also found that Google's translation is not as good as Babelfish, especially for contextual content like sentences. Google is faster, but Babelfish tends to give better translations. This is why in my translation tool I show both Google and Babelfish values retrieved. You can check out the code for this in the West Wind Web Toolkit's TranslationService.cs file, which contains both the Google and Babelfish translation code pieces. Ironically, the Babelfish code has been working forever using screen scraping and continues to work just fine today. I think it's a good idea to have multiple translation providers in case one is down or changes its format, hence the dual display in my translation form above. I hope this has been helpful to some of you - I've actually had many small uses for this code in a number of applications, and it's sweet to have a simple routine that performs these operations for me easily.

    Resources

    - Live Localization Sample: localization Resource Provider administration form that includes options to translate text using Google and Babelfish interactively.
    - TranslationService.cs: the full source code in the West Wind Web Toolkit's Globalization library that contains the translation code.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in CSharp, HTTP.

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan.

    The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users:

    - allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database
    - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing

    Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future.

    Implementation

    The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data, and how to get started.

    Connecting to the database

    A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function:

        var nosql = require("mysql-js");
        var dbProperties = {
            "implementation" : "ndb",
            "database" : "test"
        };
        nosql.openSession(dbProperties, null, onSession);

    The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects.

    Reading data

    The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema:

        create table employee (
            id int not null primary key,
            name varchar(32),
            salary float
        ) ENGINE=ndbcluster;

    Since the primary key is a number, you can provide the key as a number to the find function:

        var onSession = function(err, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            session.find('employee', 0, onData);
        };

        var onData = function(err, data) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            console.log('Found: ', JSON.stringify(data));
            // ... use data in application
        };

    If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses by specifying an annotation, and pass your domain model to the find function:

        var annotations = new nosql.Annotations();
        var Employee = function(id, name, salary) {
            this.id = id;
            this.name = name;
            this.salary = salary;
            this.giveRaise = function(percent) {
                this.salary *= percent;
            };
        };
        annotations.mapClass(Employee, {'table' : 'employee'});

        var onSession = function(err, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            session.find(Employee, 0, onData);
        };

    Updating data

    You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database using the update function:

        var onData = function(err, emp) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            console.log('Found: ', JSON.stringify(emp));
            emp.giveRaise(0.12); // gee, thanks!
            session.update(emp); // oops, session is out of scope here
        };

    Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector API takes a fixed number of parameters and returns a fixed number of result parameters to the callback function, but the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function:

        var onSession = function(err, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            session.find(Employee, 0, onData, session);
        };

        var onData = function(err, emp, session) {
            if (err) {
                console.log(err);
                // ... error handling
            }
            console.log('Found: ', JSON.stringify(emp));
            emp.giveRaise(0.12); // gee, thanks!
            session.update(emp, onUpdate); // session is now in scope
        };

        var onUpdate = function(err, emp) {
            if (err) {
                console.log(err);
                // ... error handling
            }
        };

    Inserting data

    Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it:

        var onSession = function(err, session) {
            var data = new Employee(999, 'Mat Keep', 20000000);
            session.persist(data, onInsert);
        };

    Deleting data

    To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove; only the key field is relevant:

        var onSession = function(err, session) {
            var key = new Employee(999);
            session.remove(key, onDelete);
        };

    More extensive queries

    We are working on the implementation of more extensive queries along the lines of the criteria query API. Stay tuned.

    How to evaluate

    The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js. You can also clone the project on GitHub. Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here.

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views. Let's take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views. The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size, except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARITAL_STATUS, CUST_INCOME_LEVEL). The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns, and the statistics gathering procedure skips NULL values. The CUST_MARITAL_STATUS (38,072) and CUST_INCOME_LEVEL (55,459) columns show fewer than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table, 17,428 rows have NULL as the value for the CUST_MARITAL_STATUS column (17,428 + 38,072 = 55,500), while 41 rows have NULL values for the CUST_INCOME_LEVEL column (41 + 55,459 = 55,500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm uses all non-NULL values when gathering basic table and column level statistics.

    Now that we have a clear understanding of what sample size to expect, let's include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, and the column statistics are the same as in the previous case, except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics; AUTO_SAMPLE_SIZE still uses all the rows in the table minus the NULL rows to gather the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column. When we create a histogram, we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically, all of the histograms required for a table are built from the same sample. In our example, the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450 rows), as these columns contained only non-null values. However, if one or more of the columns that require a histogram has null values, then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the number of nulls varies greatly between the columns, we may create multiple samples: one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column.
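
    A short sketch of how to reproduce these observations, using standard DBMS_STATS calls and dictionary views (it assumes the SH sample schema is installed and that you query the USER_* views as SH, or the ALL_* views with an OWNER predicate):

        -- gather with the default AUTO_SAMPLE_SIZE, without histograms
        BEGIN
            DBMS_STATS.GATHER_TABLE_STATS(
                ownname    => 'SH',
                tabname    => 'CUSTOMERS',
                method_opt => 'FOR ALL COLUMNS SIZE 1');
        END;
        /

        -- table-level sample size
        SELECT table_name, num_rows, sample_size
        FROM   user_tables
        WHERE  table_name = 'CUSTOMERS';

        -- column-level sample sizes and null counts
        SELECT column_name, num_nulls, sample_size
        FROM   user_tab_col_statistics
        WHERE  table_name = 'CUSTOMERS'
        ORDER  BY column_name;

    For each column, num_nulls + sample_size should add up to the table's row count, matching the arithmetic above.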

  • Custom links in RichTextBox

    - by IVlad
    Suppose I want every word starting with a # to generate an event on double click. For this I have implemented the following test code:

        private bool IsChannel(Point position, out int start, out int end)
        {
            if (richTextBox1.Text.Length == 0)
            {
                start = end = -1;
                return false;
            }
            int index = richTextBox1.GetCharIndexFromPosition(position);
            int stop = index;
            while (index >= 0 && richTextBox1.Text[index] != '#')
            {
                if (richTextBox1.Text[index] == ' ')
                {
                    break;
                }
                --index;
            }
            if (index < 0 || richTextBox1.Text[index] != '#')
            {
                start = end = -1;
                return false;
            }
            while (stop < richTextBox1.Text.Length && richTextBox1.Text[stop] != ' ')
            {
                ++stop;
            }
            --stop;
            start = index;
            end = stop;
            return true;
        }

        private void richTextBox1_MouseMove(object sender, MouseEventArgs e)
        {
            textBox1.Text = richTextBox1.GetCharIndexFromPosition(new Point(e.X, e.Y)).ToString();
            int d1, d2;
            if (IsChannel(new Point(e.X, e.Y), out d1, out d2) == true)
            {
                if (richTextBox1.Cursor != Cursors.Hand)
                {
                    richTextBox1.Cursor = Cursors.Hand;
                }
            }
            else
            {
                richTextBox1.Cursor = Cursors.Arrow;
            }
        }

    This handles detecting words that start with # and making the mouse cursor a hand when it hovers over them. However, I have the following two problems:

    1. If I try to implement a double click event for richTextBox1, I can detect when a word is clicked, but that word is highlighted (selected), which I'd like to avoid. I can deselect it programmatically by selecting the end of the text, but that causes a flicker, which I would also like to avoid. What ways are there to do this?

    2. The GetCharIndexFromPosition method returns the index of the character that is closest to the cursor. This means that if the only thing my RichTextBox contains is a word starting with a #, then the cursor will be a hand no matter where on the rich text control it is. How can I make it so that it is only a hand when it hovers over an actual word, or a character that is part of a word I'm interested in? The implemented URL detection also partially suffers from this problem: if I enable detection of URLs and only write www.test.com in the rich text editor, the cursor will be a hand as long as it is on or below the link; it will not be a hand if it's to the right of the link, however. I'm fine even with this behavior, if making the cursor a hand if and only if it's on the text proves to be too difficult. I'm guessing I'll have to resort to some sort of Windows API calls, but I don't really know where to start.

    I am using Visual Studio 2008 and I would like to implement this myself.

    Update: The flickering problem would be solved if I could make it so that no text is selectable through double clicking, only through dragging the mouse cursor. Is this easier to achieve? I don't really care whether one can select text by double clicking in this case.

  • <optgroup> Not working in jQuery Dropdown

    - by Santhosh Kumar
    I have an asp:DropDownList which I have changed to a jQuery multiselect. I have to group the data inside the dropdown, and I am grouping it at runtime. With a normal ASP dropdown it works; when applying the jQuery multiselect, it doesn't.

    Source:

        <link rel="stylesheet" type="text/css" href="Styles/jquery.multiselect.css" />
        <link rel="stylesheet" type="text/css" href="Styles/jquery.multiselect.filter.css" />
        <link rel="stylesheet" type="text/css" href="Styles/style.css" />
        <link rel="stylesheet" type="text/css" href="Styles/prettify.css" />
        <%--<script src="Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>--%>
        <script src="Scripts/jquery-1.4.1.js" type="text/javascript"></script>
        <link rel="stylesheet" type="text/css" href="http://ajax.googleapis.com/ajax/libs/jqueryui/1/themes/ui-lightness/jquery-ui.css" />
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.js"></script>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1/jquery-ui.min.js"></script>
        <script type="text/javascript" src="Scripts/jquery.multiselect.js"></script>
        <script type="text/javascript" src="Scripts/jquery.multiselect.filter.js"></script>
        <script type="text/javascript" src="Scripts/prettify.js"></script>

        <script type="text/javascript">
            $(document).ready(function () {
                // Create groups for dropdown list
                $("option[classification='LessThanFive']").wrapAll("<optgroup label='Less Than Five' />");
                $("option[classification='GreaterThanFive']").wrapAll("<optgroup label='Greater Than Five' />");
            });
        </script>

        <asp:DropDownList ID="MobileData" runat="server" OnDataBound="ddl_DataBound">
        </asp:DropDownList>

    Code-behind:

        protected void ddl_DataBound(object sender, EventArgs e)
        {
            foreach (ListItem item in ((DropDownList)sender).Items)
            {
                if (System.Int32.Parse(item.Value) < 2)
                    item.Attributes.Add("classification", "LessThanFive");
                else
                    item.Attributes.Add("classification", "GreaterThanFive");
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            ListItemCollection list = new ListItemCollection();
            list.Add(new ListItem("1", "1"));
            list.Add(new ListItem("2", "2"));
            list.Add(new ListItem("3", "3"));
            list.Add(new ListItem("4", "4"));
            list.Add(new ListItem("5", "5"));
            list.Add(new ListItem("6", "6"));
            list.Add(new ListItem("7", "7"));
            list.Add(new ListItem("8", "8"));
            list.Add(new ListItem("9", "9"));
            list.Add(new ListItem("10", "10"));
            MobileData.DataSource = list;
            MobileData.DataBind();
        }

    Where am I going wrong?

    Read the article

  • How can I get `find` to ignore .svn directories?

    - by John Kugelman
    I often use the find command to search through source code, delete files, whatever. Annoyingly, because Subversion stores a duplicate of each file in its .svn/text-base/ directories, my simple searches end up returning lots of duplicate results. For example, I want to recursively search for uint in multiple messages.h and messages.cpp files:

        # find -name 'messages.*' -exec grep -Iw uint {} +
        ./messages.cpp: Log::verbose << "Discarding out of date message: id " << uint(olderMessage.id)
        ./messages.cpp: Log::verbose << "Added to send queue: " << *message << ": id " << uint(preparedMessage->id)
        ./messages.cpp: Log::error << "Received message with invalid SHA-1 hash: id " << uint(incomingMessage.id)
        ./messages.cpp: Log::verbose << "Received " << *message << ": id " << uint(incomingMessage.id)
        ./messages.cpp: Log::verbose << "Sent message: id " << uint(preparedMessage->id)
        ./messages.cpp: Log::verbose << "Discarding unsent message: id " << uint(preparedMessage->id)
        ./messages.cpp: for (uint i = 0; i < 10 && !_stopThreads; ++i) {
        ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Discarding out of date message: id " << uint(olderMessage.id)
        ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Added to send queue: " << *message << ": id " << uint(preparedMessage->id)
        ./.svn/text-base/messages.cpp.svn-base: Log::error << "Received message with invalid SHA-1 hash: id " << uint(incomingMessage.id)
        ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Received " << *message << ": id " << uint(incomingMessage.id)
        ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Sent message: id " << uint(preparedMessage->id)
        ./.svn/text-base/messages.cpp.svn-base: Log::verbose << "Discarding unsent message: id " << uint(preparedMessage->id)
        ./.svn/text-base/messages.cpp.svn-base: for (uint i = 0; i < 10 && !_stopThreads; ++i) {
        ./virus/messages.cpp:void VsMessageProcessor::_progress(const string &fileName, uint scanCount)
        ./virus/messages.cpp:ProgressMessage::ProgressMessage(const string &fileName, uint scanCount)
        ./virus/messages.h: void _progress(const std::string &fileName, uint scanCount);
        ./virus/messages.h: ProgressMessage(const std::string &fileName, uint scanCount);
        ./virus/messages.h: uint _scanCount;
        ./virus/.svn/text-base/messages.cpp.svn-base:void VsMessageProcessor::_progress(const string &fileName, uint scanCount)
        ./virus/.svn/text-base/messages.cpp.svn-base:ProgressMessage::ProgressMessage(const string &fileName, uint scanCount)
        ./virus/.svn/text-base/messages.h.svn-base: void _progress(const std::string &fileName, uint scanCount);
        ./virus/.svn/text-base/messages.h.svn-base: ProgressMessage(const std::string &fileName, uint scanCount);
        ./virus/.svn/text-base/messages.h.svn-base: uint _scanCount;

    How can I tell find to ignore the .svn directories?
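
    One widely used answer, offered as a sketch rather than anything from the question: prune the .svn directories so find never descends into them. The action on the other side of -o then only ever runs for the surviving paths:

        # Prune each .svn directory; the branch after -o applies only to what survives.
        find . -name .svn -prune -o -name 'messages.*' -exec grep -Iw uint {} +

    Recent GNU grep (2.5.3 and later) can also do this without find: grep -rIw --include='messages.*' --exclude-dir=.svn uint .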

    Read the article

  • CSS height doesn't work perfectly in Firefox

    - by Shashank
        div.aboutstandard ul.aboutlist li {
            float: left;
            padding-left: 10px;
            padding-right: 10px;
            width: 300px;
            height: 280px;
        }

    When I set these dimensions for the height, it works perfectly in Chrome and Internet Explorer, but Firefox renders different dimensions: the text goes under the next image. PHP code:

        <ul class="aboutlist1">
            <li>
                <img class="aboutimg" src="images1.jpg" alt="<? $lang->text('ABOUT1'); ?>"/>
                <h1><? $lang->text('ABOUT1'); ?></h1>
                <p class="abouttext"><? $lang->text('ABOUT_TEXT1'); ?></p>
            </li>
            <li>
                <img class="aboutimg" src="images2.jpg" alt="<? $lang->text('ABOUT2'); ?>"/>
                <h1><? $lang->text('ABOUT2'); ?></h1>
                <p class="abouttext"><? $lang->text('ABOUT_TEXT2'); ?></p>
            </li>
        </ul>
        <ul class="aboutlist">
            <li>
                <img class="aboutimg" src="image3.jpg" alt="<? $lang->text('ABOUT3'); ?>"/>
                <h1><? $lang->text('ABOUT3'); ?></h1>
                <p class="abouttext"><? $lang->text('ABOUT_TEXT3'); ?></p>
            </li>
            <li>
                <img class="aboutimg" src="image4.jpg" alt="<? $lang->text('ABOUT4'); ?>"/>
                <h1><? $lang->text('ABOUT4'); ?></h1>
                <p class="abouttext"><? $lang->text('ABOUT_TEXT4'); ?></p>
            </li>
        </ul>
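
    A hedged guess at a fix, not part of the question: when the text is taller than the fixed 280px, Firefox lets it spill past the floated li and wrap under the next one. Containing the overflow inside each item usually evens the browsers out; overflow: hidden is one illustrative choice:

        div.aboutstandard ul.aboutlist li {
            float: left;
            padding-left: 10px;
            padding-right: 10px;
            width: 300px;
            height: 280px;
            overflow: hidden; /* keep overflowing text inside the item instead of under the next float */
        }

    Note also that the posted rule matches only ul.aboutlist, so the first list (class aboutlist1) gets no width or height at all, which by itself can explain a layout difference.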

    Read the article

  • SharePoint OCR image files indexing

    Introduction

    This article describes how to set up indexing of image files (including TIFF, PDF, JPEG, BMP...) using OCR technology. The indexing described below utilizes Microsoft IFilter technology and as such is not specific to SharePoint; it can be used with any product that builds on Microsoft indexing: Microsoft Search, Desktop Search, SQL Server search, and, through plug-ins, Google Desktop Search. I, however, use it with Microsoft Windows SharePoint Services 2003. For those other products, the registration may need to be slightly different.

    Background

    One of the projects I was working on required storage of old documents scanned into PDF files. A separate team of people was responsible for providing tags for a search engine so those image documents could be found. The whole process was clumsy, labor intensive, and error prone. That was what started me on my exploration path.

    OCR

    The first search I fired was for Open Source OCR products. Pretty quickly, I narrowed it down to Tesseract (http://code.google.com/p/tesseract-ocr/). Tesseract is an orphaned brain child of HP, which worked on it from 1985 to 1995. Then it was moved to Open Source, and now, if I understand it correctly, Google is working on it. With credentials like that, it's no wonder that Tesseract scores some of the highest marks on OCR recognition and accuracy. After downloading and struggling just a bit, I got Tesseract to work. The struggling part was that the home page claims its base input format is a TIFF file. Maybe my TIFFs were bad, but I was able to get it to work only with BMP files.

    Image files conversion

    So now that I have an OCR that can convert BMP files into text, how do I get text out of the image PDF files? One more search, and I settled on ImageMagick (http://www.imagemagick.org/). This is another wonderful Open Source utility that can convert almost any file into an image. It worked out of the box, converting any TIFF file into a bitmap, but to get PDF files converted, it requires Ghostscript (http://mirror.cs.wisc.edu/pub/mirrors/ghost/GPL/gs864/gs864w32.exe).

    Dealing with text PDFs

    With that utility installed, I was cooking: I can convert any file (in particular PDF and TIFF) into a bitmap, and then I can extract the text out of the bitmap. The only consideration was to treat PDF files that already contain text differently - after all, OCR is very computation intensive and somewhat error prone even with perfect image quality and resolution. Another quick search, and I had PDFTOTEXT (ftp://ftp.foolabs.com/pub/xpdf/xpdf-3.02pl4-win32.zip) - thank God for Open Source! With it, I can pull text out of a PDF in an eye blink. I would get nothing for pure image PDFs, but I already have a solution for that!

    Batch process

    It took another 15 minutes to set up a batch script to automate the process:

    1. Check the file extension.
    2. If the file is a PDF, try to extract text out of it:
       - if there is more than a certain amount of text in the file - done!
       - if there is no text, convert the first page into a bitmap and run OCR on the bitmap.
    3. For any other file type, convert the file into a bitmap and run OCR on the bitmap.

    Once you unzip the attached project, check out the bin\OCR.BAT file (a hedged sketch of it follows below). It will create a temporary file in the directory where your source file is, with the same name plus a '.txt' extension.
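
    A hedged sketch of what such a batch file might look like - the real bin\OCR.BAT ships with the attached project; the 50-byte threshold, the temporary .bmp name, and the assumption that the tools are on the PATH are illustrative:

        @echo off
        rem Sketch of the decision flow described above; %1 is the source file.
        set SRC=%~1
        set TXT=%~1.txt

        if /I "%~x1"==".pdf" (
            rem Text PDFs first: pulling the text layer is cheap and lossless.
            pdftotext "%SRC%" "%TXT%"
            for %%F in ("%TXT%") do if %%~zF GTR 50 goto :eof
            rem No usable text layer: rasterize page 1 (ImageMagick + Ghostscript).
            convert "%SRC%[0]" "%SRC%.bmp"
        ) else (
            rem Any other type: convert straight to a bitmap.
            convert "%SRC%" "%SRC%.bmp"
        )

        rem Tesseract appends .txt to the output base name it is given.
        tesseract "%SRC%.bmp" "%SRC%"
        del "%SRC%.bmp"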

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server, configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g, has achieved exceptional performance on the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.

    - The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32% and 2x on Data Loading, Default Aggregation and Usage Based Aggregation, respectively.
    - The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database.
    - The SPARC T4-2 server configured with Oracle Essbase aggregated and stored values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.
    - The Sun Storage F5100 Flash Array provides more than a 20% out-of-the-box improvement over a mid-size fiber channel disk array for default aggregation and usage-based aggregation.
    - The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.

    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

    System                               Data Size            Database Load  Default Aggregation  Usage Based Aggregation
                                         (millions of items)  (minutes)      (minutes)            (minutes)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz    1000                 149            398*                 55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz  1000                 269            526                  115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz   400                  120            448                  18

    * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8

    Configuration Summary

    Hardware Configuration:
    - 1 x SPARC T4-2
    - 2 x 2.85 GHz SPARC T4 processors
    - 128 GB memory
    - 2 x 300 GB 10000 RPM SAS internal disks

    Storage Configuration:
    - 1 x Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
    - SAS HBA with 2 SAS channels
    - Data storage scheme: striped (RAID 0)
    - Oracle Solaris ZFS

    Software Configuration:
    - Oracle Solaris 10 8/11
    - Installer V 11.1.2.2.100
    - Oracle Essbase Client v 11.1.2.2.100
    - Oracle Essbase v 11.1.2.2.100
    - Oracle Essbase Administration Services 64-bit
    - Oracle Database 11g Release 2 (11.2.0.3)
    - HP's Mercury Interactive QuickTest Professional 9.5.0

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce the benchmark results, which include:

    - Database Load: time elapsed to build a database, including outline and data load.
    - Default Aggregation: time elapsed to build the aggregation.
    - Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.

    Summary of the data used for this benchmark:
    - 40 flat files, each of size 1.2 GB, 49.4 GB in total
    - 10 million rows per file, 1 billion rows total
    - 28 columns of data per row
    - Database outline has 15 dimensions (five of them are attribute dimensions)
    - Customer dimension has 13.3 million members
    - 3 rule files

    Key Points and Best Practices

    - The Sun Storage F5100 Flash Array was used to accelerate application performance.
    - Setting the data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%.
    - The factors influencing aggregation materialization performance are the aggregate storage cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were an aggregate storage cache of 32 GB and CALCPARALLEL of 16 (a configuration sketch follows at the end of this entry).

    See Also

    - Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server - oracle.com
    - Oracle Essbase - oracle.com, OTN
    - SPARC T4-2 Server - oracle.com, OTN
    - Oracle Solaris - oracle.com, OTN
    - Oracle Database 11g Release 2 Enterprise Edition - oracle.com, OTN

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.
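
    Following up on the Key Points above, a hedged sketch of how the two cited thread settings might appear in essbase.cfg. The application and database names are placeholders, and the comment style is illustrative, not from the article:

        ; Threads used for parallel view materialization during aggregation
        CALCPARALLEL AppName DbName 16

        ; Threads used to prepare data load buffers
        DLTHREADSPREPARE AppName DbName 64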

    Read the article

  • Evaluating Oracle Data Mining Has Never Been Easier - Evaluation "Kit" Available

    - by chberger
    Now you can quickly and easily get set up to start using Oracle Data Mining for evaluation purposes. Just go to the Oracle Technology Network (OTN) and follow these simple steps.

    Oracle Data Mining Evaluation "Kit" Instructions

    Step 1: Download and install Oracle Database 11g Release 2

    Anyone can download and install the Oracle Database for free for evaluation purposes; read the OTN web site for details. The 11.2.0.1.0 database is the minimum, 11.2.0.2 is better, and naturally 11.2.0.3 is best if you are a current customer on active support. Either 32-bit or 64-bit is fine. 4 GB of RAM or more works fine for SQL Developer and the Oracle Data Miner GUI extension. Downloading and installing the database should take about an hour or so, depending on your network and computer. For more instructions on setting up Oracle Data Mining, see: http://www.oracle.com/technetwork/database/options/odm/dataminerworkflow-168677.html

    When you install the Oracle Database, the sample Examples data should also be installed, e.g. the Release 2 Examples win32_11gR2_examples.zip (565,154,740 bytes), which contains examples of how to use the Oracle Database. Download it if you are new to Oracle and want to try some of the examples presented in the documentation.

    Step 2: Install SQL Developer 3.1 (the Oracle Data Mining extension installs automatically)

    Step 3: Follow the four free step-by-step Oracle-by-Example e-training lessons:

    - Setting Up Oracle Data Miner 11g Release 2: covers the process of setting up Oracle Data Miner 11g Release 2 for use within Oracle SQL Developer 3.0.
    - Using Oracle Data Miner 11g Release 2: covers the use of Oracle Data Miner to perform data mining against Oracle Database 11g Release 2. In this lesson, you examine and solve a data mining business problem by using the Oracle Data Miner graphical user interface (GUI).
    - Star Schema Mining Using Oracle Data Miner: covers the use of Oracle Data Miner to perform star schema mining against Oracle Database 11g Release 2.
    - Text Mining Using Oracle Data Miner: covers the use of Oracle Data Miner to perform text mining against Oracle Database 11g Release 2.

    That's it! Easy, fun and the fastest way to get started evaluating Oracle Data Mining. Enjoy!

    Charlie
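
    A hedged addition, not from the article: once the database is installed, a quick way to confirm the Data Mining option is available in your instance before starting the lessons:

        -- Returns TRUE when the Data Mining option is enabled in this instance.
        SELECT parameter, value
        FROM   v$option
        WHERE  parameter = 'Data Mining';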

    Read the article

  • SQL Azure: Notes on Building a Shard Technology

    - by Herve Roggero
    In Chapter 10 of the book on SQL Azure (http://www.apress.com/book/view/9781430229612) that I am co-authoring, I dig deeper into what it takes to write a shard. It's actually a pretty cool exercise, and I wanted to share some thoughts on how I am designing the technology.

    A shard is a technology that spreads the load of database requests over multiple databases, as transparently as possible. The type of shard I am building is called a Vertical Partition Shard (VPS). A VPS is a mechanism by which the data is stored in one or more databases behind the scenes, but your code has no idea at design time which data is in which database. It's like having a mini cloud for records instead of services.

    Imagine you have three SQL Azure databases with the same schema (DB1, DB2 and DB3), and you would like to issue a SELECT * FROM Users against all three databases, concatenate the results into a single result set, and order by last name. Imagine you want to ensure your code doesn't need to change if you add a new database to the shard (DB4). Now imagine that you want all three databases queried at the same time, in a multi-threaded manner, so your code doesn't have to wait for three sequential database calls. Then imagine you would like a breadcrumb (in the form of a new, virtual column) that hints at which database a record came from, so that you could update it if needed. Now imagine all that is done through the standard SqlClient library... and you have the shard I am currently building.

    Here are some lessons learned and techniques I am using with this shard:

    - Parallel processing: Querying databases in parallel is not too hard using the Task Parallel Library; all you need is to lock your resources when needed.
    - Deleting/updating data: Not too bad either, as long as you have a breadcrumb. It becomes more difficult if you need to update a single record and you don't know which database it is in.
    - Inserting data: I am using a round-robin approach in which each new insert request is directed to the next database in the shard. Not sure how to deal with bulk loads just yet...
    - Shard databases: I use a static collection of SqlConnection objects which needs to be loaded once; from there on, all the shard commands use this collection.
    - Extension methods: To make the shard commands look like part of the SqlClient classes, I use extension methods. For example, I added ExecuteShardQuery and ExecuteShardNonQuery methods.
    - Exceptions: Capturing exceptions in multi-threaded code is interesting... but I kept it simple for now; I am using a ConcurrentQueue to store my exceptions.
    - Database GUID: Every database in the shard is given a GUID, calculated from the connection string's values.
    - DataTable: The shard methods return a DataTable object which can be bound to objects.

    A sketch of the query fan-out follows below. I will be sharing the code soon as an open-source project on CodePlex. Please stay tuned on Twitter to know when it will be available (@hroggero), or check www.bluesyntax.net for updates on the shard. Thanks!
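
    A hedged sketch of the fan-out described above - not the book's actual code. The ShardExtensions class name, the __database breadcrumb column, and hanging the extension method off SqlCommand are illustrative assumptions (the article itself only names ExecuteShardQuery and ExecuteShardNonQuery):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        public static class ShardExtensions
        {
            // Loaded once at startup; one connection string per shard member.
            public static readonly List<string> ShardConnections = new List<string>();

            public static DataTable ExecuteShardQuery(this SqlCommand template)
            {
                var results = new ConcurrentQueue<DataTable>();
                var errors = new ConcurrentQueue<Exception>();

                // Fan the same command out to every database in parallel.
                Parallel.ForEach(ShardConnections, connectionString =>
                {
                    try
                    {
                        using (var conn = new SqlConnection(connectionString))
                        using (var cmd = new SqlCommand(template.CommandText, conn))
                        using (var adapter = new SqlDataAdapter(cmd))
                        {
                            var table = new DataTable();
                            adapter.Fill(table); // opens and closes the connection itself

                            // Breadcrumb: tag every row with its source database.
                            var crumb = table.Columns.Add("__database", typeof(string));
                            foreach (DataRow row in table.Rows)
                                row[crumb] = conn.Database;

                            results.Enqueue(table);
                        }
                    }
                    catch (Exception ex) { errors.Enqueue(ex); }
                });

                if (!errors.IsEmpty) throw new AggregateException(errors);

                // Concatenate the per-database results into one table.
                var merged = new DataTable();
                foreach (var table in results) merged.Merge(table);
                return merged;
            }
        }

    The round-robin insert and the per-database GUID would layer on top of the same static connection collection.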

    Read the article
