Search Results

Search found 17526 results on 702 pages for 'dynamic methods'.


  • How to cout a vector of structs (that's a class member) using the stream insertion operator

    - by Julz
    Hi, I'm trying to simply cout the elements of a vector using an overloaded stream operator. The vector contains Point, which is just a struct holding two doubles. The vector is a private member of a class called Polygon. Here's my Point.h:

        #ifndef POINT_H
        #define POINT_H
        #include <iostream>
        #include <string>
        #include <sstream>

        struct Point {
            double x;
            double y;

            // constructor
            Point() { x = 0.0; y = 0.0; }

            friend std::istream& operator >> (std::istream& stream, Point &p) {
                stream >> std::ws;
                stream >> p.x;
                stream >> p.y;
                return stream;
            }

            friend std::ostream& operator << (std::ostream& stream, Point &p) {
                stream << p.x << p.y;
                return stream;
            }
        };
        #endif

    My Polygon.h:

        #ifndef POLYGON_H
        #define POLYGON_H
        #include "Segment.h"
        #include <vector>

        class Polygon {
            // insertion operator needs work
            friend std::istream & operator >> (std::istream &inStream, Polygon &vertStr);
            // extraction operator
            friend std::ostream & operator << (std::ostream &outStream, const Polygon &vertStr);
        public:
            // Constructor
            Polygon(const std::vector<Point> &theVerts);
            // Default Constructor
            Polygon();
            // Copy Constructor
            Polygon(const Polygon &polyCopy);
            // Accessor/Modifier methods
            inline std::vector<Point> getVector() const { return vertices; }
            // Return number of Vector elements
            inline int sizeOfVect() const { return vertices.size(); }
            // add Point elements to vector
            inline void setVertices(const Point &theVerts) { vertices.push_back(theVerts); }
        private:
            std::vector<Point> vertices;
        };
        #endif

    And my Polygon.cc:

        using namespace std;
        #include "Polygon.h"

        // Constructor
        Polygon::Polygon(const vector<Point> &theVerts) {
            vertices = theVerts;
        }

        // Default Constructor
        Polygon::Polygon() {}

        istream & operator >> (istream &inStream, Polygon::Polygon &vertStr) {
            inStream >> ws;
            inStream >> vertStr;
            return inStream;
        }

        // extraction operator
        ostream & operator << (ostream &outStream, const Polygon::Polygon &vertStr) {
            outStream << vertStr.vertices << endl;
            return outStream;
        }

    I figure my Point insertion/extraction is right (I can insert and cout with it), so I figure I should be able to just do cout << myPoly[i] << endl; in my driver (in a loop), or even cout << myPoly[0] << endl; without a loop. I've tried all sorts of things (myPoly.at[i]; myPoly.vertices[i]; etc.) and all variations inside my output function (outStream << vertStr.vertices[i] << endl; within loops, and so on). When I just create a vector<Point> myVect; in my driver, I can cout << myVect.at(i) << endl; with no problems. I've tried to find an answer for days and am really lost, not through lack of trying. Thanks in advance for any help. Please excuse the lack of comments and formatting; there are also bits and pieces missing, but I really just need an answer to this problem. Thanks again.
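
    A sketch of one likely fix, using the names from the question (not the poster's final code): operator<< for Point should take a const reference and separate the two values, and operator<< for Polygon has to loop over the vector itself, because std::ostream has no overload for std::vector.

        // Point.h: accept a const Point so it also works on const vectors
        friend std::ostream& operator << (std::ostream& stream, const Point &p) {
            return stream << p.x << ' ' << p.y;
        }

        // Polygon.cc: the friend declaration already grants access to vertices
        std::ostream & operator << (std::ostream &outStream, const Polygon &vertStr) {
            for (std::size_t i = 0; i < vertStr.vertices.size(); ++i)
                outStream << vertStr.vertices[i] << '\n';
            return outStream;
        }

    With that in place, cout << myPoly; prints every vertex. Printing a single point from the driver would need an accessor such as myPoly.getVector().at(i), since vertices is private and Polygon defines no operator[].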

    Read the article

  • how to count checked checkboxes in different divs

    - by KMKMAHESH
    <head><title>STUDENT WISE EXAM BACKLOGS DISPLAY FOR EXAM REGISTRATION</title> <style type="text/css"> th { font-family:Arial; color:black; border:1px solid #000; } thead { display:table-header-group; } tbody { display:table-row-group; } td { border:1px solid #000; } </style> <script type="text/javascript" > function check_value(year,sem){ ysem="ys"+year+sem; var reg=document.registration.regulation.value; subjectsys="subjects"+year+sem; amountsys="amount"+year+sem; if(year==1){ if(sem==1){ var value_list = document.getElementById("ys11").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys12").getElementsByTagName('input'); } }elseif(year==2){ if(sem==1){ var value_list = document.getElementById("ys21").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys22").getElementsByTagName('input'); } }elseif(year==3){ if(sem==1){ var value_list = document.getElementById("ys31").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys32").getElementsByTagName('input'); } }elseif(year==4){ if(sem==1){ var value_list = document.getElementById("ys41").getElementsByTagName('input'); } if(sem==2){ var value_list = document.getElementById("ys42").getElementsByTagName('input'); } } values = 0; for (var i=0; i<value_list.length; i++){ if (value_list[i].checked) { values=values+1; } } document.getElementById(subjectsys).value=values; if (values=="0") { document.getElementById(amountsys).innerHTML=""; return; } if (window.XMLHttpRequest) {// code for IE7+, Firefox, Chrome, Opera, Safari xmlhttp=new XMLHttpRequest(); } else {// code for IE6, IE5 xmlhttp=new ActiveXObject("Microsoft.XMLHTTP"); } xmlhttp.onreadystatechange=function() { if (xmlhttp.readyState==4 && xmlhttp.status==200) { document.getElementById(amountsys).innerHTML=xmlhttp.responseText; } } xmlhttp.open("GET","fee.php?year="+year+"&reg="+reg+"&sem="+sem+"&sub="+values,true); xmlhttp.send(); } </script> </head> <form id="registration" name="registration" action=subverify.php method=POST></br></br> <center> Backlog Subjects for <b>08KN1A1219</b> </br></br> <table border='1'><tr> <th width='40'>&nbsp;</th><th width='90'>Regulation</th><th width='40'>Year</th> <th width='40'>Sem</th><th width='350'>Subname</th> <th width='70'>Internals</th><th width='70'>Externals</th> </tr><div id="ys41"><tr> <td width='40'><center><input type="checkbox" name="sub[]" value="344" onclick="check_value(4,1)"></center></td> <td width='90'><center>R07</center></td><td width='40'><center>4</center></td><td width='40'><center>1</center></td> <td width='350'>EMBEDDED SYSTEMS</td><td width='70'><center>18</center></td> <td width='70'><center>17</center></td></tr><tr><td colspan=5 align=right><b>Subjects: </b><input size=2 type=textbox id=subjects41 name=subjects41 value=0 maxlength=2 readonly=readonly></td> <td align=right><b>Amount :</b></td> <input type='hidden' name='regulation' id=regulationsubjects41 value='R07'> <td><div id="amount41"><input type="textbox" name="amountval41" value="0" size="5" maxlength="5" readonly="readonly"></div></td></tr></div><div id="ys42"><tr> <td width='40'><center><input type="checkbox" name="sub[]" value="527" onclick="check_value(4,2)"></center></td> <td width='90'><center>R07</center></td><td width='40'><center>4</center></td><td width='40'><center>2</center></td> <td width='350'>DESIGN PATTERNS</td><td width='70'><center>12</center></td> <td width='70'><center>14</center></td></tr><tr><td colspan=5 align=right><b>Subjects: 
</b><input size=2 type=textbox id=subjects42 name=subjects42 value=0 maxlength=2 readonly=readonly></td> <td align=right><b>Amount :</b></td> <input type='hidden' name='regulation' id=regulationsubjects42 value='R07'> <td><div id="amount42"><input type="textbox" name="amountval42" value="0" size="5" maxlength="5" readonly="readonly"></div></td></tr></div><tr><td colspan=7><center><b><div id="maintotal"><input type="textbox" name="maintotal" value="0" size="5" maxlength="5" readonly="readonly"></div></center></b></td></tr><tr></tr></table></br></br> <center><input type='hidden' name='htno' value='08KN1A1219'> <input type='submit' value='Register'></center></form></br> this is a output of a php file with using dynamic data in the form i want to count only the checkboxes in the div and it has to display in that subjectsdiv like subjects41 and subjects42 can any one please help me to update this javascript it passes some ajax request for displaying the fee
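
    A sketch of the counting logic, using the same ids as the posted markup. The immediate bug is that elseif is not valid JavaScript (it must be else if), and that syntax error alone stops the whole function from running; building the container id from year and sem removes the branching entirely:

        function countChecked(year, sem) {
            var box = document.getElementById("ys" + year + sem);      // e.g. "ys41"
            var inputs = box.getElementsByTagName("input");
            var checked = 0;
            for (var i = 0; i < inputs.length; i++) {
                if (inputs[i].type === "checkbox" && inputs[i].checked) checked++;
            }
            document.getElementById("subjects" + year + sem).value = checked;
            return checked;
        }

    One caveat: in the posted HTML the <div id="ys41"> sits between table rows, and browsers move such divs out of the table while parsing, so they can end up empty. Giving each checkbox a per-section class or name (and counting those) is more robust than wrapping rows in divs.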

    Read the article

  • PHP: How to automate building ~100 <UL>/<LI> menu items, while keeping the menu structure file flat / simply manageable?

    - by Sam
    Above: current "stupid" menu. (entire ul/li menu for javascript menu system) + (some li lines as page-specific submenu) Hi folks! With passion for automation and elegancy, but limited knowledge/knowhow, im stuck with "my hands in my hair" as we Dutch say, for my current menu system works perfectly, but is a pain in the a*s to update! So, i would appreciate it greatly, if you can suggest how to automate this in php: how to let the php generate the html menu code basing on a flat menu input file with TABS indented. OLD SITUATION <ul> <!-- about 100 of these <li>....</li> lines --> <li><a href="carrot.php"><p class="mnu" style="background-position:0 -820px"><? echo __("carrot juice") ?></p></a></li> <!-- lots of data, with only little bit thats really the menu itself--> </ul a javascript file reads a ul/li structure as input to build menu of format in that ul/li, the items with a hyperlink and sprite-bg position represent webpages, (inside LI) while items without hyperlink and sprite-bg are just headers of that menusection, (inside H6) to highlight the current page in the menu, the javascript menumaker uses an id number. this number corresponds to the consequtive li that is a webpage, skips h6 headers correctly. these h6 headers are only there for when importing sections of the same menu as submenu. non-li headers are not shown in menu, nore counted by the javascript menu for their ID. to know which page should be shown, i have to count from ID 0, the li items till finding the current webpage in the li structure and then manually put it in each webpage! BUT: changing an item in li order, means stupidly re-counting their entire li again! each webpage has an icon (= sprite bg-position numer), which is also used in the webpage. INTENDED RESULT I dream of, once setting what the current webpage is (e.g carrot.php) the menu system automatically "finds" and "counts" the li's and returns the id nr (for proper highlight of main menu); generates the entire menu html, and depending on which headings are set for submenu, (e.g. meals, drinks) generates those submenu (entire section below each given header); ginally adds h5 highlight inside the li of that submenu item. For the menu, i wish an easily readable, simple plain txt menu that is indented with tabs, (each tab is one depth for example) and further tabs follow for url and sprite position of icon. MY DREAM MENU-MANAGEMENT FILE |>TAB SEPARATED/INDENTED FLATMENU FILE |MUST BE CALCULATED BY PHP: |>MENUTEXT============URL=============SPRITE=====|ID===TAG================== |>about "#" -520 |00 li |> INFORMATION |—— h6 |> physical state "physical.php" -920 |01 li |> mental health "mental.php" -10 |02 li |> |>apetite "#" -1290 |03 li |> meals "#" -600 |04 li |> COLD MEAL |—— h6 |> egg salade "salad.php" -1040 |05 li |> salmon fish "salmon.php" -540 |06 li |> HOT MEAL |—— h6 |> spare ribs "spareribs.php" -120 |07 li |> di macaroni "macaroni.php" -870 |08 li |> |> drinks "#" -230 |09 li |> JUCY DRINK |—— h6 |> carrot juice "carrot.php" -820 |10 li |> mango hive "mango.php" -270 |11 li DESIRED CHRONOLOGY php outputs the entire ul/li html so the javascript can show the menu: webpage items go inside li tags, and header items go inside h6 tags, e.g. <h6>JUCY DRINK</h6> Each website page has a url filename [eg: salad.php]. Based on this given fact, the php menu generator detects the pagename, gives the IDnr of the position of that page according to the li-item nr and sets variable for javascript to highlight current menu item. 
the menu items below the specified headers are loaded as submenu in which the current page.php is wrapped inside h5 to highlight current page in submenu: e.g. (<li><h5><a href="carrot.php"><p>..etc..</p></h5></li> Question Which methods / steps / (chronological)ways are there for doing this? I am no good in php programming, but am learning it so please dont write any code without a line of comment why I should use that method etc. Where do I start? If I am unclear in my question, please ask. Thanks. Much appreciated!! Concrete Task List from the provided Comments/Answers, sofar: (RobertB) First, get some PHP code working that can read through a tab-delimited file and put the data into an appropriate data structure. NOW WORKING AT THIS
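
    A sketch of RobertB's first step, under an assumed file layout (one item per line, indented with tabs for depth, then label, URL — or nothing for a header — and sprite offset, separated by tabs):

        function loadMenu($file) {
            $items = array();
            foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
                $depth = strlen($line) - strlen(ltrim($line, "\t"));
                $cols  = array_pad(explode("\t", trim($line)), 3, '');
                // rows without a URL are section headers (<h6>), the rest are pages (<li>)
                $items[] = array('depth' => $depth, 'label' => $cols[0],
                                 'url' => trim($cols[1], '"'), 'sprite' => $cols[2]);
            }
            return $items;
        }

        // The id the JavaScript menu needs is just the position of the current page
        // among the rows that are real <li> items (headers are skipped, as described above)
        function currentMenuId(array $items, $page) {
            $id = 0;
            foreach ($items as $item) {
                if ($item['url'] === '') continue;      // <h6> header, carries no id
                if ($item['url'] === $page) return $id;
                $id++;
            }
            return -1;                                  // page not found in the menu
        }

    From the same $items array a second pass can emit the <ul>/<li>/<h6> HTML, and a filtered pass per header can emit each submenu with the current page wrapped in <h5>.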

    Read the article

  • Need some suggestions on my software's architecture [code review]

    - by Sergio Tapia
    I'm making an open source C# library for other developers to use. My key concern is ease of use. This means using intuitive names, intuitive method usage and such. This is the first time I've done something with other people in mind, so I'm really concerned about the quality of the architecture. Plus, I wouldn't mind learning a thing or two. :) I have three classes: Downloader, Parser and Movie I was thinking that it would be best to only expose the Movie class of my library and have Downloader and Parser remain hidden from invocation. Ultimately, I see my library being used like this. using FreeIMDB; public void Test() { var MyMovie = Movie.FindMovie("The Matrix"); //Now MyMovie would have all it's fields set and ready for the big show. } Can you review how I'm planning this, and point out any wrong judgement calls I've made and where I could improve. Remember, my main concern is ease of use. Movie.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing; namespace FreeIMDB { public class Movie { public Image Poster { get; set; } public string Title { get; set; } public DateTime ReleaseDate { get; set; } public string Rating { get; set; } public string Director { get; set; } public List<string> Writers { get; set; } public List<string> Genres { get; set; } public string Tagline { get; set; } public string Plot { get; set; } public List<string> Cast { get; set; } public string Runtime { get; set; } public string Country { get; set; } public string Language { get; set; } public Movie FindMovie(string Title) { Movie film = new Movie(); Parser parser = Parser.FromMovieTitle(Title); film.Poster = parser.Poster(); film.Title = parser.Title(); film.ReleaseDate = parser.ReleaseDate(); //And so an so forth. } public Movie FindKnownMovie(string ID) { Movie film = new Movie(); Parser parser = Parser.FromMovieID(ID); film.Poster = parser.Poster(); film.Title = parser.Title(); film.ReleaseDate = parser.ReleaseDate(); //And so an so forth. } } } Parser.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using HtmlAgilityPack; namespace FreeIMDB { /// <summary> /// Provides a simple, and intuitive way for searching for movies and actors on IMDB. /// </summary> class Parser { private Downloader downloader = new Downloader(); private HtmlDocument Page; #region "Page Loader Events" private Parser() { } public static Parser FromMovieTitle(string MovieTitle) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindMovie(MovieTitle); return newParser; } public static Parser FromActorName(string ActorName) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindActor(ActorName); return newParser; } public static Parser FromMovieID(string MovieID) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindKnownMovie(MovieID); return newParser; } public static Parser FromActorID(string ActorID) { var newParser = new Parser(); newParser.Page = newParser.downloader.FindKnownActor(ActorID); return newParser; } #endregion #region "Page Parsing Methods" public string Poster() { //Logic to scrape the Poster URL from the Page element of this. return null; } public string Title() { return null; } public DateTime ReleaseDate() { return null; } #endregion } } ----------------------------------------------- Do you guys think I'm heading towards a good path, or am I setting myself up for a world of hurt later on? 
My original thought was to separate the downloading, the parsing and the actual populating to easily have an extensible library. Imagine if one day the website changed its HTML, I would then only have to modifiy the parsing class without touching the Downloader.cs or Movie.cs class. Thanks for reading and for helping!
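
    One detail worth fixing before the API question: as written, Movie.FindMovie("The Matrix") will not compile, because FindMovie is an instance method, and it never returns the film it builds. A sketch of the intended factory shape (property list shortened):

        public class Movie
        {
            public string Title { get; set; }
            // ... other properties as in the question ...

            public static Movie FindMovie(string title)
            {
                var parser = Parser.FromMovieTitle(title);
                return new Movie
                {
                    Title = parser.Title(),
                    // Poster, ReleaseDate, ... filled the same way
                };
            }
        }

    Exposing only Movie (via static factories) while keeping Parser and Downloader internal matches the stated goal. Note also that Parser.Poster() currently returns a string while Movie.Poster is an Image, so one of the two will need to change.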

    Read the article

  • HTML + JavaScript mouseover, mouseout, onclick not working in Firefox

    - by lucky
    Hello Everyone, My question is to get onMouseover,onMouseout,onMousedown,onClick on a table row. For which i am calling javascript userdefined functions. onMouseover --- Background color should change. onMouseout --- Reset to original color onClick --- First column checkbox/radio button should be set and background color should change onMousedown --- background color should change. My code in html is:- <tr onMouseOver="hover(this)" onMouseOut="hover_out(this)" onMouseDown="get_first_state(this)" onClick="checkit(this)" > and the methods in javascripts are:- var first_state = false; var oldcol = '#ffffff'; var oldcol_cellarray = new Array(); function hover(element) { if (! element) element = this; while (element.tagName != 'TR') { element = element.parentNode; } if (element.style.fontWeight != 'bold') { for (var i = 0; i<element.cells.length; i++) { if (element.cells[i].className != "no_hover") { oldcol_cellarray[i] = element.cells[i].style.backgroundColor; element.cells[i].style.backgroundColor='#e6f6f6'; } } } } // ----------------------------------------------------------------------------------------------- function hover_out(element) { if (! element) element = this; while (element.tagName != 'TR') { element = element.parentNode; } if (element.style.fontWeight != 'bold') { for (var i = 0; i<element.cells.length; i++) { if (element.cells[i].className != "no_hover") { if (typeof oldcol_cellarray != undefined) { element.cells[i].style.backgroundColor=oldcol_cellarray[i]; } else { element.cells[i].style.backgroundColor='#ffffff'; } //var oldcol_cellarray = new Array(); } } } } // ----------------------------------------------------------------------------------------------- function get_first_state(element) { while (element.tagName != 'TR') { element = element.parentNode; } first_state = element.cells[0].firstChild.checked; } // ----------------------------------------------------------------------------------------------- function checkit (element) { while (element.tagName != 'TR') { element = element.parentNode; } if (element.cells[0].firstChild.type == 'radio') { var typ = 0; } else if (element.cells[0].firstChild.type == 'checkbox') { typ = 1; } if (element.cells[0].firstChild.checked == true && typ == 1) { if (element.cells[0].firstChild.checked == first_state) { element.cells[0].firstChild.checked = false; } set_rowstyle(element, element.cells[0].firstChild.checked); } else { if (typ == 0 || element.cells[0].firstChild.checked == first_state) { element.cells[0].firstChild.checked = true; } set_rowstyle(element, element.cells[0].firstChild.checked); } if (typ == 0) { var table = element.parentNode; if (table.tagName != "TABLE") { table = table.parentNode; } if (table.tagName == "TABLE") { table=table.tBodies[0].rows; //var table = document.getElementById("js_tb").tBodies[0].rows; for (var i = 1; i< table.length; i++) { if (table[i].cells[0].firstChild.checked == true && table[i] != element) { table[i].cells[0].firstChild.checked = false; } if (table[i].cells[0].firstChild.checked == false) { set_rowstyle(table[i], false); } } } } } function set_rowstyle(r, on) { if (on == true) { for (var i =0; i < r.cells.length; i++) { r.style.fontWeight = 'bold'; r.cells[i].style.backgroundColor = '#f2f2c2'; } } else { for ( i =0; i < r.cells.length; i++) { r.style.fontWeight = 'normal'; r.cells[i].style.backgroundColor = '#ffffff'; } } } It is working as expected in IE. But coming to firefox i am surprised on seeing the output after so much of coding. 
In Firefox:-- onMouseOver is working as expected. color change of that particular row. onClick -- Setting the background color permenantly..eventhough i do onmouseover on different rows. the clicked previous row color is not reset to white. -- not as expected onclick on 2 rows..the background of both the rows is set..Only the latest row color should be set. other rows that are selected before should be set back..not as expected i.e if i click on all the rows..background color of everything is changed... Eventhough i click on the row. First column i.e radio button or checkbox is not set.. Please help me to solve this issue in firefox. Do let me know where my code needs to be changed... Thanks in advance!!
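
    A sketch of the most likely culprit: Firefox keeps the whitespace between <td> and <input> as a text node, so element.cells[0].firstChild is that text node rather than the radio/checkbox (IE drops those whitespace nodes, which is why it works there). Looking the input up explicitly behaves the same in both browsers:

        function firstInput(row) {
            return row.cells[0].getElementsByTagName('input')[0];
        }

        // e.g. in get_first_state() and checkit():
        //   first_state = firstInput(element).checked;
        //   firstInput(element).checked = true;
        //   if (firstInput(table[i]).checked == true && table[i] != element) { ... }

    The stuck highlight follows from the same problem: because the real checkbox state never changes, set_rowstyle() keeps painting clicked rows as selected, and those colours are what hover()/hover_out() then save and restore.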

    Read the article

  • How to detect a crashing tabbed WebBrowser control and handle it?

    - by David Eaton
    I have a desktop application (forms) with a tab control, I assign a tab and a new custom webrowser control. I open up about 10 of these tabs. Each one visits about 100 - 500 different pages. The trouble is that if 1 webbrowser control has a problem it shuts down the entire program. I want to be able to close the offending webbrowser control and open a new one in it's place. Is there any event that I need to subscribe to catch a crashing or unresponsive webbrowser control ? I am using C# on windows 7 (Forms), .NET framework v4 =============================================================== UPDATE: 1 - The Tabbed WebBrowser Example Here is the code I have and How I use the webbrowser control in the most basic way. Create a new forms project and name it SimpleWeb Add a new class and name it myWeb.cs, here is the code to use. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows.Forms; using System.Security.Policy; namespace SimpleWeb { //inhert all of webbrowser class myWeb : WebBrowser { public myWeb() { //no javascript errors this.ScriptErrorsSuppressed = true; //Something we want set? AssignEvents(); } //keep near the top private void AssignEvents() { //assign WebBrowser events to our custom methods Navigated += myWeb_Navigated; DocumentCompleted += myWeb_DocumentCompleted; Navigating += myWeb_Navigating; NewWindow += myWeb_NewWindow; } #region Events //List of events:http://msdn.microsoft.com/en-us/library/system.windows.forms.webbrowser_events%28v=vs.100%29.aspx //Fired when a new windows opens private void myWeb_NewWindow(object sender, System.ComponentModel.CancelEventArgs e) { //cancel all popup windows e.Cancel = true; //beep to let you know canceled new window Console.Beep(9000, 200); } //Fired before page is navigated (not sure if its before or during?) private void myWeb_Navigating(object sender, System.Windows.Forms.WebBrowserNavigatingEventArgs args) { } //Fired after page is navigated (but not loaded) private void myWeb_Navigated(object sender, System.Windows.Forms.WebBrowserNavigatedEventArgs args) { } //Fired after page is loaded (Catch 22 - Iframes could be considered a page, can fire more than once. Ads are good examples) private void myWeb_DocumentCompleted(System.Object sender, System.Windows.Forms.WebBrowserDocumentCompletedEventArgs args) { } #endregion //Answer supplied by mo. (modified)? public void OpenUrl(string url) { try { //this.OpenUrl(url); this.Navigate(url); } catch (Exception ex) { MessageBox.Show("Your App Crashed! Because = " + ex.ToString()); //MyApplication.HandleException(ex); } } //Keep near the bottom private void RemoveEvents() { //Remove Events Navigated -= myWeb_Navigated; DocumentCompleted -= myWeb_DocumentCompleted; Navigating -= myWeb_Navigating; NewWindow -= myWeb_NewWindow; } } } On Form1 drag a standard tabControl and set the dock to fill, you can go into the tab collection and delete the pre-populated tabs if you like. Right Click on Form1 and Select "View Code" and replace it with this code. 
using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using mshtml; namespace SimpleWeb { public partial class Form1 : Form { public Form1() { InitializeComponent(); //Load Up 10 Tabs for (int i = 0; i <= 10; i++) { newTab("Test_" + i, "http://wwww.yahoo.com"); } } private void newTab(string Title, String Url) { //Create a new Tab TabPage newTab = new TabPage(); newTab.Name = Title; newTab.Text = Title; //create webbrowser Instance myWeb newWeb = new myWeb(); //Add webbrowser to new tab newTab.Controls.Add(newWeb); newWeb.Dock = DockStyle.Fill; //Add New Tab to Tab Pages tabControl1.TabPages.Add(newTab); newWeb.OpenUrl(Url); } } } Save and Run the project. Using the answer below by mo. , you can surf the first url with no problem, but what about all the urls the user clicks on? How do we check those? I prefer not to add events to every single html element on a page, there has to be a way to run the new urls thru the function OpenUrl before it navigates without having an endless loop. Thanks.
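
    On the crash question itself: WebBrowser raises no "I crashed" event, so what follows is only a sketch of two pragmatic fallbacks, not a definitive answer: catch unhandled UI-thread exceptions before they take the process down and rebuild the offending tab, and treat a control whose navigation never completes within some timeout as hung. For routing user clicks through OpenUrl, the existing myWeb_Navigating handler already fires for every navigation, so the check can live there (guarded by a flag so you don't cancel your own re-navigation).

        // Program.cs (sketch)
        [STAThread]
        static void Main()
        {
            Application.SetUnhandledExceptionMode(UnhandledExceptionMode.CatchException);
            Application.ThreadException += (s, e) =>
            {
                // log e.Exception, then dispose the tab/browser that was active
                // and recreate it, instead of letting the whole app die
            };
            Application.Run(new Form1());
        }

    For genuine isolation (a crash inside the native browser engine or a plugin cannot always be caught this way), the usual route is one browser per child process rather than per tab.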

    Read the article

  • When the text of one input box changes, automatically update the next text boxes?

    - by James123
    Extension to my previous question http://bit.ly/c5yiVM I have 7 textboxes. If Top 1 textbox(Volume All Years) text changed, text need to be updated in next 6 inputboxes(Latest 2009 Volume to Latest 2014 Volume). I need javascript or Jquery for this. I will write Js textchanged() or focuschange() for top 1 textbox. So what should I write in focuschage() or textchanged methods(). I changed little bit in code. Now Html will like below. These textboxes have common CssClass. volumetextbox. Can we use this class and change text in all textboxes those have same CssClass. <tr id="row12_136" class="RegText"> <td style="width:420px;Padding-right:20px;">Volume All Years</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_136" tabindex="61" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td> <tr id="row12_60" class="RegText"> <td style="width:420px;Padding-right:20px;">Latest 2009 Volume</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_60" tabindex="56" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td> <tr id="row12_61" class="RegText"> <td style="width:420px;Padding-right:20px;">Latest 2010 Volume</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_61" tabindex="57" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td> <tr id="row12_62" class="RegText"> <td style="width:420px;Padding-right:20px;">Latest 2011 Volume</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_62" tabindex="58" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td> <tr id="row12_63" class="RegText"> <td style="width:420px;Padding-right:20px;">Latest 2012 Volume</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_63" tabindex="59" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td> <tr id="row12_64" class="RegText"> <td style="width:420px;Padding-right:20px;">Latest 2013 Volume</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_64" tabindex="60" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td> <tr id="row12_65" class="RegText"> <td style="width:420px;Padding-right:20px;">Latest 2014 Volume</td> <td style="width:420px;Padding-left:0px;"> <input name="12_136" type="text" maxlength="255" id="12_65" tabindex="61" title="Volume All Years" class="volumetextbox" OnKeyPress="javascript:FocusChange();" style="width:300px;" /> </td>
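
    Since the question mentions jQuery and every box shares the volumetextbox class, a sketch along these lines would do it (the first element of that class is the "Volume All Years" box in the posted markup):

        $(function () {
            var boxes = $('.volumetextbox');
            boxes.first().bind('keyup change', function () {
                boxes.not(this).val(this.value);   // copy into the six year boxes
            });
        });

    A plain-JavaScript version could do the same from the existing FocusChange()/OnKeyPress hook. Separately, all seven inputs currently share name="12_136", which will make their posted values indistinguishable on the server.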

    Read the article

  • Change English numbers to Persian and vice versa in MVC (HttpModule)?

    - by Mohammad
    I wanna change all English numbers to Persian for showing to users. and change them to English numbers again for giving all requests (Postbacks) e.g: we have something like this in view IRQ170, I wanna show IRQ??? to users and give IRQ170 from users. I know, I have to use Httpmodule, But I don't know how ? Could you please guide me? Edit : Let me describe more : I've written the following http module : using System; using System.Collections.Specialized; using System.Diagnostics; using System.IO; using System.Text; using System.Text.RegularExpressions; using System.Web; using System.Web.UI; using Smartiz.Common; namespace Smartiz.UI.Classes { public class PersianNumberModule : IHttpModule { private StreamWatcher _watcher; #region Implementation of IHttpModule /// <summary> /// Initializes a module and prepares it to handle requests. /// </summary> /// <param name="context">An <see cref="T:System.Web.HttpApplication"/> that provides access to the methods, properties, and events common to all application objects within an ASP.NET application </param> public void Init(HttpApplication context) { context.BeginRequest += ContextBeginRequest; context.EndRequest += ContextEndRequest; } /// <summary> /// Disposes of the resources (other than memory) used by the module that implements <see cref="T:System.Web.IHttpModule"/>. /// </summary> public void Dispose() { } #endregion private void ContextBeginRequest(object sender, EventArgs e) { HttpApplication context = sender as HttpApplication; if (context == null) return; _watcher = new StreamWatcher(context.Response.Filter); context.Response.Filter = _watcher; } private void ContextEndRequest(object sender, EventArgs e) { HttpApplication context = sender as HttpApplication; if (context == null) return; _watcher = new StreamWatcher(context.Response.Filter); context.Response.Filter = _watcher; } } public class StreamWatcher : Stream { private readonly Stream _stream; private readonly MemoryStream _memoryStream = new MemoryStream(); public StreamWatcher(Stream stream) { _stream = stream; } public override void Flush() { _stream.Flush(); } public override int Read(byte[] buffer, int offset, int count) { int bytesRead = _stream.Read(buffer, offset, count); string orgContent = Encoding.UTF8.GetString(buffer, offset, bytesRead); string newContent = orgContent.ToEnglishNumber(); int newByteCountLength = Encoding.UTF8.GetByteCount(newContent); Encoding.UTF8.GetBytes(newContent, 0, Encoding.UTF8.GetByteCount(newContent), buffer, 0); return newByteCountLength; } public override void Write(byte[] buffer, int offset, int count) { string strBuffer = Encoding.UTF8.GetString(buffer, offset, count); MatchCollection htmlAttributes = Regex.Matches(strBuffer, @"(\S+)=[""']?((?:.(?![""']?\s+(?:\S+)=|[>""']))+.)[""']?", RegexOptions.IgnoreCase | RegexOptions.Multiline); foreach (Match match in htmlAttributes) { strBuffer = strBuffer.Replace(match.Value, match.Value.ToEnglishNumber()); } MatchCollection scripts = Regex.Matches(strBuffer, "<script[^>]*>(.*?)</script>", RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace); foreach (Match match in scripts) { MatchCollection values = Regex.Matches(match.Value, @"([""'])(?:(?=(\\?))\2.)*?\1", RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace); foreach (Match stringValue in values) { strBuffer = strBuffer.Replace(stringValue.Value, stringValue.Value.ToEnglishNumber()); } } MatchCollection styles = 
Regex.Matches(strBuffer, "<style[^>]*>(.*?)</style>", RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Multiline | RegexOptions.IgnorePatternWhitespace); foreach (Match match in styles) { strBuffer = strBuffer.Replace(match.Value, match.Value.ToEnglishNumber()); } byte[] data = Encoding.UTF8.GetBytes(strBuffer); _memoryStream.Write(data, offset, count); _stream.Write(data, offset, count); } public override string ToString() { return Encoding.UTF8.GetString(_memoryStream.ToArray()); } #region Rest of the overrides public override bool CanRead { get { throw new NotImplementedException(); } } public override bool CanSeek { get { throw new NotImplementedException(); } } public override bool CanWrite { get { throw new NotImplementedException(); } } public override long Seek(long offset, SeekOrigin origin) { throw new NotImplementedException(); } public override void SetLength(long value) { throw new NotImplementedException(); } public override long Length { get { throw new NotImplementedException(); } } public override long Position { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } #endregion } } It works well, but It converts all numbers in css and scripts files to Persian and it causes error.
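
    A sketch of one guard for the css/js problem: install the filter only for HTML responses, once the content type is known, instead of wrapping the stream in both BeginRequest and EndRequest (the current module wraps it twice). The exact pipeline event is a judgment call; ReleaseRequestState is late enough for ContentType to be set and early enough to still swap the filter.

        public void Init(HttpApplication context)
        {
            context.ReleaseRequestState += (sender, e) =>
            {
                var app = (HttpApplication)sender;
                if (app.Response.ContentType == "text/html")
                    app.Response.Filter = new StreamWatcher(app.Response.Filter);
            };
        }

    A separate bug worth noting: the Write override converts the text (which can change its byte length) but still writes with the original offset and count, so converted pages can be truncated or padded; writing data, 0, data.Length is safer.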

    Read the article

  • Need help displaying a table dynamically based on a drop down date selection

    - by Gideon
    I am designing a job rota planner for a company and need help displaying a dynamic table containing the staff details. I have the following tables in MySQL database: Staff, Event, and Job. The staff table holds staff details (staffed, name, address...etc), the Event table (eventide, eventName, Fromdate, Todate...etc) and the Job table holds (Jobid, Jobdate, Eventid(fk), Staffid (fk)). I need to dynamically display the available staff list from the staff table when the user selects the event and the date (3 drop downs: date, month, and year) from a PHP form. I need to display staff members that have not been assigned work on the selected date by checking the Jobdate in the Job table. I have been at this for all day and can't get around it. I am still learning PHP and would surely appreciate any help I can get. My current code displays all staff members when an event is selected: //Create the day pull-down menu. $days = range (1, 31); echo "<Select Name=day Value=''><Option>Day</option>"; foreach ($days as $value) { echo '<option value="'.$value.'">'.$value.'</option>\n'; } echo "</Select>"; echo "</td><td>"; //Create the month pull-down menu echo "<Select Name=month Value=''><Option>Month</option>"; echo "<option value='01'>Jan</option>"; echo "<option value='02'>Feb</option>"; echo "<option value='03'>Mar</option>"; echo "<option value='04'>Apr</option>"; echo "<option value='05'>May</option>"; echo "<option value='06'>Jun</option>"; echo "<option value='07'>Jul</option>"; echo "<option value='08'>Aug</option>"; echo "<option value='09'>Sep</option>"; echo "<option value='10'>Oct</option>"; echo "<option value='11'>Nov</option>"; echo "<option value='12'>Dec</option>"; echo "</select>"; echo "</td><td>"; //Create the year pull-down menu $currentYear = date ("Y"); $years = range ($currentYear, 2020); echo "<Select Name=year Value=''><Option>Year</option>"; foreach ($years as $value) { echo '<option value="'.$value.'">'.$value.'</option>\n'; } echo "</Select>"; echo "</td></tr></table>"; echo "</td><td>"; //echo "<img src='../ETMSimages/etms_staff.png'</img></td><td>"; //construct the available staff list $staffsql = "SELECT StaffId, LastName, FirstName FROM Staff order by StaffId"; $staffResult = mysql_query($staffsql); if ($staffResult){ echo "<p><table cellspacing='1' cellpadding='3'>"; echo "<th colspan=6>List of Available Staff</th>"; echo "</tr><tr><th> Select</th><th>Id</th><th></th><th>Last Name </th><th></th><th>First Name </th></tr>"; while ($staffarray = mysql_fetch_array($staffResult)) { echo "<tr onMouseOver= this.bgColor = 'red' onMouseOut =this.bgColor = 'white' bgcolor= '#FFFFFF'> <td align=center><input type='checkbox' name='selectbox[]' id='selectbox[]' value=".$staffarray['StaffId']."> </td><td align=left>".$staffarray['StaffId']." </td><td>&nbsp&nbsp</td><td align=center>".$staffarray['LastName']." </td><td>&nbsp&nbsp</td><td align=center>".$staffarray['FirstName']." </td></tr>"; } echo "</table>"; } else { echo "<br> The Staff list can not be displayed!"; } echo "</td></tr>"; echo "<tr><td></td>"; echo "<td align=center><input type='submit' name='Submit' value='Assign Staff'>&nbsp&nbsp"; echo "<input type='reset' value='Start Over'>"; echo "</td></tr>"; echo "</table>";
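
    A sketch of the missing piece, with the table and column names as given in the question: build the date from the three drop-downs and exclude anyone who already has a job on it with a NOT IN subquery, then keep the existing display loop.

        // after the form posts day/month/year and the event
        $date = sprintf('%04d-%02d-%02d', (int)$_POST['year'], (int)$_POST['month'], (int)$_POST['day']);

        $staffsql = "SELECT StaffId, LastName, FirstName
                     FROM Staff
                     WHERE StaffId NOT IN (SELECT Staffid FROM Job WHERE Jobdate = '$date')
                     ORDER BY StaffId";
        $staffResult = mysql_query($staffsql);

    (The old mysql_* API is kept only to match the question's code; casting the inputs — or better, a prepared statement — is what keeps the query safe.)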

    Read the article

  • How do you override a class that is called by another class via parent::method?

    - by dan.codes
    I am trying to extend Mage_Catalog_Block_Product_View I have it setup in my local directory as its own module and everything works fine, I wasn't getting the results that I wanted. I then saw that another class extended that class as well. The method I am trying to override is the protected function _prepareLayout() This is the function class Mage_Review_Block_Product_View extends Mage_Catalog_Block_Product_View protected function _prepareLayout() { $this-&gt;getLayout()-&gt;createBlock('catalog/breadcrumbs'); $headBlock = $this-&gt;getLayout()-&gt;getBlock('head'); if ($headBlock) { $title = $this-&gt;getProduct()-&gt;getMetaTitle(); if ($title) { $headBlock-&gt;setTitle($title); } $keyword = $this-&gt;getProduct()-&gt;getMetaKeyword(); $currentCategory = Mage::registry('current_category'); if ($keyword) { $headBlock-&gt;setKeywords($keyword); } elseif($currentCategory) { $headBlock-&gt;setKeywords($this-&gt;getProduct()-&gt;getName()); } $description = $this-&gt;getProduct()-&gt;getMetaDescription(); if ($description) { $headBlock-&gt;setDescription( ($description) ); } else { $headBlock-&gt;setDescription( $this-&gt;getProduct()-&gt;getDescription() ); } } return parent::_prepareLayout(); } I am trying to modify it just a bit with the following, keep in mind I know there is a title prefix and suffix but I needed it only for the product page and also I needed to add text to the description. class MyCompany_Catalog_Block_Product_View extends Mage_Catalog_Block_Product_View protected function _prepareLayout() { $storeId = Mage::app()-&gt;getStore()-&gt;getId(); $this-&gt;getLayout()-&gt;createBlock('catalog/breadcrumbs'); $headBlock = $this-&gt;getLayout()-&gt;getBlock('head'); if ($headBlock) { $title = $this-&gt;getProduct()-&gt;getMetaTitle(); if ($title) { if($storeId == 2){ $title = "Pool Supplies Fast - " .$title; $headBlock-&gt;setTitle($title); } $headBlock-&gt;setTitle($title); }else{ $path = Mage::helper('catalog')-&gt;getBreadcrumbPath(); foreach ($path as $name =&gt; $breadcrumb) { $title[] = $breadcrumb['label']; } $newTitle = "Pool Supplies Fast - " . join($this-&gt;getTitleSeparator(), array_reverse($title)); $headBlock-&gt;setTitle($newTitle); } $keyword = $this-&gt;getProduct()-&gt;getMetaKeyword(); $currentCategory = Mage::registry('current_category'); if ($keyword) { $headBlock-&gt;setKeywords($keyword); } elseif($currentCategory) { $headBlock-&gt;setKeywords($this-&gt;getProduct()-&gt;getName()); } $description = $this-&gt;getProduct()-&gt;getMetaDescription(); if ($description) { if($storeId == 2){ $description = "Pool Supplies Fast - ". $this-&gt;getProduct()-&gt;getName() . " - " . $description; $headBlock-&gt;setDescription( ($description) ); }else{ $headBlock-&gt;setDescription( ($description) ); } } else { if($storeId == 2){ $description = "Pool Supplies Fast: ". $this-&gt;getProduct()-&gt;getName() . " - " . $this-&gt;getProduct()-&gt;getDescription(); $headBlock-&gt;setDescription( ($description) ); }else{ $headBlock-&gt;setDescription( $this-&gt;getProduct()-&gt;getDescription() ); } } } return Mage_Catalog_Block_Product_Abstract::_prepareLayout(); } This executs fine but then I notice that the following class Mage_Review_Block_Product_View_List extends which extends Mage_Review_Block_Product_View and that extends Mage_Catalog_Block_Product_View as well. 
Inside this class they call the _prepareLayout as well and call the parent with parent::_prepareLayout() class Mage_Review_Block_Product_View_List extends Mage_Review_Block_Product_View protected function _prepareLayout() { parent::_prepareLayout(); if ($toolbar = $this-&gt;getLayout()-&gt;getBlock('product_review_list.toolbar')) { $toolbar-&gt;setCollection($this-&gt;getReviewsCollection()); $this-&gt;setChild('toolbar', $toolbar); } return $this; } So obviously this just calls the same class I am extending and runs the function I am overiding but it doesn't get to my class because it is not in my class hierarchy and since it gets called after my class all the stuff in the parent class override what I have set. I'm not sure about the best way to extend this type of class and method, there has to be a good way to do this, I keep finding I am running into issues when trying to overide these prepare methods that are derived from the abstract classes, there seems to be so many classes overriding them and calling parent::method. What is the best way to override these functions?
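
    The short answer, as a sketch: a Magento rewrite replaces the class created for a block alias; it does not change what other classes inherit from. So Mage_Review_Block_Product_View and Mage_Review_Block_Product_View_List keep extending the core catalog block, and their parent::_prepareLayout() keeps running the core code after yours. Each alias that Magento actually instantiates therefore needs its own rewrite (module and class names below are assumptions), with the shared title/description logic pulled into a helper so it isn't duplicated:

        <!-- app/code/local/MyCompany/Catalog/etc/config.xml (sketch) -->
        <global>
            <blocks>
                <review>
                    <rewrite>
                        <product_view>MyCompany_Review_Block_Product_View</product_view>
                        <product_view_list>MyCompany_Review_Block_Product_View_List</product_view_list>
                    </rewrite>
                </review>
            </blocks>
        </global>

    An event observer or a layout-XML update on the head block is the other common route when only the meta title/description needs changing.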

    Read the article

  • Is multithreading the right way to go for my case?

    - by Julien Lebosquain
    Hello, I'm currently designing a multi-client / server application. I'm using plain good old sockets because WCF or similar technology is not what I need. Let me explain: it isn't the classical case of a client simply calling a service; all clients can 'interact' with each other by sending a packet to the server, which will then do some action, and possible re-dispatch an answer message to one or more clients. Although doable with WCF, the application will get pretty complex with hundreds of different messages. For each connected client, I'm of course using asynchronous methods to send and receive bytes. I've got the messages fully working, everything's fine. Except that for each line of code I'm writing, my head just burns because of multithreading issues. Since there could be around 200 clients connected at the same time, I chose to go the fully multithreaded way: each received message on a socket is immediately processed on the thread pool thread it was received, not on a single consumer thread. Since each client can interact with other clients, and indirectly with shared objects on the server, I must protect almost every object that is mutable. I first went with a ReaderWriterLockSlim for each resource that must be protected, but quickly noticed that there are more writes overall than reads in the server application, and switched to the well-known Monitor to simplify the code. So far, so good. Each resource is protected, I have helper classes that I must use to get a lock and its protected resource, so I can't use an object without getting a lock. Moreover, each client has its own lock that is entered as soon as a packet is received from its socket. It's done to prevent other clients from making changes to the state of this client while it has some messages being processed, which is something that will happen frequently. Now, I don't just need to protect resources from concurrent accesses. I must keep every client in sync with the server for some collections I have. One tricky part that I'm currently struggling with is the following: I have a collection of clients. Each client has its own unique ID. When a client connects, it must receive the IDs of every connected client, and each one of them must be notified of the newcomer's ID. When a client disconnects, every other client must know it so that its ID is no longer valid for them. Every client must always have, at a given time, the same clients collection as the server so that I can assume that everybody knows everybody. This way if I'm sending a message to client #1 telling "Client #2 has done something", I know that it will always be correctly interpreted: Client 1 will never wonder "but who is Client 2 anyway?". 
My first attempt for handling the connection of a new client (let's call it X) was this pseudo-code (remember that newClient is already locked here): lock (clients) { foreach (var client in clients) { lock (client) { client.Send("newClient with id X has connected"); } } clients.Add(newClient); newClient.Send("the list of other clients"); } Now imagine that in the same time, another client has sent a packet that translates into a message that must be broadcasted to every connected client, the pseudo-code will be something like this (remember that the current client - let's call it Y - is already locked here): lock (clients) { foreach (var client in clients) { lock (client) { client.Send("something"); } } } An obvious deadlock occurs here: on one thread X is locked, the clients lock has been entered, started looping through the clients, and at one moment must get Y's lock... which is already acquired on the second thread, itself waiting for the clients collection lock to be released! This is not the only case like this in the server application. There are other collections which must be kept in sync with the clients, some properties on a client can be changed by another one, etc. I tried other types of locks, lock-free mechanisms and a bunch of other things. Either there were obvious deadlocks when I'm using too much locks for safety, or obvious race conditions otherwise. When I finally find a good middle point between the two, it usually comes with very subtle race conditions / dead locks and other multi-threading issues... my head hurts very quickly since for any single line of code I'm writing I have to review almost the whole application to ensure everything will behave correctly with any number of threads. So here's my final question: how would you resolve this specific case, the general case, and more importantly: aren't I going the wrong way here? I have little problems with the .NET framework, C#, simple concurrency or algorithms in general. Still, I'm lost here. I know I could use only one thread processing the incoming requests and everything will be fine. However, that won't scale well at all with more clients... But I'm thinking more and more to go this simple way. What do you think? Thanks in advance to you, StackOverflow people which have taken the time to read this huge question. I really had to explain the whole context if I want to get some help.
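
    For the specific deadlock shown, one sketch of a way out: never hold the collection lock while taking a per-client lock. Snapshot the list under the collection lock, release it, then fan out (Client is a stand-in for whatever the connection type is actually called):

        List<Client> others;
        lock (clients)
        {
            others = new List<Client>(clients);   // everyone connected before X
            clients.Add(newClient);
        }
        foreach (var client in others)
        {
            lock (client) { client.Send("newClient with id X has connected"); }
        }
        newClient.Send("the list of other clients"); // newClient's lock is already held by this thread

    The broadcast case becomes the same pattern: copy under lock(clients), send outside it. More generally, making Send enqueue onto a per-client outbound queue drained by a single writer keeps per-client message order without ever doing I/O while holding a lock, which also addresses who sees the connect/disconnect notifications in what order.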

    Read the article

  • Many-to-one relation exception due to closed session after loading

    - by Nick Thissen
    Hi, I am using NHibernate (version 1.2.1) for the first time, so I wrote a simple test application (an ASP.NET project) that uses it. In my database I have two tables, Persons and Categories, and each person gets one category — seems easy enough.

        | Persons      |     | Categories   |
        |--------------|     |--------------|
        | Id (PK)      |     | Id (PK)      |
        | Firstname    |     | CategoryName |
        | Lastname     |     | CreatedTime  |
        | CategoryId   |     | UpdatedTime  |
        | CreatedTime  |     | Deleted      |
        | UpdatedTime  |
        | Deleted      |

    The Id, CreatedTime, UpdatedTime and Deleted attributes are a convention I use in all my tables, so I have tried to bring this fact into an additional abstraction layer. I have a project DatabaseFramework which has three important classes:

    Entity: an abstract class that defines these four properties. All 'entity objects' (in this case Person and Category) must inherit Entity.
    IEntityManager: a generic interface (type parameter an Entity) that defines methods like Load, Insert, Update, etc.
    NHibernateEntityManager: an implementation of this interface using NHibernate to do the loading, saving, etc.

    Now, the Person and Category classes are straightforward; they just define the attributes of the tables (keeping in mind that four of them are in the base Entity class). Since the Persons table is related to the Categories table via the CategoryId attribute, the Person class has a Category property that holds the related category. However, in my web page I will also need the name of this category (CategoryName), for databinding purposes for example. So I created an additional property CategoryName that returns the CategoryName property of the current Category, or an empty string if the Category is null:

        Namespace Database
            Public Class Person
                Inherits DatabaseFramework.Entity

                Public Overridable Property Firstname As String
                Public Overridable Property Lastname As String
                Public Overridable Property Category As Category

                Public Overridable ReadOnly Property CategoryName As String
                    Get
                        Return If(Me.Category Is Nothing, _
                                  String.Empty, _
                                  Me.Category.CategoryName)
                    End Get
                End Property
            End Class
        End Namespace

    I am mapping the Person class using this mapping file. The many-to-one relation was suggested by Yads in another thread:

        <id name="Id" column="Id" type="int" unsaved-value="0">
          <generator class="identity" />
        </id>
        <property name="CreatedTime" type="DateTime" not-null="true" />
        <property name="UpdatedTime" type="DateTime" not-null="true" />
        <property name="Deleted" type="Boolean" not-null="true" />
        <property name="Firstname" type="String" />
        <property name="Lastname" type="String" />
        <many-to-one name="Category" column="CategoryId" class="NHibernateWebTest.Database.Category, NHibernateWebTest" />

    (I can't get it to show the root node, this forum hides it; I don't know how to escape the HTML-like tags.)

    The final important detail is the Load method of the NHibernateEntityManager implementation (this is in C# as it's in a different project, sorry about that). I simply open a new ISession (ISessionFactory.OpenSession) in the GetSession method and then use it to fill an EntityCollection(Of TEntity), which is just a collection inheriting System.Collections.ObjectModel.Collection(Of T):

        public virtual EntityCollection<TEntity> Load()
        {
            using (ISession session = this.GetSession())
            {
                var entities = session
                    .CreateCriteria(typeof(TEntity))
                    .Add(Expression.Eq("Deleted", false))
                    .List<TEntity>();
                return new EntityCollection<TEntity>(entities);
            }
        }

    The idea of this Load method is that I get a fully functional collection of Persons, with all their properties set to the correct values — including the Category property, so the CategoryName property should return the correct name. However, that is not the case. When I try to data-bind the result of this Load method to a GridView in ASP.NET, it tells me this:

        Property accessor 'CategoryName' on object 'NHibernateWebTest.Database.Person' threw the following exception:
        'Could not initialize proxy - the owning Session was closed.'

    The exception occurs on the DataBind method call here:

        public virtual void LoadGrid()
        {
            if (this.Grid == null) return;
            this.Grid.DataSource = this.Manager.Load();
            this.Grid.DataBind();
        }

    Well, of course the session is closed — I closed it via the using block. Isn't that the correct approach? Should I keep the session open, and for how long? Can I close it after the DataBind method has run? In any case, I'd really like my Load method to just return a functional collection of items. It seems to me that the Category is only fetched when it is required (e.g. when the GridView wants to read CategoryName, which wants to read the Category property), but at that time the session is closed. Is that reasoning correct? How do I stop this behavior? Or shouldn't I? And what should I do otherwise? Thanks!
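
    A sketch of the usual answers (the exact NHibernate 1.2 calls here are from memory, so treat them as an assumption): either keep the session open for the whole request (session-per-request) so the Category proxy can still load, or fetch the association eagerly before the using block closes the session, e.g. via the criteria:

        using (ISession session = this.GetSession())
        {
            var entities = session.CreateCriteria(typeof(TEntity))
                .Add(Expression.Eq("Deleted", false))
                .SetFetchMode("Category", FetchMode.Eager)   // join Category up front
                .List<TEntity>();
            return new EntityCollection<TEntity>(entities);
        }

    The fetch path is entity-specific, so a generic Load would need it passed in (or configured per entity); marking the Category class or the many-to-one as lazy="false" in the mapping is the equivalent mapping-level switch.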

    Read the article

  • Using the TinyButStrong templating system, how do I have a (secondary) MySQL query (sub-block), inside a

    - by desbest
    This is the code that works well. $TBS->MergeBlock("items",$conn,"SELECT * FROM items WHERE categoryid = $getcategory"); and it has a good job of looping the items with no problem. <tr id="itemid_[items.id;block=tr]"> <td height="30"> <!-- <img src="move.png" align="left" style="margin-right: 8px;"> --> <div class="lowlight">[items.title] </div> </td> <td height="30"> <div class="editable" itemid="$item[id]">[items.quantity]</div> </td> <td height="30"> <!-- <img src="pencil.png" id="edititem"width="24" height="24"> --> &nbsp; &nbsp; &nbsp; &nbsp;<img src="icons/folder.png" id="category_[items.categoryid]";" class="changecategory" categoryid="[items.categoryid]" itemid="[items.id]" itemtitle="[items.title]" title="Change category" width="24" height="24"> &nbsp; &nbsp; &nbsp; &nbsp; <a href="index.php?category=[var.getcategory]&deleteitem=[items.id]" title="Delete item" onclick="return confirm('Are you sure you want to delete [items.title]?')"><img src="icons/trash.png" id="deleteitem" width="24" height="24"></a> </td> </tr> This is where the problem lies. In the table row I would like a secondary mysql merge block based on the id column that the primary (items) mysql table has. This means that ideally I would love to do this. $TBS->MergeBlock("fields",$conn,"SELECT * FROM items WHERE categoryid = [items.id]"); So then when I used [fields.value;block=div] inside the table row, it would pick up the a row from the fields table based on the primary mysql query. It would recognise what the id is for the merged block and perform a mysql query based on a variable that that block has. So that's what I would like to do and that's what I call a secondary mysql merge block. This is what I attempted by using the sub-blocks example and modifying it. index.html <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"> <html> <head> <title>TinyButStrong - Examples - subblocks</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"> <link href="tbs_us_examples_0styles.css" rel="stylesheet" type="text/css"> <style type="text/css"> <!-- .border02 { border: 2px solid #39C; } .border03 { border: 2px solid #396; } --> </style> </head> <body> <table width="400" align="center" cellpadding="5" cellspacing="0" style="border-collapse:collapse;"> <tr> <td class="border02"><strong>Item name:</strong> [item.title;block=tr] , <strong>Item quantity:</strong> [item.quantity]<br> <table border="1" align="center" cellpadding="2" cellspacing="0"> <tr bgcolor="#CACACA"> <td width="30"><u>Position</u></td> <td width="150"><u>Attribute</u></td> <td width="50"><u>Value</u></td> <td width="100"><div align="center"><u>Date Number</u></div></td> </tr> <tr bgcolor="#F0F0F0"> <td>[field.#]</td> <td>[field.attribute;block=tr;p1=[item.id]] <br><b>[item.id]</b> </td> <td><div align="right">[field.value]</div></td> <td><div align="center">[field.datenumber;frm='mm-dd-yyyy']</div></td> </tr> </table> </td> </tr> </table> </body> </html> index.php <?php require_once('../stockman-v3/tbs_class.php'); require_once('../stockman-v3/config.php'); // Create data $TeamList[0] = array('team'=>'Eagle' ,'total'=>'458'); //{ $TeamList[0]['matches'][] = array('town'=>'London','score'=>'253','date'=>'1999-11-30'); $TeamList[0]['matches'][] = array('town'=>'Paris' ,'score'=>'145','date'=>'2002-07-24'); $TeamList[1] = array('team'=>'Goonies','total'=>'281'); $TeamList[1]['matches'][] = array('town'=>'New-York','score'=>'365','date'=>'2001-12-25'); $TeamList[1]['matches'][] = array('town'=>'Madrid' 
,'score'=>'521','date'=>'2004-01-14'); // } $TeamList[2] = array('team'=>'MIB' ,'total'=>'615'); // { $TeamList[2]['matches'][] = array('town'=>'Dallas' ,'score'=>'362','date'=>'2001-01-02'); $TeamList[2]['matches'][] = array('town'=>'Lyon' ,'score'=>'321','date'=>'2002-11-17'); $TeamList[2]['matches'][] = array('town'=>'Washington','score'=>'245','date'=>'2003-08-24'); // } $TBS = new clsTinyButStrong; $TBS->LoadTemplate('index3.html'); // Automatic subblock $TBS->MergeBlock("item",$conn,"SELECT * FROM items"); $TBS->MergeBlock("field",$conn,"SELECT * FROM fields WHERE itemid='[%p1%]' "); $TBS->MergeBlock('teamKEY',$TeamList); // Subblock with a dynamic query //$TBS->MergeBlock('match','array','TeamList[%p1%][matches]'); $TBS->MergeBlock("match",$conn,"SELECT * FROM fields WHERE id='[%p1%]' "); $TBS->Show(); ?> please note what i am trying to do If I was to change $TBS->MergeBlock("field",$conn,"SELECT * FROM fields WHERE itemid='[%p1%]' "); to... $TBS->MergeBlock("field",$conn,"SELECT * FROM fields"); It would show all the items.

    Read the article

  • Databinding in combo box

    - by muralekarthick
    Hi I have two forms, and a class, queries return in Stored procedure. Stored Procedure: ALTER PROCEDURE [dbo].[Payment_Join] @reference nvarchar(20) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; -- Insert statements for procedure here SELECT p.iPaymentID,p.nvReference,pt.nvPaymentType,p.iAmount,m.nvMethod,u.nvUsers,p.tUpdateTime FROM Payment p, tblPaymentType pt, tblPaymentMethod m, tblUsers u WHERE p.nvReference = @reference and p.iPaymentTypeID = pt.iPaymentTypeID and p.iMethodID = m.iMethodID and p.iUsersID = u.iUsersID END payment.cs using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Data; using System.Data.SqlClient; using System.Windows.Forms; namespace Finance { class payment { string connection = global::Finance.Properties.Settings.Default.PaymentConnectionString; #region Fields int _paymentid = 0; string _reference = string.Empty; string _paymenttype; double _amount = 0; string _paymentmethod; string _employeename; DateTime _updatetime = DateTime.Now; #endregion #region Properties public int paymentid { get { return _paymentid; } set { _paymentid = value; } } public string reference { get { return _reference; } set { _reference = value; } } public string paymenttype { get { return _paymenttype; } set { _paymenttype = value; } } public string paymentmethod { get { return _paymentmethod; } set { _paymentmethod = value; } } public double amount { get { return _amount;} set { _amount = value; } } public string employeename { get { return _employeename; } set { _employeename = value; } } public DateTime updatetime { get { return _updatetime; } set { _updatetime = value; } } #endregion #region Constructor public payment() { } public payment(string refer) { reference = refer; } public payment(int paymentID, string Reference, string Paymenttype, double Amount, string Paymentmethod, string Employeename, DateTime Time) { paymentid = paymentID; reference = Reference; paymenttype = Paymenttype; amount = Amount; paymentmethod = Paymentmethod; employeename = Employeename; updatetime = Time; } #endregion #region Methods public void Save() { try { SqlConnection connect = new SqlConnection(connection); SqlCommand command = new SqlCommand("payment_create", connect); command.CommandType = CommandType.StoredProcedure; command.Parameters.Add(new SqlParameter("@reference", reference)); command.Parameters.Add(new SqlParameter("@paymenttype", paymenttype)); command.Parameters.Add(new SqlParameter("@amount", amount)); command.Parameters.Add(new SqlParameter("@paymentmethod", paymentmethod)); command.Parameters.Add(new SqlParameter("@employeename", employeename)); command.Parameters.Add(new SqlParameter("@updatetime", updatetime)); connect.Open(); command.ExecuteScalar(); connect.Close(); } catch { } } public void Load(string reference) { try { SqlConnection connect = new SqlConnection(connection); SqlCommand command = new SqlCommand("Payment_Join", connect); command.CommandType = CommandType.StoredProcedure; command.Parameters.Add(new SqlParameter("@Reference", reference)); //MessageBox.Show("ref = " + reference); connect.Open(); SqlDataReader reader = command.ExecuteReader(); while (reader.Read()) { this.reference = Convert.ToString(reader["nvReference"]); // MessageBox.Show(reference); // MessageBox.Show("here"); // MessageBox.Show("payment type id = " + reader["nvPaymentType"]); // MessageBox.Show("here1"); this.paymenttype = Convert.ToString(reader["nvPaymentType"]); // 
MessageBox.Show(paymenttype.ToString()); this.amount = Convert.ToDouble(reader["iAmount"]); this.paymentmethod = Convert.ToString(reader["nvMethod"]); this.employeename = Convert.ToString(reader["nvUsers"]); this.updatetime = Convert.ToDateTime(reader["tUpdateTime"]); } reader.Close(); } catch (Exception ex) { MessageBox.Show("Check it again" + ex); } } #endregion } } I have already bound the combo box items through the designer. When I run the application, only the reference is populated on form 2, and the combo box keeps showing its designer items rather than the particular value that was fetched. I'm new to C#, so please help me get familiar with the right approach.
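A possible direction, shown only as a minimal sketch (the control names cmbPaymentType, cmbPaymentMethod and txtAmount are hypothetical, not from the post): after Load() has filled the payment object, the fetched values still need to be pushed into the form 2 controls, because a ComboBox whose items were added in the designer never selects anything on its own.

private void ShowPayment(string reference)
{
    payment p = new payment();
    p.Load(reference);

    // Select the fetched value among the designer-added items;
    // fall back to Text so the value is at least visible if it is not in Items.
    cmbPaymentType.SelectedItem = p.paymenttype;
    if (cmbPaymentType.SelectedItem == null)
        cmbPaymentType.Text = p.paymenttype;

    cmbPaymentMethod.SelectedItem = p.paymentmethod;
    txtAmount.Text = p.amount.ToString("0.00");
}

Note that assigning SelectedItem only works on an exact, case-sensitive match against the strings added in the designer, which is a common reason the selection silently stays empty.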

    Read the article

  • Symfony2 same form, different entities NOT related

    - by user1381537
I'm trying to write one form that submits against a MySQL DB, but I can't get it working. I've tried a lot of things (separate forms, adding a field with ->add('foo', new foo())), and running plain SQL from a normal HTML form is my only working solution so far, which is obviously not the best. This is my DB structure: As you can see, I need to insert the comments textarea into ticketcomments along with the user who wrote it, etc.; on crmentity, the description field. Then on ticketcf, these are the fields I need to submit from the form (listed here because the column names don't make it obvious): tcf.cf594 AS Type, tcf.cf675 AS Suscription, tcf.cf770 AS ID_PRODUCT, tcf.cf746 AS NotificationDate, tcf.cf747 AS ResponseDate, tcf.cf748 AS ResolutionDate. And, of course, every table needs to share the same ticketid for the submitted form, so we can retrieve it with one simple query. It would be easy to do with plain SQL instead of DQL and Symfony2 forms, but that is not a good way to do it. Also, here's my "Ticket list" query, in case it makes things clearer... SELECT t.ticketNo AS Ticket, t.title AS Asunto, t.status AS Estado, t.updateLog AS LOG, t.hours AS Horas, t.solution AS Solucion, t.priority AS Prioridad, tcf.cf594 AS Tipo, tcf.cf675 AS Suscripcion, tcf.cf770 AS IDPROD, tcf.cf746 AS F_Noti, tcf.cf747 AS F_Resp, tcf.cf748 AS F_Reso, CONCAT (cd.firstname, cd.lastname) AS Contacto, crm.description AS Descripcion, crm.crmid AS id FROM WbsGoclientsBundle:VtigerTroubletickets t INNER JOIN WbsGoclientsBundle:VtigerTicketcf tcf WITH t.ticketid = tcf.ticketid INNER JOIN WbsGoclientsBundle:VtigerContactdetails cd WITH t.parentId = cd.contactid INNER JOIN WbsGoclientsBundle:VtigerCrmentity crm WITH t.ticketid = crm.crmid WHERE t.parentId IN ( SELECT cd1.contactid FROM WbsGoclientsBundle:VtigerContactdetails cd1 WHERE cd1.accountid = ( SELECT cd2.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd2 WHERE cd2.contactid = :contactid)) AND t.status <> \'Closed\' The "Ticket details" query (not in DQL format yet, only SQL) is very simple; it only retrieves the comments field and createdtime from ticketcomments appended to this query, so we have all the fields... Thank you. This is a test form, using troubletickets and ticketcomments; it's returning errors because I can't set a comments field, since troubletickets doesn't have it, but I need that field to be submitted to ticketcomments ... 
VtigerTicketcommentsType <?php namespace WbsGo\clientsBundle\Form\Type; use Symfony\Component\Form\AbstractType, Symfony\Component\Form\FormBuilderInterface; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class VtigerTicketcommentsType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('ticketid') ->add('comments') ->add('ownerid') ->add('ownertype') ->add('createdtype') ; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTicketcomments' )); } public function getName() { return 'comments'; } } OpenTicketType.php <?php namespace WbsGo\clientsBundle\Form; use Symfony\Component\Form\AbstractType, Symfony\Component\Form\FormBuilderInterface ; use WbsGo\clientsBundle\Form\Type\VtigerTicketcommentsType; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class OpenTicketType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('title') ->add('priority') ->add('solution') ->add('comments', 'collection', array( 'type' => new VtigerTicketcommentsType() )) ; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'data_class' => 'WbsGo\clientsBundle\Entity\VtigerTroubletickets' )); } public function getName() { return 'ticket'; } } TicketController.php <?php namespace WbsGo\clientsBundle\Controller; use Symfony\Bundle\FrameworkBundle\Controller\Controller; use WbsGo\clientsBundle\Entity\VtigerTroubletickets; use WbsGo\clientsBundle\Entity\VtigerTicketcomments; use WbsGo\clientsBundle\Form\OpenTicketType; use Symfony\Component\HttpFoundation\Request; class TicketController extends Controller { public function indexAction() { $em = $this->getDoctrine()->getManager(); $tickets = $em ->getRepository('WbsGoclientsBundle:VtigerTroubletickets') ->findAllOpenByCustomerId($this->getUser()->getId()); $userdata = $this->getDoctrine()->getManager() ->getRepository('WbsGoclientsBundle:VtigerContactdetails') ->findContact($this->getUser()->getId()); return $this ->render('WbsGoclientsBundle:Ticket:index.html.twig', array('tickets' => $tickets, 'userdata' => $userdata)); } public function addAction() { $assets = $this->getDoctrine()->getManager() ->getRepository('WbsGoclientsBundle:VtigerAssets') ->findAssetByAccountId($this->getUser()->getId()); $assetlist = array(); foreach ($assets as $key => $v) { $assetlist[$key] = $key; } $form = $this->createForm(new OpenTicketType(), new VtigerTroubletickets()); return $this ->render('WbsGoclientsBundle:Ticket:add.html.twig', array('form' => $form->createView(), 'assets' => $assets,)); } } This is the error Symfony2 is returning Neither the property "comments" nor one of the methods "getComments()", "isComments()", "hasComments()", "_get()" or "_call()" exist and have public access in class "WbsGo\clientsBundle\Entity\VtigerTroubletickets". EDIT 2 This code is actually rendering my forms, but I need help in order to submit each XXXType form to its corresponding table. 
public function buildForm(FormBuilderInterface $builder, array $options) { $builder ->add('descripcion') ->add('prioridad') ->add('solucion') ->add('comment', new VtigerTicketcommentsType() ) ->add('contacto') ->add('suscripcion') ->add('producto', 'entity', array( 'class' => 'WbsGo\clientsBundle\Entity\VtigerAssets', 'property' => 'assetname', 'empty_value' => '--SELECT--', 'query_builder' => function(\WbsGo\clientsBundle\Entity\VtigerAssetsRepository $repository) { //return $repository->findAssetByAccountId($this->customerId); return $repository->createQueryBuilder('a') ->select('a') ->where('a.account = (SELECT cd.accountid FROM WbsGoclientsBundle:VtigerContactdetails cd WHERE cd.contactid = ?1)') ->setParameter(1, $this->customerId); } ) ) ->add('hardware') ->add('backup') ->add('web') ->add('restore') ->add('customerId') ; } I also removed ->add('ticketid') from VtigerTicketcommentsType.php because it has a relationship and is not needed; it's auto-incremented and must be generated once everything is submitted.
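One way to read that error, sketched here rather than asserted: the embedded field ('comments' in the first OpenTicketType, 'comment' in EDIT 2) is bound to a property that VtigerTroubletickets never exposes. The entity would need something like the following for the form component to bind at all (the collection is added purely for illustration and is not part of the schema shown; the commented setTicketid call is hypothetical):

<?php
use Doctrine\Common\Collections\ArrayCollection;

class VtigerTroubletickets
{
    // ... existing mapped fields ...

    /** Backing collection for the embedded comments form field. */
    private $comments;

    public function __construct()
    {
        $this->comments = new ArrayCollection();
    }

    public function getComments()
    {
        return $this->comments;
    }

    public function addComment(VtigerTicketcomments $comment)
    {
        // $comment->setTicketid($this->getTicketid()); // copy the id once it exists
        $this->comments[] = $comment;

        return $this;
    }
}

Since the tables are not related in Doctrine, the alternative is to build the comment from its own VtigerTicketcommentsType in the controller, persist the ticket first, copy the generated ticketid onto the comment and the ticketcf/crmentity rows, and flush them separately.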

    Read the article

  • JAVA-SQL- Data Migration - ResultSets comparing Failing JUnit test

    - by user1865053
    I CANNOT get this JUnit Test to pass for the life of me. Can somebody point out where this has gone wrong. I am doing a data migration(MSSQL SERVER 2005), but I have the sourceDBUrl and the targetDCUrl the same URL so to narrow it down to syntax errors. So that is what I have, a syntax error. I am comparing the results of a table for the query SELECT programmeapproval, resourceapproval FROM tr_timesheet WHERE timesheetid = ? and the test always fails, but passes for other junit tests I have developed. I created 3 diffemt resultSetsEqual methods and none work. Yet, some other JUnit tests I have developed have PASSED. THE QUERY: SELECT timesheetid, programmeapproval, resourceapproval FROM tr_timesheet Returns three columns timesheetid (PK,int, not null) (populated with a range of numbers 2240 - 2282) programmeapproval (smallint,not null) (populated with the number 1 in every field) resourceapproval (smallint, not null) (populated with a number 1 in every field) When I run the query that is embedded in the code it only returns one row with the programmeapproval and resourceapproval columns and both field populated with the number 1. I have all jdbc drivers correctly installed and tested for connectivity. The JUnit Test is failing at this point according to the IDE. assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); This is the code: /*THIS IS A JUNIT CLASS****? package a7.unittests.dao; import static org.junit.Assert.assertTrue; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Types; import org.junit.Test; import artemispm.tritonalerts.TimesheetAlert; public class UnitTestTimesheetAlert { @Test public void testQUERY_CHECKALERT() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); PreparedStatement stmt = con.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1, 2240); ResultSet sourceVal = stmt.executeQuery(); stmt = conTarget.prepareStatement("select programmeapproval, resourceapproval from tr_timesheet where timesheetid = ?"); stmt.setInt(1,2240); ResultSet targetVal = stmt.executeQuery(); assertTrue(helper.resultSetsEqual2(sourceVal,targetVal)); }} /*END**/ /*THIS IS A REGULAR CLASS**/ package a7.unittests.dao; import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.SQLException; public class UnitTestHelper { static String sourceDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; static String targetDBUrl = "jdbc:sqlserver://127.0.0.1:1433;databaseName=a7itm;user=a7user;password=a7user"; public Connection getConnection(String url)throws Exception{ return DriverManager.getConnection(url); } public boolean resultSetsEqual3 (ResultSet rs1, ResultSet rs2) throws SQLException { int col = 1; //ResultSetMetaData metadata = rs1.getMetaData(); //int count = metadata.getColumnCount(); while (rs1.next() && rs2.next()) { final Object res1 = rs1.getObject(col); final Object res2 = rs2.getObject(col); // Check values if (!res1.equals(res2)) { throw new RuntimeException(String.format("%s and %s aren't equal at common position %d", res1, res2, col)); } // rs1 and rs2 must reach last row in the same iteration if ((rs1.isLast() != rs2.isLast())) { throw new RuntimeException("The two ResultSets contains different number of 
columns!"); } } return true; } public boolean resultSetsEqual (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i) != target.getObject(i)) { return false; } } } return true; } public boolean resultSetsEqual2 (ResultSet source, ResultSet target) throws SQLException{ while(source.next()) { target.next(); ResultSetMetaData metadata = source.getMetaData(); int count = metadata.getColumnCount(); for (int i =1; i<=count; i++) { if(source.getObject(i).equals(target.getObject(i))) { return false; } } } return true; } } /END***/ /*PASTED NEW CLASS - THIS IS A JUNIT TEST CLASS*/ package a7.unittests.dao; import static org.junit.Assert.*; import java.sql.Connection; import java.sql.DriverManager; import org.junit.Test; public class TestDatabaseConnection { @Test public void testConnection() throws Exception{ UnitTestHelper helper = new UnitTestHelper(); Connection con = helper.getConnection(helper.sourceDBUrl); Connection conTarget = helper.getConnection(helper.targetDBUrl); assertTrue(con != null && conTarget != null); } } /**END***/

    Read the article

  • Inconsistent results in R with RNetCDF - why?

    - by sarcozona
    I am having trouble extracting data from NetCDF data files using RNetCDF. The data files each have 3 dimensions (longitude, latitude, and a date) and 3 variables (latitude, longitude, and a climate variable). There are four datasets, each with a different climate variable. Here is some of the output from print.nc(p8m.tmax) for clarity. The other datasets are identical except for the specific climate variable. dimensions: month = UNLIMITED ; // (1368 currently) lat = 3105 ; lon = 7025 ; variables: float lat(lat) ; lat:long_name = "latitude" ; lat:standard_name = "latitude" ; lat:units = "degrees_north" ; float lon(lon) ; lon:long_name = "longitude" ; lon:standard_name = "longitude" ; lon:units = "degrees_east" ; short tmax(lon, lat, month) ; tmax:missing_value = -9999 ; tmax:_FillValue = -9999 ; tmax:units = "degree_celsius" ; tmax:scale_factor = 0.01 ; tmax:valid_min = -5000 ; tmax:valid_max = 6000 ; I am getting behavior I don't understand when I use the var.get.nc function from the RNetCDF package. For example, when I attempt to extract 82 values beginning at stval from the maximum temperature data (p8m.tmax <- open.nc(tmaxdataset.nc)) with > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,82)) (where lon_val and lat_val specify the location in the dataset of the coordinates I'm interested in and stval is stval is set to which(time_vec==200201), which in this case equaled 1285.) I get Error: Invalid argument But after successfully extracting 80 and 81 values > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,80)) > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,81)) the command with 82 works: > var.get.nc(p8m.tmax,'tmax', start=c(lon_val, lat_val, stval),count=c(1,1,82)) [1] 444 866 1063 ... [output snipped] The same problem occurs in the identically structured tmin file, but at 36 instead of 82: > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval),count=c(1,1,36)) produces Error: Invalid argument But after repeating with counts of 30, 31, etc > var.get.nc(p8m.tmin,'tmin', start=c(lon_val, lat_val, stval), count=c(1,1,36)) works. These examples make it seem like the function is failing at the last count, but that actually isn't the case. In the first example, var.get.nc gave Error: Invalid argument after I asked for 84 values. I then narrowed the failure down to the 82nd count by varying the starting point in the dataset and asking for only 1 value at a time. The particular number the problem occurs at also varies. I can close and reopen the dataset and have the problem occur at a different location. In the particular examples above, lon_val and lat_val are 1595 and 1751, respectively, identifying the location in the dataset along the lat and lon dimensions for the latitude and longitude I'm interested in. The 1595th latitude and 1751th longitude are not the problem, however. The problem occurs with all other latitude and longitudes I've tried. Varying the starting location in the dataset along the climate variable dimension (stval) and/or specifying it different (as a number in the command instead of the object stval) also does not fix the problem. This problem doesn't always occur. I can run identical code three times in a row (clearing all objects in between runs) and get a different outcome each time. The first run may choke on the 7th entry I'm trying to get, the second might work fine, and the third run might choke on the 83rd entry. I'm absolutely baffled by such inconsistent behavior. 
The open.nc function has also started to fail with the same Error: Invalid argument. Like the var.get.nc problems, it also occurs inconsistently. Does anyone know what causes the initial failure to extract the variable, and how I might prevent it? Could it have to do with the size of the data files (~60GB each) and/or the fact that I'm accessing them through networked drives? This was also asked here: https://stat.ethz.ch/pipermail/r-help/2011-June/281233.html > sessionInfo() R version 2.13.0 (2011-04-13) Platform: i386-pc-mingw32/i386 (32-bit) locale: [1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 [3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C [5] LC_TIME=English_United States.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] reshape_0.8.4 plyr_1.5.2 RNetCDF_1.5.2-2 loaded via a namespace (and not attached): [1] tools_2.13.0
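Not an explanation of the underlying Invalid argument, but a hedged workaround sketch: retrying the read a few times at least separates a transient I/O hiccup (plausible over a networked drive) from a systematic failure. get_with_retry is a made-up helper, not part of RNetCDF.

get_with_retry <- function(nc, varname, start, count, tries = 5) {
  for (i in seq_len(tries)) {
    res <- try(var.get.nc(nc, varname, start = start, count = count), silent = TRUE)
    if (!inherits(res, "try-error")) return(res)
    Sys.sleep(1)  # brief pause before retrying the read
  }
  stop("var.get.nc failed after ", tries, " attempts")
}

vals <- get_with_retry(p8m.tmax, "tmax", c(lon_val, lat_val, stval), c(1, 1, 82))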

    Read the article

  • Objects in Java ArrayList don't get updated.

    - by Sbm007
    This is going to be a very long post, hopefully you can understand what I'm talking about and I appreciate any help. Thanks Basically, I've created a personal, non-commercial project (which I don't plan to release) that can read ZIP and RAR files. It can only read the contents in the archive, the folders inside, the files inside the folders and its properties (such as last modified date, last modified time, CRC checksum, uncompressed size, compressed size and file name). It can't extract files either, so it's really a ZIP/RAR viewer if you may. Anyway that's slightly irrelevant to my problem but I thought I'd give you some background info. Now for my problem: I can successfully list all the folders and files inside a ZIP archive, so now I want to take that raw input and link it together in some useful way. I made 2 classes: ArchiveFile (represents a file inside a ZIP) and ArchiveFolder (represents a folder inside a ZIP). They both have some useful methods such as getLastModifiedDate, getName, getPath and so on. But the difference is that ArchiveFolder can hold an ArrayList of ArchiveFile's and additional ArchiveFolder's (think of this as files and folders inside a folder). Now I want to populate my raw input into one root ArchiveFolder, which will have all the files in the root dir of the ZIP in the ArchiveFile's ArrayList and any additional folders in the root dir of the ZIP in the ArchiveFolder's ArrayList (and this process can continue on like this like a chain reaction (more files/folders in that ArchiveFolder etc etc). So I came up with the following code: while (archive.hasMore()) { String path = ""; ArchiveFolder current = root; String[] contents = archive.getName().split("/"); for (int x = 0; x < contents.length; ++x) { if (x == (contents.length - 1) && !archive.getName().endsWith("/")) { // If on last item and item is a file path += contents[x]; // Update final path ArchiveFile file = new ArchiveFile(path, contents[x], archive.getUncompressedSize(), archive.getCompressedSize(), archive.getModifiedTime(), archive.getModifiedDate(), archive.getCRC()); current.addFile(file); // Create and add the file to the current ArchiveFolder } else if (x == (contents.length - 1)) { // Else if we are on last item and it is a folder path += contents[x] + "/"; // Update final path ArchiveFolder folder = new ArchiveFolder(path, contents[x], archive.getModifiedTime(), archive.getModifiedDate()); current.addFolder(folder); // Create and add this folder to the current ArchiveFile } else { // Else if we are still traversing through the path path += contents[x] + "/"; // Update path ArchiveFolder folder = new ArchiveFolder(path, contents[x]); current.addFolder(folder); // Create and add folder (remember we do not know the modified date/time as all we know is the path, so we can deduce the name only) current = folder; // Update current ArchiveFolder to the newly created one for the next iteration of the for loop } } archive.getNext(); } Assume that root is the root ArchiveFolder (initially empty). And that archive.getName() returns the name of the current file OR folder in the following fashion: file.txt or folder1/file2.txt or folder4/folder2/ (this is a empty folder) etc. So basically the relative path from the root of the ZIP archive. Please read through the comments in the above code to familiarize yourself with it. 
Also assume that the addFolder method in an ArchiveFile, only adds the folder if it doesn't exist already (so there are no multiple folders) and it also updates the time and date of an existing folder if it is blank (ie it was a intermediate folder we only knew the name of, but now we know its details). The code for addFolder is (pretty self-explanitory): public void addFolder(ArchiveFolder folder) { int loc = folders.indexOf(folder); // folders is the ArrayList containing ArchiveFolder's if (loc == -1) { folders.add(folder); } else { ArchiveFolder real = folders.get(loc); if (real.time == null) { real.setTime(folder.getTime()); real.setDate(folder.getDate()); } } } So I can't see anything wrong with the code, it works and after finishing, the root ArchiveFolder contains all the files in the root of the ZIP as I want it to, and it contains all the direcories in the root folder as I want it to. So you'd think it works as expected, but no the ArchiveFolder's in the root folder don't contain the data inside those 'child' folders, it's just a blank folder with no additional files and folders (while it does really contain some more files/folders when viewed in WinZip). After debugging using Eclipse, the for loop does iterate through all the files (even those not included above), so this led me to believe that there is a problem with this line of the code: current = folder; What it does is, it updates the current folder (used as an intermediate by the loop) to the newly added folder. I thought Java passed by reference and thus all new operations and new additions in future ArchiveFile's and ArchiveFolder's are automatically updated, and parent ArchiveFolder's will be updated accordingly. But that does not appear to be the case? I know this is a long ass post and I really hope anyone can help me out with this. Thanks in advance.
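One reading of the symptom, offered as a sketch rather than a diagnosis: Java passes object references by value, so current = folder only repoints the local variable, and that part is fine. The trouble is the branch where the folder already exists: addFolder keeps the ArchiveFolder that is already in the list and silently drops the argument, yet the loop goes on descending into the dropped duplicate, so everything added afterwards hangs off an object that never made it into the tree. Having addFolder return the instance that actually lives in the tree lets the loop follow the right object (this relies on ArchiveFolder.equals() matching by path, which the existing indexOf-based check already assumes):

// Inside ArchiveFolder
public ArchiveFolder addFolder(ArchiveFolder folder) {
    int loc = folders.indexOf(folder);
    if (loc == -1) {
        folders.add(folder);
        return folder;                    // brand new folder, now part of the tree
    }
    ArchiveFolder existing = folders.get(loc);
    if (existing.time == null) {          // fill in details we only just learned
        existing.setTime(folder.getTime());
        existing.setDate(folder.getDate());
    }
    return existing;                      // reuse the folder already in the tree
}

// ...and in the traversal loop, the else branch becomes:
// current = current.addFolder(folder);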

    Read the article

  • Backbone.js Model validation fails to prevent Model from saving

    - by Benjen
    I have defined a validate method for a Backbone.js Model. The problem is that even if validation fails (i.e. the Model.validate method returns a value) the post/put request is still sent to the server. This contradicts what is explained in the Backbone.js documentation. I cannot understand what I am doing wrong. The following is the Model definition: /** * Model - Contact */ var Contact = Backbone.Model.extend({ urlRoot: '/contacts.json', idAttribute: '_id', defaults: function() { return { surname: '', given_name: '', org: '', phone: new Array(), email: new Array(), address: new Array({ street: '', district: '', city: '', country: '', postcode: '' }) }; } validate: function(attributes) { if (typeof attributes.validationDisabled === 'undefined') { var errors = new Array(); // Validate surname. if (_.isEmpty(attributes.surname) === true) { errors.push({ type: 'form', attribute: 'surname', message: 'Please enter a surname.' }); } // Validate emails. if (_.isEmpty(attributes.email) === false) { var emailRegex = /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,6}$/i; // Stores indexes of email values which fail validation. var emailIndex = new Array(); _.each(attributes.email, function(email, index) { if (emailRegex.test(email.value) === false) { emailIndex.push(index); } }); // Create error message. if (emailIndex.length > 0) { errors.push({ type: 'form', attribute: 'email', index: emailIndex, message: 'Please enter valid email address.' }); } } if (errors.length > 0) { console.log('Form validation failed.'); return errors; } } } }); Here is the View which calls the Model.save() method (see: method saveContact() below). Note that other methods belonging to this View have not been included below for reasons of brevity. /** * View - Edit contact form */ var EditContactFormView = Backbone.View.extend({ initialize: function() { _.bindAll(this, 'createDialog', 'formError', 'render', 'saveContact', 'updateContact'); // Add templates. this._editFormTemplate = _.template($('#edit-contact-form-tpl').html()); this._emailFieldTemplate = _.template($('#email-field-tpl').html()); this._phoneFieldTemplate = _.template($('#phone-field-tpl').html()); // Get URI of current page. this.currentPageUri = this.options.currentPageUri; // Create array to hold references to all subviews. this.subViews = new Array(); // Set options for new or existing contact. this.model = this.options.model; // Bind with Model validation error event. this.model.on('error', this.formError); this.render(); } /** * Deals with form validation errors */ formError: function(model, error) { console.log(error); }, saveContact: function(event) { var self = this; // Prevent submit event trigger from firing. event.preventDefault(); // Trigger form submit event. eventAggregator.trigger('submit:contactEditForm'); // Update model with form values. this.updateContact(); // Enable validation for Model. Done by unsetting validationDisabled // attribute. This setting was formerly applied to prevent validation // on Model.fetch() events. See this.model.validate(). this.model.unset('validationDisabled'); // Save contact to database. this.model.save(this.model.attributes, { success: function(model, response) { if (typeof response.flash !== 'undefined') { Messenger.trigger('new:messages', response.flash); } }, error: function(model, response) { console.log(response); throw error = new Error('Error occured while trying to save contact.'); } }, { wait: true }); }, /** * Extract form values and update Contact. 
*/ updateContact: function() { this.model.set('surname', this.$('#surname-field').val()); this.model.set('given_name', this.$('#given-name-field').val()); this.model.set('org', this.$('#org-field').val()); // Extract address form values. var address = new Array({ street: this.$('input[name="street"]').val(), district: this.$('input[name="district"]').val(), city: this.$('input[name="city"]').val(), country: this.$('input[name="country"]').val(), postcode: this.$('input[name="postcode"]').val() }); this.model.set('address', address); } });
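One concrete issue worth ruling out, shown as a sketch rather than a definitive fix: Backbone's model.save() only accepts (attributes, options), so the { wait: true } passed as a third argument in saveContact is silently ignored, and when validate() does return errors, save() returns false, which the current code never checks. Folding wait into the single options object and testing the return value makes a failed validation visible at the call site:

var options = {
  wait: true,  // don't update the model until the server responds
  success: function (model, response) {
    if (typeof response.flash !== 'undefined') {
      Messenger.trigger('new:messages', response.flash);
    }
  },
  error: function (model, response) {
    console.log(response);
  }
};

if (this.model.save(this.model.attributes, options) === false) {
  // validate() returned errors and no request was sent; depending on the Backbone
  // version the errors arrive via options.error or the model's 'error' event (formError)
  console.log('Validation failed - request not sent.');
}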

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
    The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform including now directly from Microsoft. jQuery is a light weight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com. For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading to do JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level for me by seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on. Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I’ve been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET’s core features rather than relying on the ASP.NET AJAX library. Microsoft and jQuery – making Friends jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS) as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation and we can look forward toward more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET which provides some piece of mind to some corporate development shops that require end to end support from Microsoft. In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery’s functionality via plug-ins. Microsoft’s last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance. 
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn’t require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed! What the Microsoft Involvement with jQuery means to you For Microsoft jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft’s official backing and in fact a large chunk of developers did so readily prior to Microsoft’s announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many projects types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery Microsoft is now following along what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft’s shift in direction in the first place. ASP.NET AJAX and the Microsoft AJAX Library weren’t bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers they lacked in useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community opening the door for other client side solutions. Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos for Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries they will continue to be supported so if you’re using them in your applications there’s no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side so that you can leave existing solutions untouched even as you enhance them with jQuery. 
The Microsoft jQuery Plug-ins – Solid Core Features One of the most interesting developments in Microsoft’s embracing of jQuery is that Microsoft has started contributing to jQuery via standard mechanism set for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library and re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team and that’s exactly what happened with the three plug-ins submitted by Microsoft with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5). The following plug-ins are provided by Microsoft: jQuery Templates – a client side template rendering engine jQuery Data Link – a client side databinder that can synchronize changes without code jQuery Globalization – provides formatting and conversion features for dates and numbers The first two are ports of functionality that was slated for the Microsoft Ajax Library while functionality for the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I’ve previously used in other incarnations, but with more complete implementations. Let’s take a close look at these plug-ins. jQuery Templates http://api.jquery.com/category/plugins/templates/ Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid from manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML which you can then inject into the document or replace existing content with. Output from templates are rendered as a jQuery matched set and can then be easily inserted into the document as needed. Templating is key to minimize client side code and reduce repeated code for rendering logic. Instead a single template can be used in many places for updating and adding content to existing pages. Further if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to a use a single markup template to handle all rendering of each specific HTML section/element. I’ve used a number of different client rendering template engines with jQuery in the past including jTemplates (a PHP style templating engine) and a modified version of John Resig’s MicroTemplating engine which I built into my own set of libraries because it’s such a commonly used feature in my client side applications. jQuery templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig’s original Micro Template engine, the core basics of the templating engine create JavaScript code which means that templates can include JavaScript code. To give you a basic idea of how templates work imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document. 
To do this you can create an ‘item’ template that describes how each of the quotes is renderd as a template inside of the document: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div><div>${LastPrice}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div> </div> </script> The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call on the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide for rudimentary logic inside of your templates as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find more about the full set of markup expressions available in the documentation. To load up this data you can use code like the following: <script type="text/javascript"> //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/"); $(document).ready(function () { $("#btnGetQuotes").click(GetQuotes); }); function GetQuotes() { var symbols = $("#txtSymbols").val().split(","); $.ajax({ url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes", data: JSON.stringify({ symbols: symbols }), // parameter map type: "POST", // data has to be POSTed contentType: "application/json", timeout: 10000, dataType: "json", success: function (result) { var quotes = result.d; var jEl = $("#stockTemplate").tmpl(quotes); $("#quoteDisplay").empty().append(jEl); }, error: function (xhr, status) { alert(status + "\r\n" + xhr.responseText); } }); }; </script> In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects. The result is returned as an object with the .d property (in Microsoft service style) that returns the actual array of quotes. The template is applied with: var jEl = $("#stockTemplate").tmpl(quotes); which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the result is an array the template is automatically applied to each each array item. If you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the tempalte that is currently executing. This makes it possible to keep context within the context of the template itself and also to pass context from a parent template to a child template which is very powerful. Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls. 
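As a quick hedged illustration of the string-based form just mentioned (the markup and the #quoteList target are invented for the example, not taken from the article):

// Compile and render a template supplied as a plain string, e.g. one fetched via AJAX
var markup = "<li><b>${Company}</b> (${Symbol}): ${LastPrice}</li>";
$.tmpl(markup, quotes).appendTo("#quoteList");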
In short there are options The above shows off some of the basics, but there’s much for functionality available in the template engine. Check the documentation link for more information and links to additional examples. The plug-in download also comes with a number of examples that demonstrate functionality. jQuery templates will become a native component in jQuery Core 1.5, so it’s definitely worthwhile checking out the engine today and get familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core because it’s such an integral part of many applications, there are also a couple shortcomings in the current incarnation: Lack of Error Handling Currently if you embed an expression that is invalid it’s simply not rendered. There’s no error rendered into the template nor do the various  template functions throw errors which leaves finding of bugs as a runtime exercise. I would like some mechanism – optional if possible – to be able to get error info of what is failing in a template when it’s rendered. No String Output Templates are always rendered into a jQuery matched set and there’s no way that I can see to directly render to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. Limited JavaScript Access Unlike John Resig’s original MicroTemplating Engine which was entirely based on JavaScript code generation these templates are limited to a few structured commands that can ‘execute’. There’s no code execution inside of script code which means you’re limited to calling expressions available in global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic. Error handling has been discussed quite a bit and it’s likely there will be some solution to that particualar issue by the time jQuery templates ship. The others are relatively minor issues but something to think about anyway. jQuery Data Link http://api.jquery.com/category/plugins/data-link/ jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object and have the object updated when the text in the textbox is changed and have the textbox change when the value in the object or the entire object changes. The plug-in also supports converter functions that can be applied to provide the conversion logic from string to some other value typically necessary for mapping things like textbox string input to say a number property and potentially applying additional formatting and calculations. In theory this sounds great, however in reality this plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data: person = { firstName: "rick", lastName: "strahl"}; $(document).ready( function() { // provide for two-way linking of inputs $("form").link(person); // bind to non-input elements explicitly $("#objFirst").link(person, { firstName: { name: "objFirst", convertBack: function (value, source, target) { $(target).text(value); } } }); $("#objLast").link(person, { lastName: { name: "objLast", convertBack: function (value, source, target) { $(target).text(value); } } }); }); This code hooks up two-way linking between a couple of textboxes on the page and the person object. 
The first line in the .ready() handler provides mapping of object to form field with the same field names as properties on the object. Note that .link() does NOT bind items into the textboxes when you call .link() – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements which is effectively a one-way bind. You specify the object and a then an explicit mapping where name is an ID in the document. The converter is required to explicitly assign the value to the element. Strike two. You can also detect changes to the underlying object and cause updates to the input elements bound. Unfortunately the syntax to do this is not very natural as you have to rely on the jQuery data object. To update an object’s properties and get change notification looks like this: function updateFirstName() { $(person).data("firstName", person.firstName + " (code updated)"); } This works fine in causing any linked fields to be updated. In the bindings above both the firstName input field and objFirst DOM element gets updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure that the changes are tracked properly. Really? Sure you’re binding through multiple layers of abstraction now but how is that better than just manually assigning values? The code savings (if any) are going to be minimal. As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind data with a different object altogether since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding. Two way binding can be very useful but it has to be easy and most importantly natural. If it’s more work to hook up a binding than writing a couple of lines to do binding/unbinding this sort of thing helps very little in most scenarios. In talking to some of the developers the feature set for Data Link is not complete and they are still soliciting input for features and functionality. If you have ideas on how you want this feature to be more useful get involved and post your recommendations. As it stands, it looks to me like this component needs a lot of love to become useful. For this component to really provide value, bindings need to be able to be refreshed easily and work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – it seems a standalone object – albeit one that relies on the jQuery library – would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft created components this is by far the least useful and least polished implementation at this point. 
jQuery Globalization http://github.com/jquery/jquery-global Globalization in JavaScript applications often gets short shrift and part of the reason for this is that natively in JavaScript there’s little support for formatting and parsing of numbers and dates. There are a number of JavaScript libraries out there that provide some support for globalization, but most are limited to a particular portion of globalization. As .NET developers we’re fairly spoiled by the richness of APIs provided in the framework and when dealing with client development one really notices the lack of these features. While you may not necessarily need to localize your application the globalization plug-in also helps with some basic tasks for non-localized applications: Dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript as there are no formatters whatsoever except the .toString() method which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. You can write code like the following for example to format numbers and dates: var date = new Date(); var output = $.format(date, "MMM. dd, yy") + "\r\n" + $.format(date, "d") + "\r\n" + // 10/25/2010 $.format(1222.32213, "N2") + "\r\n" + $.format(1222.33, "c") + "\r\n"; alert(output); This becomes even more useful if you combine it with templates which can also include any JavaScript expressions. Assuming the globalization plug-in is loaded you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote again with a couple of formats applied: <script id="stockTemplate" type="text/x-jquery-tmpl"> <div id="divStockQuote" class="errordisplay" style="width: 500px;"> <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div> <div class="label">Last Price:</div> <div>${$.format(LastPrice,"N2")}</div> <div class="label">Net Change:</div><div> {{if NetChange > 0}} <b style="color:green" >${NetChange}</b> {{else}} <b style="color:red" >${NetChange}</b> {{/if}} </div> <div class="label">Last Update:</div> <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div> </div> </script> There are also parsing methods that can parse dates and numbers from strings into numbers easily: alert($.parseDate("25.10.2010")); alert($.parseInt("12.222")); // de-DE uses . for thousands separators As you can see culture specific options are taken into account when parsing. The globalization plugin provides rich support for a variety of locales: Get a list of all available cultures Query cultures for culture items (like currency symbol, separators etc.) Localized string names for all calendar related items (days of week, months) Generated off of .NET’s supported locales In short you get much of the same functionality that you already might be using in .NET on the server side. The plugin includes a huge number of locales and an Globalization.all.min.js file that contains the text defaults for each of these locales as well as small locale specific script files that define each of the locale specific settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references to only those languages you explicitly care about. Overall this plug-in is a welcome helper. 
Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting which is a vital feature of many applications. Changes for Microsoft It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see that Microsoft is paying attention to what customers prefer to use and supporting the customer sentiment – even if it meant drastically changing course of policy and moving into a more open and sharing environment in the process. The additional jQuery support that has been introduced in the last two years certainly has made lives easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery process of plug-ins and getting accepted for various very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  

    Read the article

  • FreeBSD performance tuning. Sysctls, loader.conf, kernel

    - by SaveTheRbtz
    I wanted to share knowledge of tuning FreeBSD via sysctl.conf/loader.conf/KENCONF. It was initially based on Igor Sysoev's (author of nginx) presentation about FreeBSD tuning up to 100,000-200,000 active connections. Tunings are for FreeBSD-CURRENT. Since 7.2 amd64 some of them are tuned well by default. Prior 7.0 some of them are boot only (set via /boot/loader.conf) or does not exist at all. sysctl.conf: # No zero mapping feature # May break wine # (There are also reports about broken samba3) #security.bsd.map_at_zero=0 # If you have really busy webserver with apache13 you may run out of processes #kern.maxproc=10000 # Same for servers with apache2 / Pound #kern.threads.max_threads_per_proc=4096 # Max. backlog size kern.ipc.somaxconn=4096 # Shared memory // 7.2+ can use shared memory > 2Gb kern.ipc.shmmax=2147483648 # Sockets kern.ipc.maxsockets=204800 # Can cause this on older kernels: # http://old.nabble.com/Significant-performance-regression-for-increased-maxsockbuf-on-8.0-RELEASE-tt26745981.html#a26745981 ) kern.ipc.maxsockbuf=10485760 # Mbuf 2k clusters (on amd64 7.2+ 25600 is default) # For such high value vm.kmem_size must be increased to 3G kern.ipc.nmbclusters=262144 # Jumbo pagesize(_SC_PAGESIZE) clusters # Used as general packet storage for jumbo frames # can be monitored via `netstat -m` #kern.ipc.nmbjumbop=262144 # Jumbo 9k/16k clusters # If you are using them #kern.ipc.nmbjumbo9=65536 #kern.ipc.nmbjumbo16=32768 # For lower latency you can decrease scheduler's maximum time slice # default: stathz/10 (~ 13) #kern.sched.slice=1 # Increase max command-line length showed in `ps` (e.g for Tomcat/Java) # Default is PAGE_SIZE / 16 or 256 on x86 # This avoids commands to be presented as [executable] in `ps` # For more info see: http://www.freebsd.org/cgi/query-pr.cgi?pr=120749 kern.ps_arg_cache_limit=4096 # Every socket is a file, so increase them kern.maxfiles=204800 kern.maxfilesperproc=200000 kern.maxvnodes=200000 # On some systems HPET is almost 2 times faster than default ACPI-fast # Useful on systems with lots of clock_gettime / gettimeofday calls # See http://old.nabble.com/ACPI-fast-default-timecounter,-but-HPET-83--faster-td23248172.html # After revision 222222 HPET became default: http://svnweb.freebsd.org/base?view=revision&revision=222222 kern.timecounter.hardware=HPET # Small receive space, only usable on http-server, on file server this # should be increased to 65535 or even more #net.inet.tcp.recvspace=8192 # This is useful on Fat-Long-Pipes #net.inet.tcp.recvbuf_max=10485760 #net.inet.tcp.recvbuf_inc=65535 # Small send space is useful for http servers that serve small files # Autotuned since 7.x net.inet.tcp.sendspace=16384 # This is useful on Fat-Long-Pipes #net.inet.tcp.sendbuf_max=10485760 #net.inet.tcp.sendbuf_inc=65535 # Turn off receive autotuning # You can play with it. #net.inet.tcp.recvbuf_auto=0 #net.inet.tcp.sendbuf_auto=0 # This should be enabled if you going to use big spaces (>64k) # Also timestamp field is useful when using syncookies net.inet.tcp.rfc1323=1 # Turn this off on high-speed, lossless connections (LAN 1Gbit+) # If you set it there is no need in TCP_NODELAY sockopt (see man tcp) net.inet.tcp.delayed_ack=0 # This feature is useful if you are serving data over modems, Gigabit Ethernet, # or even high speed WAN links (or any other link with a high bandwidth delay product), # especially if you are also using window scaling or have configured a large send window. 
# Automatically disables on small RTT ( http://www.freebsd.org/cgi/cvsweb.cgi/src/sys/netinet/tcp_subr.c?#rev1.237 ) # This sysctl was removed in 10-CURRENT: # See: http://www.mail-archive.com/[email protected]/msg06178.html #net.inet.tcp.inflight.enable=0 # TCP slowstart algorithm tunings # We assuming we have very fast clients #net.inet.tcp.slowstart_flightsize=100 #net.inet.tcp.local_slowstart_flightsize=100 # Disable randomizing of ports to avoid false RST # Before usage check SA here www.bsdcan.org/2006/papers/ImprovingTCPIP.pdf # (it's also says that port randomization auto-disables at some conn.rates, but I didn't checked it thou) #net.inet.ip.portrange.randomized=0 # Increase portrange # For outgoing connections only. Good for seed-boxes and ftp servers. net.inet.ip.portrange.first=1024 net.inet.ip.portrange.last=65535 # # stops route cache degregation during a high-bandwidth flood # http://www.freebsd.org/doc/en/books/handbook/securing-freebsd.html #net.inet.ip.rtexpire=2 net.inet.ip.rtminexpire=2 net.inet.ip.rtmaxcache=1024 # Security net.inet.ip.redirect=0 net.inet.ip.sourceroute=0 net.inet.ip.accept_sourceroute=0 net.inet.icmp.maskrepl=0 net.inet.icmp.log_redirect=0 net.inet.icmp.drop_redirect=1 net.inet.tcp.drop_synfin=1 # # There is also good example of sysctl.conf with comments: # http://www.thern.org/projects/sysctl.conf # # icmp may NOT rst, helpful for those pesky spoofed # icmp/udp floods that end up taking up your outgoing # bandwidth/ifqueue due to all that outgoing RST traffic. # #net.inet.tcp.icmp_may_rst=0 # Security net.inet.udp.blackhole=1 net.inet.tcp.blackhole=2 # IPv6 Security # For more info see http://www.fosslc.org/drupal/content/security-implications-ipv6 # Disable Node info replies # To see this vulnerability in action run `ping6 -a sglAac ::1` or `ping6 -w ::1` on unprotected node net.inet6.icmp6.nodeinfo=0 # Turn on IPv6 privacy extensions # For more info see proposal http://unix.derkeiler.com/Mailing-Lists/FreeBSD/net/2008-06/msg00103.html net.inet6.ip6.use_tempaddr=1 net.inet6.ip6.prefer_tempaddr=1 # Disable ICMP redirect net.inet6.icmp6.rediraccept=0 # Disable acceptation of RA and auto linklocal generation if you don't use them #net.inet6.ip6.accept_rtadv=0 #net.inet6.ip6.auto_linklocal=0 # Increases default TTL, sometimes useful # Default is 64 net.inet.ip.ttl=128 # Lessen max segment life to conserve resources # ACK waiting time in miliseconds # (default: 30000. RFC from 1979 recommends 120000) net.inet.tcp.msl=5000 # Max bumber of timewait sockets net.inet.tcp.maxtcptw=200000 # Don't use tw on local connections # As of 15 Apr 2009. Igor Sysoev says that nolocaltimewait has some buggy realization. # So disable it or now till get fixed #net.inet.tcp.nolocaltimewait=1 # FIN_WAIT_2 state fast recycle net.inet.tcp.fast_finwait2_recycle=1 # Time before tcp keepalive probe is sent # default is 2 hours (7200000) #net.inet.tcp.keepidle=60000 # Should be increased until net.inet.ip.intr_queue_drops is zero net.inet.ip.intr_queue_maxlen=4096 # Interrupt handling via multiple CPU, but with context switch. # You can play with it. 
Default is 1; #net.isr.direct=0 # This is for routers only #net.inet.ip.forwarding=1 #net.inet.ip.fastforwarding=1 # This speed ups dummynet when channel isn't saturated net.inet.ip.dummynet.io_fast=1 # Increase dummynet(4) hash #net.inet.ip.dummynet.hash_size=2048 #net.inet.ip.dummynet.max_chain_len # Should be increased when you have A LOT of files on server # (Increase until vfs.ufs.dirhash_mem becomes lower) vfs.ufs.dirhash_maxmem=67108864 # Note from commit http://svn.freebsd.org/base/head@211031 : # For systems with RAID volumes and/or virtualization envirnments, where # read performance is very important, increasing this sysctl tunable to 32 # or even more will demonstratively yield additional performance benefits. vfs.read_max=32 # Explicit Congestion Notification (see http://en.wikipedia.org/wiki/Explicit_Congestion_Notification) net.inet.tcp.ecn.enable=1 # Flowtable - flow caching mechanism # Useful for routers #net.inet.flowtable.enable=1 #net.inet.flowtable.nmbflows=65535 # Extreme polling tuning #kern.polling.burst_max=1000 #kern.polling.each_burst=1000 #kern.polling.reg_frac=100 #kern.polling.user_frac=1 #kern.polling.idle_poll=0 # IPFW dynamic rules and timeouts tuning # Increase dyn_buckets till net.inet.ip.fw.curr_dyn_buckets is lower net.inet.ip.fw.dyn_buckets=65536 net.inet.ip.fw.dyn_max=65536 net.inet.ip.fw.dyn_ack_lifetime=120 net.inet.ip.fw.dyn_syn_lifetime=10 net.inet.ip.fw.dyn_fin_lifetime=2 net.inet.ip.fw.dyn_short_lifetime=10 # Make packets pass firewall only once when using dummynet # i.e. packets going thru pipe are passing out from firewall with accept #net.inet.ip.fw.one_pass=1 # shm_use_phys Wires all shared pages, making them unswappable # Use this to lessen Virtual Memory Manager's work when using Shared Mem. # Useful for databases #kern.ipc.shm_use_phys=1 # ZFS # Enable prefetch. Useful for sequential load type i.e fileserver. # FreeBSD sets vfs.zfs.prefetch_disable to 1 on any i386 systems and # on any amd64 systems with less than 4GB of avaiable memory # For additional info check this nabble thread http://old.nabble.com/Samba-read-speed-performance-tuning-td27964534.html #vfs.zfs.prefetch_disable=0 # On highload servers you may notice following message in dmesg: # "Approaching the limit on PV entries, consider increasing either the # vm.pmap.shpgperproc or the vm.pmap.pv_entry_max tunable" vm.pmap.shpgperproc=2048 loader.conf: # Accept filters for data, http and DNS requests # Useful when your software uses select() instead of kevent/kqueue or when you under DDoS # DNS accf available on 8.0+ accf_data_load="YES" accf_http_load="YES" accf_dns_load="YES" # Async IO system calls aio_load="YES" # Linux specific devices in /dev # As for 8.1 it only /dev/full #lindev_load="YES" # Adds NCQ support in FreeBSD # WARNING! all ad[0-9]+ devices will be renamed to ada[0-9]+ # 8.0+ only #ahci_load="YES" #siis_load="YES" # FreeBSD 8.2+ # New Congestion Control for FreeBSD # http://caia.swin.edu.au/urp/newtcp/tools/cc_chd-readme-0.1.txt # http://www.ietf.org/proceedings/78/slides/iccrg-5.pdf # Initial merge commit message http://www.mail-archive.com/[email protected]/msg31410.html #cc_chd_load="YES" # Increase kernel memory size to 3G. # # Use ONLY if you have KVA_PAGES in kernel configuration, and you have more than 3G RAM # Otherwise panic will happen on next reboot! 
# # It's required for high buffer sizes: kern.ipc.nmbjumbop, kern.ipc.nmbclusters, etc # Useful on highload stateful firewalls, proxies or ZFS fileservers # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #vm.kmem_size="3G" # If your server has lots of swap (>4Gb) you should increase following value # according to http://lists.freebsd.org/pipermail/freebsd-hackers/2009-October/029616.html # Otherwise you'll be getting errors # "kernel: swap zone exhausted, increase kern.maxswzone" # kern.maxswzone="256M" # Older versions of FreeBSD can't tune maxfiles on the fly #kern.maxfiles="200000" # Useful for databases # Sets maximum data size to 1G # (FreeBSD 7.2+ amd64 users: Check that current value is lower!) #kern.maxdsiz="1G" # Maximum buffer size(vfs.maxbufspace) # You can check current one via vfs.bufspace # Should be lowered/upped depending on server's load-type # Usually decreased to preserve kmem # (default is 10% of mem) #kern.maxbcache="512M" # Sendfile buffers # For i386 only #kern.ipc.nsfbufs=10240 # FreeBSD 9+ # HPET "legacy route" support. It should allow HPET to work per-CPU # See http://www.mail-archive.com/[email protected]/msg03603.html #hint.atrtc.0.clock=0 #hint.attimer.0.clock=0 #hint.hpet.0.legacy_route=1 # syncache Hash table tuning net.inet.tcp.syncache.hashsize=1024 net.inet.tcp.syncache.bucketlimit=512 net.inet.tcp.syncache.cachelimit=65536 # Increased hostcache # Later host cache can be viewed via net.inet.tcp.hostcache.list hidden sysctl # Very useful for it's RTT RTTVAR # Must be power of two net.inet.tcp.hostcache.hashsize=65536 # hashsize * bucketlimit (which is 30 by default) # It allocates 255Mb (1966080*136) of RAM net.inet.tcp.hostcache.cachelimit=1966080 # TCP control-block Hash table tuning net.inet.tcp.tcbhashsize=4096 # Disable ipfw deny all # Should be uncommented when there is a chance that # kernel and ipfw binary may be out-of sync on next reboot #net.inet.ip.fw.default_to_accept=1 # # SIFTR (Statistical Information For TCP Research) is a kernel module that # logs a range of statistics on active TCP connections to a log file. # See prerelease notes http://groups.google.com/group/mailing.freebsd.current/browse_thread/thread/b4c18be6cdce76e4 # and man 4 sitfr #siftr_load="YES" # Enable superpages, for 7.2+ only # Also read http://lists.freebsd.org/pipermail/freebsd-hackers/2009-November/030094.html vm.pmap.pg_ps_enabled=1 # Usefull if you are using Intel-Gigabit NIC #hw.em.rxd=4096 #hw.em.txd=4096 #hw.em.rx_process_limit="-1" # Also if you have ALOT interrupts on NIC - play with following parameters # NOTE: You should set them for every NIC #dev.em.0.rx_int_delay: 250 #dev.em.0.tx_int_delay: 250 #dev.em.0.rx_abs_int_delay: 250 #dev.em.0.tx_abs_int_delay: 250 # There is also multithreaded version of em/igb drivers can be found here: # http://people.yandex-team.ru/~wawa/ # # for additional em monitoring and statistics use # sysctl dev.em.0.stats=1 ; dmesg # sysctl dev.em.0.debug=1 ; dmesg # Also after r209242 (-CURRENT) there is a separate sysctl for each stat variable; # Same tunings for igb #hw.igb.rxd=4096 #hw.igb.txd=4096 #hw.igb.rx_process_limit=100 # Some useful netisr tunables. 
See sysctl net.isr #net.isr.maxthreads=4 #net.isr.defaultqlimit=4096 #net.isr.maxqlimit: 10240 # Bind netisr threads to CPUs #net.isr.bindthreads=1 # # FreeBSD 9.x+ # Increase interface send queue length # See commit message http://svn.freebsd.org/viewvc/base?view=revision&revision=207554 #net.link.ifqmaxlen=1024 # Nicer boot logo =) loader_logo="beastie" And finally here is KERNCONF: # Just some of them, see also # cat /sys/{i386,amd64,}/conf/NOTES # This one useful only on i386 #options KVA_PAGES=512 # You can play with HZ in environments with high interrupt rate (default is 1000) # 100 is for my notebook to prolong it's battery life #options HZ=100 # Polling is goot on network loads with high packet rates and low-end NICs # NB! Do not enable it if you want more than one netisr thread #options DEVICE_POLLING # Eliminate datacopy on socket read-write # To take advantage with zero copy sockets you should have an MTU >= 4k # This req. is only for receiving data. # Read more in man zero_copy_sockets # Also this epic thread on kernel trap: # http://kerneltrap.org/node/6506 # Here Linus says that "anybody that does it that way (FreeBSD) is totally incompetent" #options ZERO_COPY_SOCKETS # Support TCP sign. Used for IPSec options TCP_SIGNATURE # There was stackoverflow found in KAME IPSec stack: # See http://secunia.com/advisories/43995/ # For quick workaround you can use `ipfw add deny proto ipcomp` options IPSEC # This ones can be loaded as modules. They described in loader.conf section #options ACCEPT_FILTER_DATA #options ACCEPT_FILTER_HTTP # Adding ipfw, also can be loaded as modules options IPFIREWALL # On 8.1+ you can disable verbose to see blocked packets on ipfw0 interface. # Also there is no point in compiling verbose into the kernel, because # now there is net.inet.ip.fw.verbose tunable. #options IPFIREWALL_VERBOSE #options IPFIREWALL_VERBOSE_LIMIT=10 options IPFIREWALL_FORWARD # Adding kernel NAT options IPFIREWALL_NAT options LIBALIAS # Traffic shaping options DUMMYNET # Divert, i.e. for userspace NAT options IPDIVERT # This is for OpenBSD's pf firewall device pf device pflog # pf's QoS - ALTQ options ALTQ options ALTQ_CBQ # Class Bases Queuing (CBQ) options ALTQ_RED # Random Early Detection (RED) options ALTQ_RIO # RED In/Out options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC) options ALTQ_PRIQ # Priority Queuing (PRIQ) options ALTQ_NOPCC # Required for SMP build # Pretty console # Manual can be found here http://forums.freebsd.org/showthread.php?t=6134 #options VESA #options SC_PIXEL_MODE # Disable reboot on Ctrl Alt Del #options SC_DISABLE_REBOOT # Change normal|kernel messages color options SC_NORM_ATTR=(FG_GREEN|BG_BLACK) options SC_KERNEL_CONS_ATTR=(FG_YELLOW|BG_BLACK) # More scroll space options SC_HISTORY_SIZE=8192 # Adding hardware crypto device device crypto device cryptodev # Useful network interfaces device vlan device tap #Virtual Ethernet driver device gre #IP over IP tunneling device if_bridge #Bridge interface device pfsync #synchronization interface for PF device carp #Common Address Redundancy Protocol device enc #IPsec interface device lagg #Link aggregation interface device stf #IPv4-IPv6 port # Also for my notebook, but may be used with Opteron device amdtemp # Same for Intel processors device coretemp # man 4 cpuctl device cpuctl # CPU control pseudo-device # Support for ECMP. More than one route for destination # Works even with default route so one can use it as LB for two ISP # For now code is unstable and panics (panic: rtfree 2) on route deletions. 
#options RADIX_MPATH
# Multicast routing
#options MROUTING
#options PIM

# Debug & DTrace
options KDB            # Kernel debugger related code
options KDB_TRACE      # Print a stack trace for a panic
options KDTRACE_FRAME  # amd64-only(?)
options KDTRACE_HOOKS  # all architectures - enable general DTrace hooks
#options DDB
#options DDB_CTF       # all architectures - kernel ELF linker loads CTF data

# Adaptive spinning in lockmgr (8.x+)
# See http://www.mail-archive.com/[email protected]/msg10782.html
options ADAPTIVE_LOCKMGRS

# UTF-8 in console (8.x+)
#options TEKEN_UTF8

# FreeBSD 8.1+
# Deadlock resolver thread
# For additional information see http://www.mail-archive.com/[email protected]/msg18124.html
# (FYI: "resolution" is panic, so use with caution)
#options DEADLKRES

# Increase maximum size of Raw I/O and sendfile(2) readahead
#options MAXPHYS=(1024*1024)
#options MAXBSIZE=(1024*1024)

# For scheduler debugging enable the following option.
# Debug stats will be available via the `kern.sched.stats` sysctl
# For more information see http://svnweb.freebsd.org/base/head/sys/conf/NOTES?view=markup
#options SCHED_STATS

If you are tuning the network for maximum performance you may wish to play with ifconfig options like:

# You can list all capabilities via `ifconfig -m`
ifconfig [-]rxcsum [-]txcsum [-]tso [-]lro mtu

In case you've enabled DDB in the kernel config, you should edit /etc/ddb.conf and add something like this to enable automatic reboot (and a textdump as a bonus):

script kdb.enter.panic=textdump set; capture on; show pcpu; bt; ps; alltrace; capture off; call doadump; reset
script kdb.enter.default=textdump set; capture on; bt; ps; capture off; call doadump; reset

And do not forget to add ddb_enable="YES" to /etc/rc.conf

Since FreeBSD 9 you can enable/disable flow control on your NIC:

# See http://en.wikipedia.org/wiki/Ethernet_flow_control and
# http://www.mail-archive.com/[email protected]/msg07927.html for additional info
ifconfig bge0 media auto mediaopt flowcontrol

PS. Most of FreeBSD's limits can be monitored via
# vmstat -z
and
# limits

PPS. A variety of network counters can be monitored via
# netstat -s
In FreeBSD 9 netstat's -Q option appeared; try the following command to display netisr stats:
# netstat -Q

PPPS. Also see
# man 7 tuning

PPPPS. I wanted to thank the FreeBSD community, especially the author of nginx, Igor Sysoev, and the nginx-ru@ and FreeBSD-performance@ mailing lists, for providing useful information about FreeBSD tuning.

FreeBSD WIP
* What's cooking for FreeBSD 7?
* What's cooking for FreeBSD 8?
* What's cooking for FreeBSD 9?

So here is the question: what tunings are you using on your FreeBSD servers? You can also post your /etc/sysctl.conf, /boot/loader.conf, kernel options, etc. with a description of their meaning (do not copy-paste from sysctl -d). Don't forget to specify the server type (web, smb, gateway, etc.). Let's share experience!
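For reference, a minimal workflow sketch for trying any of the runtime sysctls above before persisting them (the OIDs and values are just the examples from the post; anything listed in the loader.conf section is a boot-time tunable and only takes effect after a reboot). Run as root:

# Check the current value and its description first
sysctl kern.ipc.somaxconn
sysctl -d kern.ipc.somaxconn

# Try the new value at runtime
sysctl kern.ipc.somaxconn=4096

# Persist it only once you are happy with the result
echo 'kern.ipc.somaxconn=4096' >> /etc/sysctl.conf

# Boot-only tunables go into /boot/loader.conf instead and need a reboot,
# e.g. the maxfiles setting mentioned above for older releases
echo 'kern.maxfiles="200000"' >> /boot/loader.conf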

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
Over the last few weeks I've been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content!

For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo:

FiddlerCore Download
FiddlerCore on NuGet
Show me the Code (WebSurge Integration code from GitHub)
Download the WinForms Sample Form
West Wind WebSurge (example implementation in live app)

Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details.

Integrating FiddlerCore
FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

PM> Install-Package FiddlerCore

The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post, but first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

Capturing HTTP Content
Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
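As a point of reference, here is a hedged sketch of what such a capture options object might look like. The property names are the ones the filter code below actually reads (IgnoreResources, ProcessId, CaptureDomain and the two exclusion lists); the class itself and the default exclusion values are my own illustration, not WebSurge's actual implementation:

using System.Collections.Generic;

public class UrlCaptureConfiguration
{
    // Skip static resources (images, css, scripts) when true
    public bool IgnoreResources { get; set; }

    // Only capture traffic from this Windows process id (0 = any process)
    public int ProcessId { get; set; }

    // Only capture traffic for this host name (empty = any host)
    public string CaptureDomain { get; set; }

    // URL fragments that mark a request as a static resource
    public List<string> ExtensionFilterExclusions { get; set; }

    // Additional URL fragments to ignore outright
    public List<string> UrlFilterExclusions { get; set; }

    public UrlCaptureConfiguration()
    {
        CaptureDomain = string.Empty;
        ExtensionFilterExclusions = new List<string> { ".png", ".jpg", ".gif", ".css", ".js" };
        UrlFilterExclusions = new List<string>();
    }
}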
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed and start up the Proxy service:void Start() { if (tbIgnoreResources.Checked) CaptureConfiguration.IgnoreResources = true; else CaptureConfiguration.IgnoreResources = false; string strProcId = txtProcessId.Text; if (strProcId.Contains('-')) strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim(); strProcId = strProcId.Trim(); int procId = 0; if (!string.IsNullOrEmpty(strProcId)) { if (!int.TryParse(strProcId, out procId)) procId = 0; } CaptureConfiguration.ProcessId = procId; CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text; FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete; FiddlerApplication.Startup(8888, true, true, true); } The key lines for FiddlerCore are just the last two lines of code that include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it’s the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look both at the request and response data, so this will be the most common place to hook into if you’re capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers and it looks something like this:private void FiddlerApplication_AfterSessionComplete(Session sess) { // Ignore HTTPS connect requests if (sess.RequestMethod == "CONNECT") return; if (CaptureConfiguration.ProcessId > 0) { if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId) return; } if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain)) { if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower()) return; } if (CaptureConfiguration.IgnoreResources) { string url = sess.fullUrl.ToLower(); var extensions = CaptureConfiguration.ExtensionFilterExclusions; foreach (var ext in extensions) { if (url.Contains(ext)) return; } var filters = CaptureConfiguration.UrlFilterExclusions; foreach (var urlFilter in filters) { if (url.Contains(urlFilter)) return; } } if (sess == null || sess.oRequest == null || sess.oRequest.headers == null) return; string headers = sess.oRequest.headers.ToString(); var reqBody = sess.GetRequestBodyAsString(); // if you wanted to capture the response //string respHeaders = session.oResponse.headers.ToString(); //var respBody = session.GetResponseBodyAsString(); // replace the HTTP line to inject full URL string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion; int at = headers.IndexOf("\r\n"); if (at < 0) return; headers = firstLine + "\r\n" + headers.Substring(at + 1); string output = headers + "\r\n" + (!string.IsNullOrEmpty(reqBody) ? 
reqBody + "\r\n" : string.Empty) + Separator + "\r\n\r\n"; BeginInvoke(new Action<string>((text) => { txtCapture.AppendText(text); UpdateButtonStatus(); }), output); }

The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature.

AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen.

Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a Session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done the user can either copy the raw HTTP session to the clipboard, or directly save it to a file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data.

While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy
Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

void Stop()
{
    FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

    if (FiddlerApplication.IsStarted())
        FiddlerApplication.Shutdown();
}

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
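For a condensed view of the same pattern outside of the WinForms sample, here is a minimal console sketch (my own illustration, not code from the article or WebSurge) that uses only the FiddlerCore calls shown above – hook AfterSessionComplete, call Startup(), and Shutdown() when done. It assumes the FiddlerCore package is referenced:

using System;
using Fiddler;

class MinimalCapture
{
    static void Main()
    {
        // Log the method and full URL of every completed request
        FiddlerApplication.AfterSessionComplete += (Session sess) =>
        {
            if (sess.RequestMethod == "CONNECT")
                return; // skip HTTPS tunnel setup requests

            Console.WriteLine("{0} {1}", sess.RequestMethod, sess.fullUrl);
        };

        // Same Startup() call as the article's Start() method (port 8888,
        // act as system proxy, decrypt SSL, allow remote clients)
        FiddlerApplication.Startup(8888, true, true, true);

        Console.WriteLine("Capturing... press Enter to stop.");
        Console.ReadLine();

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }
}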
Adding Fiddler Certificates with FiddlerCore One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring to have Fiddler installed and then using a separate application to configure the SSL functionality isn’t ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler Certificate directly using FiddlerCore. Why does Fiddler need a Certificate in the first Place? Fiddler and FiddlerCore are essentially HTTP proxies which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forward the HTTP data to the original client. Fiddler injects itself as the system proxy in using the WinInet Windows settings  which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system level Proxy settings before establishing new HTTP connections and that’s why most clients automatically work once Fiddler – or FiddlerCore/WebSurge are running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically but it could be any port). For SSL however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won’t be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence’s blog post on Certificates. If you’re using the desktop version of Fiddler you can install a local certificate into the Windows certificate store. Fiddler proper does this from the Options menu: This operation does several things: It installs the Fiddler Root Certificate It sets trust to this Root Certificate A new client certificate is generated for each HTTPS site monitored Certificate Installation with FiddlerCore You can also provide this same functionality using FiddlerCore which includes a CertMaker class. 
Using CertMaker is straight forward to use and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler Root certificate:public static bool InstallCertificate() { if (!CertMaker.rootCertExists()) { if (!CertMaker.createRootCert()) return false; if (!CertMaker.trustRootCert()) return false; } return true; } public static bool UninstallCertificate() { if (CertMaker.rootCertExists()) { if (!CertMaker.removeFiddlerGeneratedCerts(true)) return false; } return true; } InstallCertificate() works by first checking whether the root certificate is already installed and if it isn’t goes ahead and creates a new one. The process of creating the certificate is a two step process – first the actual certificate is created and then it’s moved into the certificate store to become trusted. I’m not sure why you’d ever split these operations up since a cert created without trust isn’t going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you’re about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root since you are essentially installing a man in the middle certificate. It’s quite safe to use this generated root certificate, because it’s been specifically generated for your machine and thus is not usable from external sources, the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there’s no useful way to hijack this certificate and use it for nefarious purposes (see Eric’s post for more details). Once the Root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore’s CertMaker.removeFiddlerGeneratedCerts() which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application’s operation by doing so as well. When to check for an installed Certificate Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further the trust operation pops up a message box so you probably don’t want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does, so in WebSurge I use a small drop down option on the menu to install or uninstall the SSL certificate:   This code calls the InstallCertificate and UnInstallCertificate functions respectively – the experience with this is similar to what you get in Fiddler with the extra dialog box popping up to prompt confirmation for installation of the root certificate. 
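As a rough sketch of how these helpers can slot into the capture startup path (my own arrangement of the methods already shown, not code from WebSurge): verify or install the root certificate first, then hook the event and start the proxy so HTTPS decryption works from the first request onward:

void StartWithSsl()
{
    // Fast no-op if the Fiddler root certificate already exists and is trusted
    if (!InstallCertificate())
        throw new InvalidOperationException("Could not create/trust the Fiddler root certificate.");

    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;

    // Same Startup() call as the article's Start() method
    FiddlerApplication.Startup(8888, true, true, true);
}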
Once the cert is installed you can then capture SSL requests. There's a gotcha however…

Gotcha: FiddlerCore Certificates don't stick by Default
When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and immediately after installation was able to capture HTTPS requests. Then I would exit the application, come back in and try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BcMakeCert create non-sticky Certificates
It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler Root certificate. FiddlerCore however installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences. A cache-aware version of InstallCertificate looks something like this:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;
        if (!CertMaker.trustRootCert())
            return false;

        App.Configuration.UrlCapture.Cert =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
        App.Configuration.UrlCapture.Key =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
    }
    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }
    App.Configuration.UrlCapture.Cert = null;
    App.Configuration.UrlCapture.Key = null;
    return true;
}

In this code I store the Fiddler cert and private key in application configuration settings that are stored with the application settings (the App.Configuration.UrlCapture object). These settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
I do this in the capture form’s constructor:public FiddlerCapture(StressTestForm form) { InitializeComponent(); CaptureConfiguration = App.Configuration.UrlCapture; MainForm = form; if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert)) { FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key); FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert); }} This is kind of a drag to do and not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore. MakeCert provides sticky Certificates and the same functionality as Fiddler But there’s actually an easier way. If you want to skip the above Fiddler preference configuration code in your application you can choose to distribute MakeCert.exe instead of certmaker.dll and bcmakecert.dll. When you use MakeCert.exe, the certificates settings are stored in Windows so they are available without any custom configuration inside of your application. It’s easier to integrate and as long as you run on Windows and you don’t need to support iOS or Android devices is simply easier to deal with. To integrate into your project, you can remove the reference to CertMaker.dll (and the BcMakeCert.dll assembly) from your project. Instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference in the project has been removed and on disk the files for Certmaker.dll, as well as the BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed FiddlerCore checks for it first before using the assemblies so as long as MakeCert.exe exists it’ll be used for certificate creation (at least on Windows). Summary FiddlerCore is a pretty sweet tool, and it’s absolutely awesome that we get to plug in most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up because it’s a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore’s documentation for more advanced features like certificate installation which is pretty sketchy. While for the most part FiddlerCore’s feature set is easy to work with without any documentation, advanced features are often not intuitive to gleam by just using Intellisense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn’t much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble the forum is probably the first place to look and then ask a question if you can’t find the answer. The best documentation you can find is Eric’s Fiddler Book which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler’s feature set as well as providing great insights into the HTTP protocol. 
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore, providing Eric the resources to support and improve this wonderful tool full time, and keeping it free for all. Kudos!

Resources
FiddlerCore Download
FiddlerCore NuGet
Fiddler Capture Sample Form
Fiddler Capture Form in West Wind WebSurge (GitHub)
Eric Lawrence's Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014
Posted in .NET  HTTP

    Read the article

  • MAMP + Python MySQLDB - trouble installing

    - by Frederico
    I'm currently running the latest version of MAMP on my Snow Leopard OSX, and I'm trying to install MySQLDB. Downloaded: MySQL-python-1.2.3c1 I went into the setup_posix.py and adjusted the location of the mysql_config to the one in MAMP: mysql_config.path = "/Applications/MAMP/Library/bin/mysql_config" When trying to build I get the error below. Could anyone give me a hand please: creating build/temp.macosx-10.6-universal-2.6 gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -Dversion_info=(1,2,3,'gamma',1) -D_version_=1.2.3c1 -I/Applications/MAMP/Library/include/mysql -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.6-universal-2.6/_mysql.o -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL _mysql.c:36:23: error: my_config.h: No such file or directory _mysql.c:38:19: error: mysql.h: No such file or directory _mysql.c:39:26: error: mysqld_error.h: No such file or directory _mysql.c:40:20: error: errmsg.h: No such file or directory _mysql.c:76: error: expected specifier-qualifier-list before ‘MYSQL’ _mysql.c:90: error: expected specifier-qualifier-list before ‘MYSQL_RES’ _mysql.c: In function ‘_mysql_Exception’: _mysql.c:120: warning: implicit declaration of function ‘mysql_errno’ _mysql.c:120: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:123: error: ‘CR_MAX_ERROR’ undeclared (first use in this function) _mysql.c:123: error: (Each undeclared identifier is reported only once _mysql.c:123: error: for each function it appears in.) _mysql.c:131: error: ‘CR_COMMANDS_OUT_OF_SYNC’ undeclared (first use in this function) _mysql.c:132: error: ‘ER_DB_CREATE_EXISTS’ undeclared (first use in this function) _mysql.c:133: error: ‘ER_SYNTAX_ERROR’ undeclared (first use in this function) _mysql.c:134: error: ‘ER_PARSE_ERROR’ undeclared (first use in this function) _mysql.c:135: error: ‘ER_NO_SUCH_TABLE’ undeclared (first use in this function) _mysql.c:136: error: ‘ER_WRONG_DB_NAME’ undeclared (first use in this function) _mysql.c:137: error: ‘ER_WRONG_TABLE_NAME’ undeclared (first use in this function) _mysql.c:138: error: ‘ER_FIELD_SPECIFIED_TWICE’ undeclared (first use in this function) _mysql.c:139: error: ‘ER_INVALID_GROUP_FUNC_USE’ undeclared (first use in this function) _mysql.c:140: error: ‘ER_UNSUPPORTED_EXTENSION’ undeclared (first use in this function) _mysql.c:141: error: ‘ER_TABLE_MUST_HAVE_COLUMNS’ undeclared (first use in this function) _mysql.c:170: error: ‘ER_DUP_ENTRY’ undeclared (first use in this function) _mysql.c:213: warning: implicit declaration of function ‘mysql_error’ _mysql.c:213: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:213: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_server_init’: _mysql.c:308: warning: label ‘finish’ defined but not used _mysql.c:234: warning: unused variable ‘item’ _mysql.c:233: warning: unused variable ‘groupc’ _mysql.c:233: warning: unused variable ‘i’ _mysql.c:233: warning: unused variable ‘cmd_argc’ _mysql.c:232: warning: unused variable ‘s’ _mysql.c: In function ‘_mysql_ResultObject_Initialize’: _mysql.c:363: error: ‘MYSQL_RES’ undeclared (first use in this function) _mysql.c:363: error: ‘result’ undeclared (first use in this function) 
_mysql.c:368: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:368: error: ‘fields’ undeclared (first use in this function) _mysql.c:377: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:380: warning: implicit declaration of function ‘mysql_use_result’ _mysql.c:380: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:382: warning: implicit declaration of function ‘mysql_store_result’ _mysql.c:382: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:383: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:386: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:389: warning: implicit declaration of function ‘mysql_num_fields’ _mysql.c:390: error: ‘_mysql_ResultObject’ has no member named ‘nfields’ _mysql.c:391: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:392: warning: implicit declaration of function ‘mysql_fetch_fields’ _mysql.c:438: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_traverse’: _mysql.c:450: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:451: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_clear’: _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:462: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:463: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_Initialize’: _mysql.c:475: error: ‘MYSQL’ undeclared (first use in this function) _mysql.c:475: error: ‘conn’ undeclared (first use in this function) _mysql.c:500: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:501: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:525: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:547: warning: implicit declaration of function ‘mysql_init’ _mysql.c:547: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: warning: implicit declaration of function ‘mysql_options’ _mysql.c:550: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:550: error: ‘MYSQL_OPT_CONNECT_TIMEOUT’ undeclared (first use in this function) _mysql.c:554: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:554: error: ‘MYSQL_OPT_COMPRESS’ undeclared (first use in this function) _mysql.c:555: error: ‘CLIENT_COMPRESS’ undeclared (first use in this function) _mysql.c:558: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:558: error: ‘MYSQL_OPT_NAMED_PIPE’ undeclared (first use in this function) _mysql.c:560: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:560: error: ‘MYSQL_INIT_COMMAND’ undeclared (first use in this function) _mysql.c:562: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:562: error: ‘MYSQL_READ_DEFAULT_FILE’ undeclared (first use in this function) _mysql.c:564: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:564: error: ‘MYSQL_READ_DEFAULT_GROUP’ undeclared (first use in this function) _mysql.c:567: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:567: error: ‘MYSQL_OPT_LOCAL_INFILE’ undeclared (first use in this 
function) _mysql.c:575: warning: implicit declaration of function ‘mysql_real_connect’ _mysql.c:575: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:590: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_traverse’: _mysql.c:671: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:672: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_clear’: _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:680: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c:681: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ConnectionObject_close’: _mysql.c:696: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:698: warning: implicit declaration of function ‘mysql_close’ _mysql.c:698: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:700: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c: In function ‘_mysql_ConnectionObject_affected_rows’: _mysql.c:722: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:723: warning: implicit declaration of function ‘mysql_affected_rows’ _mysql.c:723: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_debug’: _mysql.c:739: warning: implicit declaration of function ‘mysql_debug’ _mysql.c: In function ‘_mysql_ConnectionObject_dump_debug_info’: _mysql.c:757: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:759: warning: implicit declaration of function ‘mysql_dump_debug_info’ _mysql.c:759: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_autocommit’: _mysql.c:783: warning: implicit declaration of function ‘mysql_query’ _mysql.c:783: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_commit’: _mysql.c:806: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_rollback’: _mysql.c:828: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_errno’: _mysql.c:940: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:941: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c: In function ‘_mysql_ConnectionObject_error’: _mysql.c:956: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:957: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:957: warning: passing argument 1 of ‘PyString_FromString’ makes pointer from integer without a cast _mysql.c: In function ‘_mysql_escape_string’: _mysql.c:981: warning: implicit declaration of function ‘mysql_escape_string’ _mysql.c: In function ‘_mysql_escape’: _mysql.c:1088: error: ‘_mysql_ConnectionObject’ has no member named ‘converter’ _mysql.c: In function ‘_mysql_ResultObject_describe’: _mysql.c:1168: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1168: error: ‘fields’ undeclared (first use in this function) _mysql.c:1171: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1172: error: ‘_mysql_ResultObject’ has no member 
named ‘result’ _mysql.c:1173: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1184: warning: implicit declaration of function ‘IS_NOT_NULL’ _mysql.c: In function ‘_mysql_ResultObject_field_flags’: _mysql.c:1204: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1204: error: ‘fields’ undeclared (first use in this function) _mysql.c:1207: error: ‘_mysql_ConnectionObject’ has no member named ‘open’ _mysql.c:1208: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1209: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c: At top level: _mysql.c:1250: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_tuple’: _mysql.c:1256: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1258: warning: implicit declaration of function ‘mysql_fetch_lengths’ _mysql.c:1258: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1258: warning: assignment makes pointer from integer without a cast _mysql.c:1261: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1262: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1275: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_dict’: _mysql.c:1280: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1280: error: ‘fields’ undeclared (first use in this function) _mysql.c:1282: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1284: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1284: warning: assignment makes pointer from integer without a cast _mysql.c:1285: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1288: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1289: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1314: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘_mysql_row_to_dict_old’: _mysql.c:1319: error: ‘MYSQL_FIELD’ undeclared (first use in this function) _mysql.c:1319: error: ‘fields’ undeclared (first use in this function) _mysql.c:1321: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1323: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1323: warning: assignment makes pointer from integer without a cast _mysql.c:1324: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1327: error: ‘_mysql_ResultObject’ has no member named ‘converter’ _mysql.c:1328: error: ‘row’ undeclared (first use in this function) _mysql.c: At top level: _mysql.c:1350: error: expected declaration specifiers or ‘...’ before ‘MYSQL_ROW’ _mysql.c: In function ‘mysql_fetch_row’: _mysql.c:1361: error: ‘MYSQL_ROW’ undeclared (first use in this function) _mysql.c:1361: error: expected ‘;’ before ‘row’ _mysql.c:1365: error: ‘_mysql_ResultObject’ has no member named ‘use’ _mysql.c:1366: error: ‘row’ undeclared (first use in this function) _mysql.c:1366: warning: implicit declaration of function ‘mysql_fetch_row’ _mysql.c:1366: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1369: error: ‘_mysql_ResultObject’ has no member named ‘result’ _mysql.c:1372: error: ‘_mysql_ConnectionObject’ has no member named ‘connection’ _mysql.c:1380: error: too many arguments to function ‘convert_row’ _mysql.c: In function ‘_mysql_ResultObject_fetch_row’: _mysql.c:1404: error: expected declaration specifiers or ‘...’ 
before ‘MYSQL_ROW’
_mysql.c:1419: error: ‘_mysql_ConnectionObject’ has no member named ‘open’
_mysql.c:1431: error: ‘_mysql_ResultObject’ has no member named ‘use’
_mysql.c:1445: warning: implicit declaration of function ‘mysql_num_rows’
_mysql.c:1445: error: ‘_mysql_ResultObject’ has no member named ‘result’
[... the log continues in the same pattern for every _mysql_ConnectionObject_* and _mysql_ResultObject_* function: “has no member named ‘open’/‘connection’/‘result’/‘converter’”, “implicit declaration of function ‘mysql_*’”, and “‘my_ulonglong’ / ‘MYSQL_ROW_OFFSET’ undeclared (first use in this function)” ...]
_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:38:19: error: mysql.h: No such file or directory
_mysql.c:39:26: error: mysqld_error.h: No such file or directory
_mysql.c:40:20: error: errmsg.h: No such file or directory
_mysql.c:76: error: expected specifier-qualifier-list before ‘MYSQL’
_mysql.c:90: error: expected specifier-qualifier-list before ‘MYSQL_RES’
_mysql.c:123: error: ‘CR_MAX_ERROR’ undeclared (first use in this function)
_mysql.c:475: error: ‘MYSQL’ undeclared (first use in this function)
[... the remaining errors are all consequences of the four missing MySQL client development headers above: with my_config.h, mysql.h, mysqld_error.h and errmsg.h not found, the MYSQL/MYSQL_RES/MYSQL_ROW and my_ulonglong types and the CR_*/ER_* error constants are undeclared, so every structure member access and every mysql_* client call in _mysql.c fails to compile ...]

    Read the article

  • SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    - by mseika
SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

Dear SOA partner community member,

To give our community members the best of our knowledge, we need your feedback on the SOA Partner Community. We are therefore organizing the SOA Partner Community Survey 2012 and ask you to participate and give us your feedback on the various areas of marketing, sales and education. To build on the success of BPM Suite, Oracle is launching the Process Accelerators initiative together with you - your opportunity to co-develop and market predefined processes. The Oracle Fusion Applications Design Patterns are a great tool for developing your SOA or BPM solutions and process accelerators. To promote your SOA & BPM Specialization we continue to offer several benefits; this month we would like to highlight our Specialization Plaques - make sure you request one for your office! Our Fusion Middleware Summer Camps are booked out; if you could not get a seat, you can attend the SOA & BPM track at the Virtual Developer Day: Oracle Fusion Development. The Oracle demo systems offer two new demos: Business Driven Development based on BPM Suite, and SOA Lifecycle Management.

Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

NEW CONTENT: Community Survey | Process Accelerators Kit | Plaques SOA & BPM Specialized | SOA & BPM at Virtual Developer Day | News from our Partners & Community | Overview of SOA Diagnostics in 11.1.1.6 | Business Driven Development (BDD) demo now available! | SOA Lifecycle Management | Oracle Fusion Applications design patterns | Updated material by Oracle

Connect and Network: SOA Blogs | SOA on Facebook | SOA on LinkedIn | SOA on Twitter | Mix | SOA Forum

COMMUNITY SURVEY
Like every year we would like to get your feedback in our SOA Partner Community Survey 2012. Please make sure you take part, so we can further develop our community and support our planning - your feedback is key to preparing for the next fiscal year.

Back to top

PROCESS ACCELERATORS KIT
Oracle is very interested in co-developing and marketing pre-defined processes for BPM Suite with you, our partners. I am very happy to announce a new program called “Oracle BPM Partner Solution Catalog”. This program will provide a one-stop shop for our customers looking for Oracle BPM partner solutions available in the market today. The Oracle BPM Solution Catalog will be hosted on our very popular Oracle Technology Network (OTN). To give you an idea of the scale of customer visibility, OTN today receives over 1 million hits per day from our business and developer community. We would like to invite you to list your Oracle BPM 11g solutions available today. In order to participate in this program, you need to do the following: Fill in the attached slide templates - #3 and #4 - for each Oracle BPM 11g solution you would like to list on OTN. Please add links to whitepapers, videos and references for the specific solution in the template slide. We recommend that you create a landing page on your website for these linked artifacts and just point to it from within the PowerPoint template; this will give you the flexibility to update the information as frequently as needed. If you have the particular solution in production or a reference available, please list them as well. Send the PowerPoint template slides (one set of slides for each Oracle BPM solution) to [email protected]. In addition to having the opportunity to list your solutions on OTN for Oracle customers, you will have the chance to advertise your new wins/implementations/solutions in an Oracle-sponsored PM Webinar held every quarter.
This program is targeted to go live by the end of summer 2012. At this point we are targeting a soft launch at the end of July 2012, so send in your BPM solution information as soon as possible. We would love to have your solution(s) listed in the “Oracle BPM Partner Solution Catalog” at the time of the launch. This will be a live repository, so you can keep adding more solutions as they become available. If you have any questions, please feel free to contact us at [email protected], Product Strategy Director, Oracle BPM, Phone +1 650.506.5486. Thank you - we look forward to hearing from you. Oracle BPM team

Process Accelerators Overview.pdf | ProcessAcceleratorsDataSheet.pdf | Demos draUPK.zip & trmUPK.zip | BPM Solution repository slides.ppt

Additional BPM material: BPM Process Development Lifecycle - a document that describes the recommended approach to collaborative process modeling across business and IT tools | ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) by Andrejus Baranovskis | BPMN process editor problems in 11.1.1.6 by Mark Nelson | BPM – Disable DBMS job to refresh B2B Materialized View by Mark Nelson

For the complete kit please visit the BPM folder at our SOA Community Workspace (SOA Community membership required). For the complete presentation please visit our SOA Community Workspace (SOA Community membership required). Information is Oracle and Partner confidential!

Back to top

PLAQUES SOA & BPM SPECIALIZED
We continue to offer you a nice SOA & BPM Specialization plaque with your logo to prove your success. If you are a SOA or BPM Specialized partner and would like to request the plaque, please send Brigitte an e-mail with the following information: Partner Name, Partner logo (preferably an eps file), Partner Status (gold or platinum), your shipping address, and your Specialization (SOA or BPM). We recommend mounting the plaque at your office reception; in addition, you can use the SOA Specialization logos on your website - download logo: Gold & Platinum, or the BPM logos Gold & Platinum.

Back to top

SOA & BPM AT VIRTUAL DEVELOPER DAY
Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF’s integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development including: Is Oracle ADF development faster and simpler than Forms, Apex or .Net? Mobile Application Development with ADF Mobile; Oracle ADF development with Eclipse; Oracle WebCenter Portal and ADF Development; Application Lifecycle Management with ADF; Building Process Centric Applications with ADF and BPM; Oracle Business Intelligence and ADF Integration; live Q&A chats with Oracle technical staff. Developer lead, manager or architect - this event has something for everyone. Don’t miss this opportunity. Tuesday, July 10, 2012: 9:00 a.m. PT - 1:00 p.m. PT / 11:00 a.m. CT - 3:00 p.m. CT / 12:00 p.m. ET - 4:00 p.m. ET / 1:00 p.m. BRT - 5:00 p.m. BRT. Register online now for this FREE event!
Agenda: 09:00 am Opening 09:30 am Keynote: Oracle Fusion Development Track1Introduction to Fusion Development Track2What's New in Fusion Development Track3Fusion Development in the Enterprise 10:00 am Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? Mobile Application Development with ADF Mobile Oracle WebCenter Portal and ADF Development 11:00 am Rich Web UI made simple - an ADF Faces Overview Oracle Enterprise Pack for Eclipse - ADF Development Building Process Centric Applications with ADF and BPM 12:00 noon Next Generation Controller for JSF Application Lifecycle Management for ADF Oracle Business Intelligence and ADF Integration *Hands On Lab – WebCenter and ADF Lab w/ JDeveloper - Lab materials will be provided ahead of the event to give you ample time to work through the lab and increase the productivity of the live chat sessions the day of the event. Sessions abstractsRegister online now! for this FREE event Read more on Community Events and post your comment here. Back to top NEWS FROM OUR PARTNERS AND COMMUNITY Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity JDeveloper & ADF?Troubleshooting BPMN process editor problems in 11.1.1.6http://dlvr.it/1p0FfS SOA Community?SOA & BPM @ Virtual Developer Day: Oracle Fusion Development - July 10th 2012https://soacommunity.wordpress.com/2012/07/02/soa-bpm-virtual-developer-day- oracle-fusion-developmentjuly-10th-2012/#soacommunity #soa #bom #education orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove.http://ow.ly/1kYqES SOA CommunitySOA Community Newsletter June 2012http://wp.me/p10C8u-qw SOA CommunityBPMN process editor problems in 11.1.1.6 by Mark Nelsonhttp://redstack.wordpress.com/2012/06/27/ bpmn-process-editor-problems-in-11-1-1-6 #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization)http://fb.me/1coX4r1X1 SOA CommunitySOA Learning Library provides a comprehensive curriculum for the SOA and BPM product suites https://soacommunity.wordpress.com/2012/06/27/soa-learning-library #soacommunity #soa #bpm OTNArchBeat ?A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koserhttp://pub.vitrue.com/mQVZ OTNArchBeat ?BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0Oracle SOA ?Learn how Choice Hotels Implements Innovative Google Maps Solution with #OracleSOA http://bit.ly/MTwIJ3 SOA Communitytop Tweets SOA Partner Community - June 2012 Send your tweets @soacommunity #soacommunity https://soacommunity.wordpress.com/2012/06/25/top-tweets-soa-partner-community-june-2012 Torsten Winterberg#OPITZ is pushing Oracle commitment to the next level: New Specializations done: ADF, BPM, WLS, Exadatahttp://bit.ly/KX1WVS ServiceTechSymposium ?Only 8 more days left until Super Early Bird Registration Discount expires! http://www.servicetechsymposium.com OracleBlogsSOA Management in 3 minutes - Video explainerhttp://ow.ly/1kN5pn SOA Community ?SOA, Cloud & Service Technology Symposium 2012 London - Enter Promo Code: Djmxz370https://soacommunity.wordpress.com/2012/06/22/soa-cloud-service-technology-symposium-2012-london #soasymposium #soacommunity #soa Heidi BuelowGreat course! 
w David Read RT @soacommunity: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity SOA Community ?product management ADF for BPM training 5 seats lefthttps://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity OTNArchBeat ?Oacle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broinhttp://pub.vitrue.com/UEiF OTNArchBeat ?SOA, Cloud & Service Technology Symposium 2012London - Special Oracle Discounthttp://pub.vitrue.com/8E0J SOA CommunityBecome a facebook fan of soacommunity http://www.facebook.com/soacommunity #soacommunity SOA Community ?SOA Suite HealthCare Integration Architecture https://blogs.oracle.com/SOAForHealthcare/entry/soa_suite_healthcare_integration_architecture #soacommunity #soa Andrejus Baranovskis ?Running Pre-built Virtual Machine for SOA Suite and BPM Suite 11g PS5 on Mac OS X Snow Leopard (10.6http://fb.me/vB8nO0Vg OracleBlogsPrinciples of Service-Oriented Architecture by Douwe P. van den Bos http://ow.ly/1kIcOP OTNArchBeatOracle Public Cloud Architecture | @TylerJewell http://ow.ly/bHAcL The SOA Network ?Business Process Management, Service-Oriented Architecture, and Web 2.0: Business Transformation or.http://bit.ly/LBgREL #ITNews #SOA OracleBlogs ?Oracle SOA Foundation Practitioner Certificationhttp://ow.ly/1kGYYg Frank Nimphius ?Learn Advanced ADF. ORACLE Fusion Middleware Summer Camps in Lisbon - July 9th - 13thhttp://bit.ly/KGCl3i SOA CommunityTransform Your Application Integration with Best Practices from Oracle Customershttps://blogs.oracle.com/SOA/entry/transform_your_application_integration_with #soacommunity #soa #bpm Simone GeibWhat you always wanted to know about #oraclesoa diagnostics: Shawn Bailey, Overview of SOA Diagnostics in 11.1.1.6,http://ow.ly/bxK0M Oracle SOA ?Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LsNDGl OTNArchBeat ?New VirtualBox images for Oracle SOA Suite & Oracle BPM Suite 11.1.1.6.0http://ow.ly/bwDAl OracleBlogs ?Process development lifecycle in Oracle BPM 11g http://ow.ly/1ktesY Daniel AmadeiNew post: Oracle BPEL 11g Message Delivery & Recovery.http://amadei.com.br/blog/index.php /oracle-bpel-11g-message-delivery SOA Community ?Sending out the June edition of the #soacommunity newsletter - read it or become a member http://www.oracle.com/goto/emea/soa!#soa #bpm Arun Pareek ?For the past six months Ahmed Aboulnaga and me have been working on Oracle SOA Suite 11g Administrator's Handbook.http://lnkd.in/CAvpUQ SOA CommunitySun shine all day no clouds - solar eclipse is over... #sunshine #cloud http://www.infoq.com/presentations/Swarm-Computing Michel SchildmeijerWatch my blog Oracle Service Bus 11g: listing projects and services with WLST - part 1 http://lnkd.in/B7f3GQ @TITAN_GS @wlscommunity OTNArchBeatBook Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Rahejahttp://ow.ly/bn2cc OTNArchBeat ?Driving from Business Architecture to Business Process Services | @vghariharan http://ow.ly/bn5UB OTNArchBeat ?SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu http://ow.ly/bn6sX Simone Geib ?Contact me directly for ideas how to improvehttp://bit.ly/advancedsoasuite and additional posts, presentations, white papers, ... 
#soasuite Simone Geib ?#soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite OracleBlogs ?June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA ServiceTechSymposium ?New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG ://ow.ly/bjjOeDebra Lilley ?looks good - real proof people are using the apps ! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps demed ?rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/ SOA CommunityMiddleware Oracle Excellence Awards 2012-HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle SOA CommunityHappy New Year #soacommunity thanks for the business! Time for a drink http://pic.twitter.com/zkK08KWB OTNArchBeat ?Who should ‘own’ the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q SOA Communitytop Tweets SOA Partner Community &ndash; May 2012 http://wp.me/p10C8u-pP ServiceTechSymposiumNew session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php #elastic_soa_in_the_cloud orclateamsoa ?A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11Ghttp://ow.ly/1k2cnl ServiceTechSymposium ?New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Linkhttp://www.servicetechsymposium.com/agenda2012.php#soa_governance_at_edp SOA Community ?VirtualBox image SOA Suite & BPM Suite 11.1.1.6.0&ndash;Your feedback?http://wp.me/p10C8u-qh Oracle MiddlewareSave the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to#oraclesoa http://bit.ly/LU1y5N OTNArchBeat ?Goodbye, Silos. Hello SOA. | @stephanieoverbyhttp://pub.vitrue.com/NJJO SOA CommunityBPM Standard Edition - to start your BPM project http://wp.me/p10C8u-qj Please feel free to send us your news! And add your blog to our SOA blog wiki. Back to top OVERVIEW OF SOA DIAGNOSTICS IN 11.1.1.6 What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. 
WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the below sections. In this post when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. Read the full blog post here. Read more on Oracle and post your comment here.

Back to top

BUSINESS DRIVEN DEVELOPMENT (BDD) DEMO NOW AVAILABLE!
For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. DSS is pleased to announce the availability of the demo “Business Driven Development”. This innovative demonstration uses a case-study approach to show business users how they can easily streamline their business processes - delivering greater efficiency, agility, visibility and collaboration with Oracle BPM and WebCenter. The BDD demonstration uses a case study-based approach to highlight a business problem at a fictional company, Avitek Corporation, and uses Oracle BPM and Oracle WebCenter to solve the business problem. This holistic approach has specifically been used to appeal to a non-technical business analyst user. This demo is NOT focused on product features, but aims to guide users through a complete BPM lifecycle. The scenario is based on improving a simple order process (scenario details are in the demo script). Avitek Corporation is suffering from a manual, email-driven ordering process. Sales reps don’t know where the customer orders are stuck (no visibility) and finance users are unable to manually approve every order (no automation). There are several areas where this process can be improved with Business Process Management technology. This demo shows how improving the following areas will significantly help resolve the business problems Avitek Corporation is facing. Areas for improvement include: utilizing BPM for process management, rather than an unregulated, email-based process; utilizing automated services, rather than requiring a human to key into a system - for example, Finance checking the customer’s credit rating is something that could be automated; centralizing business rules that can be integrated into a business process, rather than requiring a human to process them - for example, Finance must determine when orders can be automatically approved; and providing insight and visibility into the process - for example, a sales rep needs to know the status of their customer’s orders. The BDD Demo uses the following products: Oracle BPM Suite 11g PS4FP, Oracle WebCenter 11g PS4FP (for Process Spaces), Oracle Business Activity Monitoring 11g, Oracle Database 11g.

Back to top

SOA LIFECYCLE MANAGEMENT
For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the SOA Management demo that showcases some of the key provisioning and lifecycle management capabilities of SOA Management Pack Enterprise Edition (EE). This demo specifically focuses on some of the lifecycle management solutions for Oracle SOA Suite and Oracle Service Bus (OSB). Demo Highlights - the demo showcases the following capabilities: provisioning of SOA Composites; provisioning of OSB Projects; provisioning of SOA and OSB artifacts in a future maintenance window.

Back to top

ORACLE FUSION APPLICATIONS DESIGN PATTERNS
The Oracle Fusion Applications user experience design patterns are published!
These new, reusable usability solutions and best-practices, which will join the Oracle dashboard patterns and guidelines that are already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now, the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when: tailoring an Oracle Fusion application, creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or designing exciting, new, highly usable applications in the cloud or on-premise. Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What’s the best way to get started? We’ve made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you’re on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments! Read more on Fusionapps and post your comment here. Back to top UPDATED ORACLE MATERIAL Integrated SOA Gateway Documentation - Implementation Guide | Developer’s Guide Webcast Series: Oracle’s SOA and Oracle Business Process Management Solutions (Choice Hotels, Eaton, Farmers Insurance) BAM design pointers By Kavitha Srinivasan Seeking Oracle Fusion Middleware Go Live StoriesOracle Fusion Middleware product management is looking for recent go live stories to share with the Oracle sales team, sales consulting, product management and other internal groups. Customer contact details may remain anonymous. Your successful implementation will be featured in a quarterly report. The chance to present on an internal webcast is also available. Contact Maria Forney ([email protected]) if you have a noteworthy implementation success story. This is a good opportunity for partners interested in showcasing Oracle Fusion Middleware implementations, and gaining more exposure within Oracle. Performance tuning resources. All in one: docs, blogs, WPs, ppts: http://bit.ly/soa_resources Back to top HAVE YOU MISSED OUR LAST SOA PARTNER COMMUNITY WEBCASTS? UPK Webcast Business Driven Application Management & BPM11g & Application Grid & GoldenGate & Fusion Middleware Pricing & OC4J to WebLogic & Next Generation SOA & Fusion Middleware in Utility & Fusion Middleware in Communications & Fusion Middleware in Public Services & Fusion Middleware in Financial Services Please check your local OPN trainings calendar for additional training dates and locations. 
Back to top

SOA PARTNER COMMUNITY CALENDAR

On-Demand Trainings (Event Name | Language | Type):
SOA Virtual Developers Day | English | Tech

In-Class Trainings (Date | Event name | Location / Country | Contact person | Type):
09-13.07.2012 | BPM Suite 11g advanced training by David Read | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | ADF 11g advanced training by Grant Ronald and Frank Nimphius | Lisbon, Portugal | Jürgen Kress | Tech
09-13.07.2012 | WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata | Lisbon, Portugal | Jürgen Kress | Tech
10.07.2012 | Fusion Middleware Virtual Developer Day | Online | OTN | Tech
10-12.07.2012 | WebLogic 12c training by Cosmin Tudor | Lisbon, Portugal | Jürgen Kress | Tech
16-18.07.2012 | SOA Suite 11g advanced training by Niall Commiskey | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | ADF for BPM Suite 11g advanced training by David Read | Munich, Germany | Jürgen Kress | Tech
16-18.07.2012 | WebCenter Sites 11g advanced training by Product Management | Munich, Germany | Jürgen Kress | Tech
17-20.07.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
23-26.07.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
29-31.08.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
02-05.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
28-30.11.2012 | Oracle AIA 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
11-14.12.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
20-22.2.2013 | Oracle AIA 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
14-17.1.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
15-18.3.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech

Please check your local OPN Training Calendar for additional training and locations here.

Back to top

SOASCHOOL.COM - SOA CERTIFIED PROFESSIONAL (SOACP) PROGRAM
The SOASchool.com - SOA Certified Professional (SOACP) program is dedicated to excellence in the field of SOA and service-oriented computing. Through a series of seasoned course modules and exams, IT professionals have the opportunity to obtain a number of different certifications to recognize their accomplishment of gaining "project ready" SOA proficiency. This comprehensive and strictly vendor-neutral program was developed in cooperation with best-selling SOA author Thomas Erl and several major SOA organizations and academic institutions. Through the involvement of the SOA Education Committee, course contents and certification requirements are constantly reviewed and revised to stay current with developments in the service-oriented computing industry. The program is currently comprised of 12 course modules and 5 certifications and is expanding to 18 course modules and 8 certifications throughout 2009. For more information, visit www.soaschool.com and www.soacp.com. Blog Twitter LinkedIn Mix Forum Wiki

Back to top

YOUR CONTENT ON THE NEWSLETTER AND ON THE SOA COMMUNITY PORTAL
Publishing Your Stories: We would like to invite our partners to publish information in the newsletter or on our SOA Community portal. Especially we are looking for your real life experience with our SOA technology. Please send your documents to Jürgen Kress. We look forward to getting your suggestions!
Back to top SOA DISCUSSION FORUM BECOMES INTERACTIVE AT THE SOA COMMUNITY! Do you want to chat to experts, including partners and Oracle SOA Product Development? Do you want to get the latest information about our SOA solutions and events?Attend our private online SOA Discussion Forum at OTN. Please send your OTN forums user name to Brigitte Felisaz. You must be a registered user to access the SOA Discussion Forum. Back to top INVITE YOUR COLLEAGUES TO JOIN THE SOA COMMUNITY Please feel free to invite your colleagues to join the SOA Community and to participate in the SOA Assessment tests. For registration please login the Oracle PartnerNetwork and go to: www.oracle.com/goto/emea/soa For any questions on the above or concerning SOA and Oracle in general please contact the Oracle EMEA Alliances & Channels SOA Team. Best regardsOracle EMEA SOA TeamJürgen Kress Jürgen KressSOA Partner Adoption EMEATel. +49 89 1430 1479E-Mail: [email protected]

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work; however, the proper interfaces to make that happen are not easy to discover and for the most part undocumented, unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:

<soapenv:Header>
 <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
  <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
   <wsse:Username>TeStUsErNaMe1</wsse:Username>
   <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">TeStPaSsWoRd1</wsse:Password>
   <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce>
   <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created>
  </wsse:UsernameToken>
 </wsse:Security>
</soapenv:Header>

Specifically, the Nonce and Created keys are what WCF doesn't create or have built-in formatting for.

Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check for before running another request, thus ensuring that a request is not replayed with exactly the same values.

Basic ServiceUtl Import - not much Luck
The first thing I did was to simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security to the basicHttpBinding in the config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
 <system.serviceModel>
  <bindings>
   <basicHttpBinding>
    <binding name="RealTimeOnlineSoapBinding">
     <security mode="Transport" />
    </binding>
    <binding name="RealTimeOnlineSoapBinding1" />
   </basicHttpBinding>
  </bindings>
  <client>
   <endpoint address="https://notarealurl.com:443/services/RealTimeOnline"
             binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding"
             contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" />
  </client>
 </system.serviceModel>
</configuration>

If I run this as is, using code like this:

var client = new RealTimeOnlineClient();
client.ClientCredentials.UserName.UserName = "TheUsername";
client.ClientCredentials.UserName.Password = "ThePassword";

… I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport level security to be applied, rather than message level security.
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that's missing the timestamp field which can cause some services problems. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately. 
This protocol does allow for optional omission of the Nonce and Created timestamp (which effectively makes those keys optional). For some services I tried that requested a Nonce, just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for accessing some services. Unfortunately for my target service that was not an option - the nonce has to be there.

Creating Custom ClientCredentials
As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, and as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively.

In order for this to work a number of classes have to be overridden:

ClientCredentials
ClientCredentialsSecurityTokenManager
WSSecurityTokenSerializer

The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and the token serializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
Here are the three classes required and their full implementations:public class CustomCredentials : ClientCredentials { public CustomCredentials() { } protected CustomCredentials(CustomCredentials cc) : base(cc) { } public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager() { return new CustomSecurityTokenManager(this); } protected override ClientCredentials CloneCore() { return new CustomCredentials(this); } } public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager { public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { } public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version) { return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11); } } public class CustomTokenSerializer : WSSecurityTokenSerializer { public CustomTokenSerializer(SecurityVersion sv) : base(sv) { } protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); // in this case password is plain text // for digest mode password needs to be encoded as: // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password)) // and profile needs to change to //string password = GetSHA1String(nonce + createdStr + userToken.Password); string password = userToken.Password; writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } protected string GetSHA1String(string phrase) { SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider(); byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase)); return Convert.ToBase64String(hashedDataBytes); } } Realistically only the CustomTokenSerializer has any significant code in. The code there deals with actually serializing the custom credentials using low level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated to say the least IMHO. 
To satisfy even that aspect though I added the SHA1 encryption and binary decoding to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between - but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password> <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce> <u:Created>2012-11-23T07:10:04.670Z</u:Created> </o:UsernameToken> </o:Security> </s:Header> which is exactly as it should be. Password Digest? In my case the password is passed in plain text over an SSL connection, so there's no digest required so I was done with the code above. Since I don't have a service handy that requires a password digest,  I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password things are a little bit trickier. The password type namespace needs to change to: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest and then the password value needs to be encoded. The format for password digest encoding is this: Base64(SHA-1(Nonce + Created + Password)) and it can be handled in the code above with this code (that's commented in the snippet above): string password = GetSHA1String(nonce + createdStr + userToken.Password); The entire WriteTokenCore method for digest code looks like this:protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.Now; string createdStr = created.ToString("yyyy-MM-ddThh:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); string password = GetSHA1String(nonce + createdStr + userToken.Password); writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } I had no service to connect to to try out Digest auth - if you end up needing it and get it to work please drop a comment… How to use the custom Credentials The easiest way to use the custom credentials is to create the client in code. 
Here's a factory method I use to create an instance of my service client:

public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url,
                                                             string username,
                                                             string password)
{
    if (string.IsNullOrEmpty(url))
        url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

    CustomBinding binding = new CustomBinding();

    var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
    security.IncludeTimestamp = false;
    security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
    security.MessageSecurityVersion =
        MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

    var encoding = new TextMessageEncodingBindingElement();
    encoding.MessageVersion = MessageVersion.Soap11;

    var transport = new HttpsTransportBindingElement();
    transport.MaxReceivedMessageSize = 20000000; // 20 megs

    binding.Elements.Add(security);
    binding.Elements.Add(encoding);
    binding.Elements.Add(transport);

    RealTimeOnlineClient client = new RealTimeOnlineClient(binding,
        new EndpointAddress(url));

    // to use the full client credential with Nonce uncomment this code:
    // it looks like this might not be required - the service seems to work without it
    client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
    client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

    client.ClientCredentials.UserName.UserName = username;
    client.ClientCredentials.UserName.Password = password;

    return client;
}

This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only and this is the way it has to be added.

Summary
It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design as this protocol is considered insecure. While I agree that plain text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services I suspect) that use this protocol. I've run into this twice now, and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and the breadth of support for protocol features it has in a single tool. And even when it can't handle something there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team or at least being very, very intricately involved in the innards of WCF to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online and I was able to cobble together a solution from the online content.
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services
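Editor's note: as a quick illustration of how the factory above might be consumed, here is a minimal usage sketch. It is not part of the original post - the GetQuote operation and its argument are hypothetical placeholders for whatever operations your imported RealTimeOnline service reference actually exposes - and it simply follows the standard WCF pattern of Close() on success and Abort() on failure.

// Minimal usage sketch (not from the original post). Assumes the
// CreateRealTimeOnlineProxy factory and the CustomCredentials class shown
// above; GetQuote() is a hypothetical operation name - substitute the
// operations generated from your own service reference.
public static void CallRealTimeOnline()
{
    RealTimeOnlineClient client = null;
    try
    {
        // build the proxy with the custom binding and the Nonce-aware credentials
        client = CreateRealTimeOnlineProxy(
            "https://notrealurl.com:443/cows/services/RealTimeOnline",
            "TheUsername",
            "ThePassword");

        // every call made through this proxy now carries the WS-Security
        // UsernameToken header including the Nonce and Created elements
        var result = client.GetQuote("ORCL"); // hypothetical operation

        Console.WriteLine(result);
        client.Close();
    }
    catch (Exception ex)
    {
        // once the channel has faulted, Abort() must be used instead of Close()
        if (client != null)
            client.Abort();
        Console.WriteLine(ex.Message);
    }
}

Capturing the request with Fiddler or WCF message logging is an easy way to confirm that the UsernameToken, Nonce and Created elements actually appear in the outgoing header.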

    Read the article

< Previous Page | 688 689 690 691 692 693 694 695 696 697 698 699  | Next Page >