Search Results

Search found 15169 results on 607 pages for 'virtual attribute'.


  • parentNode.parentNode.rowindex to delete a row in a dynamic table

    - by billy85
    I am creating my rows dynamically when the user clicks on "Ajouter". <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <script> function getXhr(){ var xhr = null; if(window.XMLHttpRequest) // Firefox and others xhr = new XMLHttpRequest(); else if(window.ActiveXObject){ // Internet Explorer try { xhr = new ActiveXObject("Msxml2.XMLHTTP"); } catch (e) { xhr = new ActiveXObject("Microsoft.XMLHTTP"); } } else { // XMLHttpRequest not supported by your browser alert("Votre navigateur ne supporte pas les objets XMLHTTPRequest..."); xhr = false; } return xhr } /** * method called when the user clicks on the button */ function go(){ var xhr = getXhr() // We defined what we gonna do with the response xhr.onreadystatechange = function(){ // We do somthing once the server's response is OK if(xhr.readyState == 4 && xhr.status == 200){ //alert(xhr.responseText); var body = document.getElementsByTagName("body")[0]; // Retrieve <table> ID and create a <tbody> element var tbl = document.getElementById("table"); var tblBody = document.createElement("tbody"); var row = document.createElement("tr"); // Create <td> elements and a text node, make the text // node the contents of the <td>, and put the <td> at // the end of the table row var cell_1 = document.createElement("td"); var cell_2 = document.createElement("td"); var cell_3 = document.createElement("td"); var cell_4 = document.createElement("td"); // Create the first cell which is a text zone var cell1=document.createElement("input"); cell1.type="text"; cell1.name="fname"; cell1.size="20"; cell1.maxlength="50"; cell_1.appendChild(cell1); // Create the second cell which is a text area var cell2=document.createElement("textarea"); cell2.name="fdescription"; cell2.rows="2"; cell2.cols="30"; cell_2.appendChild(cell2); var cell3 = document.createElement("div"); cell3.innerHTML=xhr.responseText; cell_3.appendChild(cell3); // Create the fourth cell which is a href var cell4 = document.createElement("a"); cell4.appendChild(document.createTextNode("[Delete]")); cell4.setAttribute("href","javascrit:deleteRow();"); cell_4.appendChild(cell4); // add cells to the row row.appendChild(cell_1); row.appendChild(cell_2); row.appendChild(cell_3); row.appendChild(cell_4); // add the row to the end of the table body tblBody.appendChild(row); // put the <tbody> in the <table> tbl.appendChild(tblBody); // appends <table> into <body> body.appendChild(tbl); // sets the border attribute of tbl to 2; tbl.setAttribute("border", "1"); } } xhr.open("GET","fstatus.php",true); xhr.send(null); } </head> <body > <h1> Create an Item </h1> <form method="post"> <table align="center" border = "2" cellspacing ="0" cellpadding="3" id="table"> <tr><td><b>Functionality Name:</b></td> <td><b>Description:</b></td> <td><b>Status:</b></td> <td><input type="button" Name= "Ajouter" Value="Ajouter" onclick="go()"></td></tr> </table> </form> </body> </html> Now, I would like to use the href [Delete] to delete one particular row. I wrote this: <script type="text/javascript"> function deleteRow(r){ var i=r.parentNode.parentNode.rowIndex; document.getElementById('table').deleteRow(i); } </script> When I change the first code like this: cell4.setAttribute("href","javascrit:deleteRow(this);"); I got an error: The page cannot be displayed. I am redirected to a new pagewhich can not be displayed. 
How can I delete a particular row by using the function deleteRow(r)? ("table" is the id of my table.) Thanks. Billy85
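
    One way to make "this" refer to the delete link (a hedged sketch, not taken from the original post) is to attach an onclick handler when the anchor is created, instead of a javascript: href, which runs in a context where "this" is not the link:

        var cell4 = document.createElement("a");
        cell4.appendChild(document.createTextNode("[Delete]"));
        cell4.href = "#";
        cell4.onclick = function () {
            // this -> <a>, parentNode -> <td>, parentNode.parentNode -> <tr>
            var row = this.parentNode.parentNode;
            document.getElementById("table").deleteRow(row.rowIndex);
            return false; // stop the browser from navigating away
        };
        cell_4.appendChild(cell4);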

    Read the article

  • Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object

    - by Az
Hi there, I'm doing a Simulated Annealing algorithm to optimise a given allocation of students and projects. This is language-agnostic pseudocode from Wikipedia: s ← s0; e ← E(s) // Initial state, energy. sbest ← s; ebest ← e // Initial "best" solution k ← 0 // Energy evaluation count. while k < kmax and e > emax // While time left & not good enough: snew ← neighbour(s) // Pick some neighbour. enew ← E(snew) // Compute its energy. if enew < ebest then // Is this a new best? sbest ← snew; ebest ← enew // Save 'new neighbour' to 'best found'. if P(e, enew, temp(k/kmax)) > random() then // Should we move to it? s ← snew; e ← enew // Yes, change state. k ← k + 1 // One more evaluation done return sbest // Return the best solution found. The following is an adaptation of the technique. My supervisor said the idea is fine in theory. First I pick up some allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks for the projects) from the entire set of randomised allocations, copy it and pass it to my function. Let's call this allocation aOld (it is a dictionary). aOld has a weight related to it called wOld. The weighting is described below. The function does the following: Let this allocation, aOld, be the best_node. From all the students, pick a random number of students and stick them in a list. Strip (DEALLOCATE) them of their projects ++ reflect the changes for projects (allocated parameter is now False) and lecturers (free up slots if one or more of their projects are no longer allocated). Randomise that list. Try assigning (REALLOCATE) everyone in that list projects again. Calculate the weight (add up ranks, rank 1 = 1, rank 2 = 2... and no project rank = 101). For this new allocation aNew, if the weight wNew is smaller than the allocation weight wOld I picked up at the beginning, then this is the best_node (as defined by the Simulated Annealing algorithm above). Apply the algorithm to aNew and continue. If wOld < wNew, then apply the algorithm to aOld again and continue. The allocations/data-points are expressed as "nodes" such that a node = (weight, allocation_dict, projects_dict, lecturers_dict). Right now, I can only perform this algorithm once, but I'll need to try for a number N (denoted by kmax in the Wikipedia snippet) and make sure I always have with me the previous node and the best_node. So that I don't modify my original dictionaries (which I might want to reset to), I've done a shallow copy of the dictionaries. From what I've read in the docs, it seems that it only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried to use copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLA. Questions: I've been given some solutions to the problems faced but due to my über green-ness with using Python, they all sound rather cryptic to me. Deepcopy isn't playing nicely with SQLA. I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect. Apparently I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)." Can someone please explain what that means? I checked and found out that deepcopy has issues because SQLAlchemy places extra information on your objects, i.e. an _sa_instance_state attribute, that I wouldn't want in the copy but is necessary for the object to have.
I've been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one? An alternate solution is that I'd have to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up, there are functions in sqlalchemy.orm.attributes which can help with that." Once again this is beyond me so could someone kindly explain what it means? A more general question: given the above information are there any suggestions on how I can maintain the information/state for the best_node (which must always persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?

    Read the article

  • Help needed with Javascript Variable Scope / OOP and Call Back Functions

    - by gargantaun
    I think this issue goes beyond typical variable scope and closure stuff, or maybe I'm an idiot. Here goes anyway... I'm creating a bunch of objects on the fly in a jQuery plugin. The object look something like this function WedgePath(canvas){ this.targetCanvas = canvas; this.label; this.logLabel = function(){ console.log(this.label) } } the jQuery plugin looks something like this (function($) { $.fn.myPlugin = function() { return $(this).each(function() { // Create Wedge Objects for(var i = 1; i <= 30; i++){ var newWedge = new WedgePath(canvas); newWedge.label = "my_wedge_"+i; globalFunction(i, newWedge]); } }); } })(jQuery); So... the plugin creates a bunch of wedgeObjects, then calls 'globalFunction' for each one, passing in the latest WedgePath instance. Global function looks like this. function globalFunction(indicator_id, pWedge){ var targetWedge = pWedge; targetWedge.logLabel(); } What happens next is that the console logs each wedges label correctly. However, I need a bit more complexity inside globalFunction. So it actually looks like this... function globalFunction(indicator_id, pWedge){ var targetWedge = pWedge; someSql = "SELECT * FROM myTable WHERE id = ?"; dbInterface.executeSql(someSql, [indicator_id], function(transaction, result){ targetWedge.logLabel(); }) } There's a lot going on here so i'll explain. I'm using client side database storage (WebSQL i call it). 'dbInterface' an instance of a simple javascript object I created which handles the basics of interacting with a client side database [shown at the end of this question]. the executeSql method takes up to 4 arguments The SQL String an optional arguments array an optional onSuccess handler an optional onError handler (not used in this example) What I need to happen is: When the WebSQL query has completed, it takes some of that data and manipulates some attribute of a particular wedge. But, when I call 'logLabel' on an instance of WedgePath inside the onSuccess handler, I get the label of the very last instance of WedgePath that was created way back in the plugin code. Now I suspect that the problem lies in the var newWedge = new WedgePath(canvas); line. So I tried pushing each newWedge into an array, which I thought would prevent that line from replacing or overwriting the WedgePath instance at every iteration... wedgeArray = []; // Inside the plugin... for(var i = 1; i <= 30; i++){ var newWedge = new WedgePath(canvas); newWedge.label = "my_wedge_"+i; wedgeArray.push(newWedge); } for(var i = 0; i < wedgeArray.length; i++){ wedgeArray[i].logLabel() } But again, I get the last instance of WedgePath to be created. This is driving me nuts. I apologise for the length of the question but I wanted to be as clear as possible. END ============================================================== Also, here's the code for dbInterface object should it be relevant. function DatabaseInterface(db){ var DB = db; this.sql = function(sql, arr, pSuccessHandler, pErrorHandler){ successHandler = (pSuccessHandler) ? pSuccessHandler : this.defaultSuccessHandler; errorHandler = (pErrorHandler) ? 
pErrorHandler : this.defaultErrorHandler; DB.transaction(function(tx){ if(!arr || arr.length == 0){ tx.executeSql(sql, [], successHandler, errorHandler); }else{ tx.executeSql(sql,arr, successHandler, errorHandler) } }); } // ---------------------------------------------------------------- // A Default Error Handler // ---------------------------------------------------------------- this.defaultErrorHandler = function(transaction, error){ // error.message is a human-readable string. // error.code is a numeric error code console.log('WebSQL Error: '+error.message+' (Code '+error.code+')'); // Handle errors here var we_think_this_error_is_fatal = true; if (we_think_this_error_is_fatal) return true; return false; } // ---------------------------------------------------------------- // A Default Success Handler // This doesn't do anything except log a success message // ---------------------------------------------------------------- this.defaultSuccessHandler = function(transaction, results) { console.log("WebSQL Success. Default success handler. No action taken."); } }
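
    One thing worth checking (a guess based on the posted code, not a confirmed diagnosis): inside DatabaseInterface, successHandler and errorHandler are assigned without "var", so they become globals shared by every call, and by the time the asynchronous callbacks fire they all point at whichever handler was assigned last, i.e. the one that closes over the last WedgePath. Declaring them locally keeps each wedge's callback in its own closure:

        this.sql = function (sql, arr, pSuccessHandler, pErrorHandler) {
            // "var" makes these local to this call instead of accidental globals
            var successHandler = pSuccessHandler ? pSuccessHandler : this.defaultSuccessHandler;
            var errorHandler = pErrorHandler ? pErrorHandler : this.defaultErrorHandler;
            DB.transaction(function (tx) {
                tx.executeSql(sql, arr || [], successHandler, errorHandler);
            });
        };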

    Read the article

  • XML parsing and transforming (XSLT or otherwise)

    - by observ
    I have several xml files that are formated this way: <ROOT> <OBJECT> <identity> <id>123</id> </identity> <child2 attr = "aa">32</child2> <child3> <childOfChild3 att1="aaa" att2="bbb" att3="CCC">LN</childOfChild3> </child3> <child4> <child5> <child6>3ddf</child6> <child7> <childOfChild7 att31="RR">1231</childOfChild7> </child7> </child5> </child4> </OBJECT> <OBJECT> <identity> <id>124</id> </identity> <child2 attr = "bb">212</child2> <child3> <childOfChild3 att1="ee" att2="ccc" att3="EREA">OP</childOfChild3> </child3> <child4> <child5> <child6>213r</child6> <child7> <childOfChild7 att31="EE">1233</childOfChild7> </child7> </child5> </child4> </OBJECT> </ROOT> How can i format it this way?: <ROOT> <OBJECT> <id>123</id> <child2>32</child2> <attr>aa</attr> <child3></child3> <childOfChild3>LN</childOfChild3> <att1>aaa</att1> <att2>bbb</att2> <att3>CCC</att3> <child4></child4> <child5></child5> <child6>3ddf</child6> <child7></child7> <childOfChild7>1231</childOfChild7> <att31>RR</att31> </OBJECT> <OBJECT> <id>124</id> <child2>212</child2> <attr>bb</attr> <child3></child3> <childOfChild3>LN</childOfChild3> <att1>ee</att1> <att2>ccc</att2> <att3>EREA</att3> <child4></child4> <child5></child5> <child6>213r</child6> <child7></child7> <childOfChild7>1233</childOfChild7> <att31>EE</att31> </OBJECT> </ROOT> I know some C# so maybe a parser there? or some generic xslt? The xml files are some data received from a client, so i can't control the way they are sending it to me. L.E. Basically when i am trying to test this data in excel (for example i want to make sure that the attribute of childOfChild7 corresponds to the correct identity id) i am getting a lot of blank spaces. If i am importing in access to get only the data i want out, i have to do a thousands subqueries to get them all in a nice table. Basically i just want to see for one Object all its data (one object - One row) and then just delete/hide the columns i don't need.
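
    A generic XSLT along these lines might be a starting point (a sketch under the assumption that every descendant element and attribute should simply become a flat child of OBJECT; wrapper elements such as identity would also show up, so it needs tuning to match the exact output above):

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>
          <xsl:template match="/ROOT">
            <ROOT>
              <xsl:for-each select="OBJECT">
                <OBJECT>
                  <!-- one flat element per descendant element, then one per attribute -->
                  <xsl:for-each select=".//*">
                    <xsl:element name="{local-name()}">
                      <xsl:value-of select="normalize-space(text())"/>
                    </xsl:element>
                    <xsl:for-each select="@*">
                      <xsl:element name="{local-name()}">
                        <xsl:value-of select="."/>
                      </xsl:element>
                    </xsl:for-each>
                  </xsl:for-each>
                </OBJECT>
              </xsl:for-each>
            </ROOT>
          </xsl:template>
        </xsl:stylesheet>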

    Read the article

  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
    I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point we the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but should work. My basic project structure looks like this. Solution Solution.Web - Holds the web page that hosts Silverlight as well as proxies that access web services and processes as required and obviously the references to those web services). Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all genned code from serialized objects from those proxies and code around those objects that need to be client-side. Solution.Book - The particular project that uses the objects in question after processed down into Infrastructure. I've defined the following Interface and Class in the Web project. They represent the type of objects that the XML from the original third-party gets transformed into and since this is the only project in the Silverlight app that is actually server-side, that was the place to define and use them. //Doesn't get much simpler than this. public interface INavigable { string Description { get; set; } } //Very simple class too public class IndexEntry : INavigable { public List<IndexCM> CMItems { get; set; } public string CPTCode { get; set; } public string DefinitionOfAbbreviations { get; set; } public string Description { get; set; } public string EtiologyCode { get; set; } public bool HighScore { get; set; } public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; } public bool IsExpanded { get; set; } public string ManifestationCode { get; set; } public string MorphologyCode { get; set; } public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; } public string OtherItalics { get; set; } public IndexEntry Parent { get; set; } public int Score { get; set; } public string SeeAlsoReference { get; set; } public string SeeReference { get; set; } public List<IndexEntry> SubEntries { get; set; } public int Words { get; set; } } Again; both of these items are defined in the Web project. Notice that IndexEntry implments INavigable. When the code for IndexEntry is auto-genned in the Infrastructure project, the definition of the class does not include the implmentation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implmentation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird. The BOOK project CAN see the INavigable interface. In fact I use it in Book, though Book has no reference to the Web Service in the Web project where the thing is define, though Infrastructure does. Just as a test, I linked to the INavigable source file from indside the Infrastructure project. That allowed me to reference it in that project and compile, but causes havoc in the Book project, because now there's a conflick between the one define in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit. 
The Web project has a web service that processes data from a third-party service and has a class and interface defined in it. The class implements the interface. The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project. The implementation of the interface in the class does NOT serialize down, so the auto-genned code in Infrastructure does not show this relationship, breaking code further downstream. The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it. Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted; to anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve
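
    One commonly suggested direction (a sketch with an assumed shared project name, not something from the original solution): interfaces are not part of the data contract, so the generated proxy will never carry INavigable by itself. Defining the interface in an assembly that both sides reference, and re-attaching it to the generated proxy type with a partial class in the same namespace as the generated IndexEntry, usually restores the relationship:

        // In a shared library referenced by both Solution.Web and Solution.Infrastructure (assumed name)
        public interface INavigable
        {
            string Description { get; set; }
        }

        // In Solution.Infrastructure, in the same namespace as the generated proxy class
        public partial class IndexEntry : INavigable
        {
            // Description is already generated from the data contract,
            // so the partial class only has to declare the interface.
        }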

    Read the article

  • VB.NET - using textfile as source for menus and textboxes

    - by Kenny Bones
    Hi, this is probably a bit tense and I'm not sure if this is possible at all. But basically, I'm trying to create a small application which contains alot of PowerShell-code which I want to run in an easy matter. I've managed to create everything myself and it does work. But all of the PowerShell code is manually hardcoded and this gives me a huge disadvantage. What I was thinking was creating some sort of dynamic structure where I can read a couple of text files (possible a numerous amount of text files) and use these as the source for both the comboboxes and the richtextbox which provovides as the string used to run in PowerShell. I was thinking something like this: Combobox - "Choose cmdlet" - Use "menucode.txt" as source Richtextbox - Use "code.txt" as source But, the thing is, Powershell snippets need a few arguments in order for them to work. So I've got a couple of comboboxes and a few textboxes which provides as input for these arguments. And this is done manually as it is right now. So rewriting this small application should also search the textfile for some keywords and have the comboboxes and textboxes to replace those keywords. And I'm not sure how to do this. So, would this requre a whole lot of textfiles? Or could I use one textfile and separate each PowerShell cmdlet snippets with something? Like some sort of a header? Right now, I've got this code at the eventhandler (ComboBox_SelectedIndexChanged) If ComboBoxFunksjon.Text = "Set attribute" Then TxtBoxUsername.Visible = True End If If chkBoxTextfile.Checked = True Then If txtboxBrowse.Text = "" Then MsgBox("You haven't choses a textfile as input for usernames") End If LabelAttribute.Visible = True LabelUsername.Visible = False ComboBoxAttribute.Visible = True TxtBoxUsername.Visible = False txtBoxCode.Text = "$users = Get-Content " & txtboxBrowse.Text & vbCrLf & "foreach ($a in $users)" & vbCrLf & "{" & vbCrLf & "Set-QADUser -Identity $a -ObjectAttributes @{" & ComboBoxAttribute.SelectedItem & "='" & TxtBoxValue.Text & "'}" & vbCrLf & "}" If ComboBoxAttribute.SelectedItem = "Outlook WebAccess" Then TxtBoxValue.Visible = False CheckBoxValue.Visible = True CheckBoxValue.Text = "OWA Enabled?" txtBoxCode.Text = "$users = Get-Content " & txtboxBrowse.Text & vbCrLf & "foreach ($a in $users)" & vbCrLf & "{" & vbCrLf & "Set-CASMailbox -Identity $a -OWAEnabled" & " " & "$" & CheckBoxValue.Checked & " '}" & vbCrLf & "}" End If If ComboBoxAttribute.SelectedItem = "MobileSync" Then TxtBoxValue.Visible = False CheckBoxValue.Visible = True CheckBoxValue.Text = "MobileSync Enabled?" Dim value If CheckBoxValue.Checked = True Then value = "0" Else value = "7" End If txtBoxCode.Text = "$users = Get-Content " & txtboxBrowse.Text & vbCrLf & "foreach ($a in $users)" & vbCrLf & "{" & vbCrLf & "Set-QADUser -Identity $a -ObjectAttributes @{msExchOmaAdminWirelessEnable='" & value & " '}" & vbCrLf & "}" End If Else LabelAttribute.Visible = True LabelUsername.Visible = True ComboBoxAttribute.Visible = True txtBoxCode.Text = "Set-QADUser -Identity " & TxtBoxUsername.Text & " -ObjectAttributes @{" & ComboBoxAttribute.SelectedItem & "='" & TxtBoxValue.Text & " '}" If ComboBoxAttribute.SelectedItem = "Outlook WebAccess" Then TxtBoxValue.Visible = False CheckBoxValue.Visible = True CheckBoxValue.Text = "OWA Enabled?" 
txtBoxCode.Text = "Set-CASMailbox " & TxtBoxUsername.Text & " -OWAEnabled " & "$" & CheckBoxValue.Checked End If If ComboBoxAttribute.SelectedItem = "MobileSync" Then TxtBoxValue.Visible = False CheckBoxValue.Visible = True CheckBoxValue.Text = "MobileSync Enabled?" Dim value If CheckBoxValue.Checked = True Then value = "0" Else value = "7" End If txtBoxCode.Text = "Set-QADUser " & TxtBoxUsername.Text & " -ObjectAttributes @{msExchOmaAdminWirelessEnable='" & value & "'}" End If End If Now, this snippet above lets me either use a text file as a source for each username used in the powershell snippet. Just so you know :) And I know, this is probably coded as stupidly as it gets. But it does work! :)

    Read the article

  • Core Data: Fetch all entities in a to-many-relationship of a particular object?

    - by Björn Marschollek
    Hi there, in my iPhone application I am using simple Core Data Model with two entities (Item and Property): Item name properties Property name value item Item has one attribute (name) and one one-to-many-relationship (properties). Its inverse relationship is item. Property has two attributes the according inverse relationship. Now I want to show my data in table views on two levels. The first one lists all items; when one row is selected, a new UITableViewController is pushed onto my UINavigationController's stack. The new UITableView is supposed to show all properties (i.e. their names) of the selected item. To achieve this, I use a NSFetchedResultsController stored in an instance variable. On the first level, everything works fine when setting up the NSFetchedResultsController like this: -(NSFetchedResultsController *) fetchedResultsController { if (fetchedResultsController) return fetchedResultsController; // goal: tell the FRC to fetch all item objects. NSFetchRequest *fetch = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Item" inManagedObjectContext:self.moContext]; [fetch setEntity:entity]; NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; [fetch setSortDescriptors:[NSArray arrayWithObject:sort]]; [fetch setFetchBatchSize:10]; NSFetchedResultsController *frController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.moContext sectionNameKeyPath:nil cacheName:@"cache"]; self.fetchedResultsController = frController; fetchedResultsController.delegate = self; [sort release]; [frController release]; [fetch release]; return fetchedResultsController; } However, on the second-level UITableView, I seem to do something wrong. I implemented the fetchedresultsController in a similar way: -(NSFetchedResultsController *) fetchedResultsController { if (fetchedResultsController) return fetchedResultsController; // goal: tell the FRC to fetch all property objects that belong to the previously selected item NSFetchRequest *fetch = [[NSFetchRequest alloc] init]; // fetch all Property entities. NSEntityDescription *entity = [NSEntityDescription entityForName:@"Property" inManagedObjectContext:self.moContext]; [fetch setEntity:entity]; // limit to those entities that belong to the particular item NSPredicate *predicate = [NSPredicate predicateWithFormat:[NSString stringWithFormat:@"item.name like '%@'",self.item.name]]; [fetch setPredicate:predicate]; // sort it. Boring. NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES]; [fetch setSortDescriptors:[NSArray arrayWithObject:sort]]; NSError *error = nil; NSLog(@"%d entities found.",[self.moContext countForFetchRequest:fetch error:&error]); // logs "3 entities found."; I added those properties before. See below for my saving "problem". if (error) NSLog("%@",error); // no error, thus nothing logged. [fetch setFetchBatchSize:20]; NSFetchedResultsController *frController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetch managedObjectContext:self.moContext sectionNameKeyPath:nil cacheName:@"cache"]; self.fetchedResultsController = frController; fetchedResultsController.delegate = self; [sort release]; [frController release]; [fetch release]; return fetchedResultsController; } Now it's getting weird. The above NSLog statement returns me the correct number of properties for the selected item. 
However, the UITableViewDelegate method tells me that there are no properties: -(NSInteger) tableView:(UITableView *)table numberOfRowsInSection:(NSInteger)section { id <NSFetchedResultsSectionInfo> sectionInfo = [[self.fetchedResultsController sections] objectAtIndex:section]; NSLog(@"Found %d properties for item \"%@\". Should have found %d.",[sectionInfo numberOfObjects], self.item.name, [self.item.properties count]); // logs "Found 0 properties for item "item". Should have found 3." return [sectionInfo numberOfObjects]; } The same implementation works fine on the first level. It's getting even weirder. I implemented some kind of UI to add properties. I create a new Property instance via Property *p = [NSEntityDescription insertNewObjectForEntityForName:@"Property" inManagedObjectContext:self.moContext];, set up the relationships and call [self.moContext save:&error]. This seems to work, as error is still nil and the object gets saved (I can see the number of properties when logging the Item instance, see above). However, the delegate methods are not fired. This seems to me due to the possibly messed up fetchRequest(Controller). Any ideas? Did I mess up the second fetch request? Is this the right way to fetch all entities in a to-many-relationship for a particular instance at all?
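
    Two things that often bite in this situation (a guess, not a verified fix): both controllers use the same cacheName:@"cache", so the second one can pick up stale cached section info from the first, and the predicate is safer built with a substitution variable than with stringWithFormat:. A sketch of the second-level setup with those two changes:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"item == %@", self.item];
        [fetch setPredicate:predicate];

        NSFetchedResultsController *frController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:fetch
                                                managedObjectContext:self.moContext
                                                  sectionNameKeyPath:nil
                                                           cacheName:nil]; // or a name unique to this item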

    Read the article

  • Login or Register (Ruby on rails)

    - by DanielZ
    Hello stackoverflow, I'm working on an Ruby on Rails application (2.3.x) and i want to make a form that lets the user login or register. I want to do this in the same form. I have a JS function that replaces the form elements like this: Login form: <% form_for @user do |f| %> <div id="form"> <%= f.label :email, "E-mail" %> <%= f.text_field :email %> <%= f.label :password, "Password" %> <%= f.password_field :password %> <%= link_to "I don't have an account, "#", :id => "changeForm"%> <%= f.submit "Login" %> </div> <% end %> The id "changeForm" triggers a JS function that changes the form elements. So if you press the url the html looks like this: <% form_for @user do |f| %> <div id="form"> <%= f.label :name, "Name" %> <%= f.text_field :name %> <%= f.label :email, "E-mail" %> <%= f.text_field :email %> <%= f.label :password, "Password" %> <%= f.password_field :password %> <%= f.label :password_confirmation, "Password confirmation" %> <%= f.password_field :password_confirmation %> <%= link_to "I do have an account, "#", :id => "changeForm"%> <%= f.submit "Register" %> </div> <% end %> I added the neccesary validations to my user model: class User < ActiveRecord::Base attr_reader :password validates_presence_of :name, :email, :password validates_format_of :email, :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i validates_confirmation_of :password But what happens when you fill in the email / password you get the errors that the name is missing and that the password fields aren't confirmed. So i could do some nasty programming in my user model like this: #if password_conf or the name are present the user has tried to register... if params[:user][:password_confirmation].present? || params[:user][:name].present? #so we'll try to save the user if @user.save #if the user is saved authenticate the user current_session.user = User.authenticate(params[:user]) #if the user is logged in? if current_session.user.present? flash[:notice] = "succesvully logged redirect_to some_routes_path else #not logged in... flash[:notice] = "Not logged in" render :action => "new" end else #user not saved render :action => "new" end else #So if the params[:user][:password_confirmation] or [:user][:name] weren't present we geuss the user wants to login... current_session.user = User.authenticate(params[:user]) #are we logged_in? if current_session.user.present? flash[:notice] = "Succesvully logged in" redirect_to some_routes_path else #errors toevoegen @user.errors.add(:email, "The combination of email/password isn't valid") @user.errors.add(:password," ") render :action => "new" end end end Without validations this (imho dirty code and should not be in the controller) works. But i want to use the validates_presence_of methods and i don't want to slap the "conventions over configurations" in the face. So another thing i have tried is adding a hidden field to the form: #login form <%= f.hidden_field :login, :value => true %> # and ofcourse set it to false if we want to register. And then i wanted to use the method: before_validation before_validation_on_create do |user| if params[:user].login == true #throws an error i know... validates_presence_of :email, :password validates_format_of :email, :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i else validates_presence_of :name, :email, :password validates_format_of :email, :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i validates_confirmation_of :password end end But this doesn't work because i can't access the params. 
And login isn't an attribute of the user object. But I thought that this way I could validate the email and password params if the user wants to log in, and all the other attributes if the user wants to register. So nothing I could think of works the way I want it to. My main goal is this: one form for login/register that uses the validation methods in the user model. If the user wants to log in but doesn't fill in any information, give validation errors. And if the user wants to log in but the email/password combination doesn't match, give the "@user.errors.add(:email, "the combination wasn't found in the db...")" error. The same goes for registration... Thanks in advance!
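
    One common pattern for this (a sketch in Rails 2.3-era syntax; the registering flag and its hidden form field are assumptions, not the poster's code) is to keep a non-persisted flag on the model and make the registration-only validations conditional on it:

        class User < ActiveRecord::Base
          attr_accessor :registering   # set from a hidden field in the form, not a DB column

          validates_presence_of     :email, :password
          validates_format_of       :email, :with => /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i
          validates_presence_of     :name,     :if => :registering?
          validates_confirmation_of :password, :if => :registering?

          def registering?
            registering.to_s == "true"
          end
        end

    The controller then only has to decide between save (register) and User.authenticate (login), instead of re-implementing the validations itself.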

    Read the article

  • Stacked up with web service configuration

    - by Allan Chua
    I'm currently stacked with the web service that im creating right now. when Testing it in local it all works fine but when I try to deploy it to the web server it throws me the following error An error occurred while trying to make a request to URI '...my web service URI here....'. This could be due to attempting to access a service in a cross-domain way without a proper cross-domain policy in place, or a policy that is unsuitable for SOAP services. You may need to contact the owner of the service to publish a cross-domain policy file and to ensure it allows SOAP-related HTTP headers to be sent. This error may also be caused by using internal types in the web service proxy without using the InternalsVisibleToAttribute attribute. Please see the inner exception for more details. here is my web config. <?xml version="1.0"?> <configuration> <configSections> </configSections> <system.webServer> <modules runAllManagedModulesForAllRequests="true"> </modules> <validation validateIntegratedModeConfiguration="false" /> <security> <requestFiltering> <requestLimits maxAllowedContentLength="2000000000" /> </requestFiltering> </security> </system.webServer> <connectionStrings> <add name="........" providerName="System.Data.SqlClient" /> </connectionStrings> <appSettings> <!-- Testing --> <add key="DataConnectionString" value="..........." /> </appSettings> <system.web> <compilation debug="true" targetFramework="4.0"> <buildProviders> <add extension=".rdlc" type="Microsoft.Reporting.RdlBuildProvider, Microsoft.ReportViewer.WebForms, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" /> </buildProviders> </compilation> <httpRuntime executionTimeout="1200" maxRequestLength="2000000" /> </system.web> <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> <behaviors> <serviceBehaviors> <behavior name="Service1"> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2000000000" /> </behavior> <behavior name=""> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> <behavior name="nextSPOTServiceBehavior"> <serviceMetadata httpsGetEnabled="true"/> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2000000000" /> </behavior> </serviceBehaviors> </behaviors> <bindings> <basicHttpBinding> <binding name="SecureBasic" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="Transport" /> <readerQuotas maxArrayLength="2000000" maxStringContentLength="2000000"/> </binding> <binding name="BasicHttpBinding_IDownloadManagerService" closeTimeout="00:10:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647"> <security mode="Transport" /> </binding> </basicHttpBinding> </bindings> <services> <service behaviorConfiguration="nextSPOTServiceBehavior" name="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.DownloadManagerService"> <endpoint binding="basicHttpBinding" bindingConfiguration="SecureBasic" name="basicHttpSecure" contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" /> <!--<endpoint binding="basicHttpBinding" bindingConfiguration="" name="basicHttp" 
contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" />--> <!--<endpoint address="" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IDownloadManagerService" contract="NextSPOTDownloadManagerWebServiceTester.Web.WebServices.IDownloadManagerService" /> <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />--> </service> </services > </system.serviceModel> </configuration>

    Read the article

  • Can anyone tell me why my XML writer is not writing attributes?

    - by user1632018
    I am writing a parsing tool to help me clean up a large VC++ project before I make .net bindings for it. I am using an XML writer to read an xml file and write out each element to a new file. If an element with a certain name is found, then it executes some code and writes an output value into the elements value. So far it is almost working, except for one thing: It is not copying the attributes. Can anyone tell me why this is happening? Here is a sample of what it is supposed to copy/modify(Includes the attributes): <?xml version="1.0" encoding="utf-8"?> <Project DefaultTargets="Build" ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <ItemGroup Label="ProjectConfigurations"> <ProjectConfiguration Include="Debug|Win32"> <Configuration>Debug</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration Include="Release|Win32"> <Configuration>Release</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> </ItemGroup> <PropertyGroup Label="Globals"> <ProjectGuid>{57900E99-A405-49F4-83B2-0254117D041B}</ProjectGuid> <Keyword>Win32Proj</Keyword> <RootNamespace>libproj</RootNamespace> </PropertyGroup> Here is the output I am getting(No Attributes): <?xml version="1.0" encoding="utf-8"?> <Project> <ItemGroup> <ProjectConfiguration> <Configuration>Debug</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> <ProjectConfiguration> <Configuration>Release</Configuration> <Platform>Win32</Platform> </ProjectConfiguration> </ItemGroup> <PropertyGroup> <ProjectGuid>{57900E99-A405-49F4-83B2-0254117D041B}</ProjectGuid> <Keyword>Win32Proj</Keyword> <RootNamespace>libproj</RootNamespace> Here is my code currently. I have tried every way I can come up with to write the attributes. string baseDir = (textBox2.Text + "\\" + safeFileName); string vcName = Path.GetFileName(textBox1.Text); string vcProj = Path.Combine(baseDir, vcName); using (XmlReader reader = XmlReader.Create(textBox1.Text)) { XmlWriterSettings settings = new XmlWriterSettings(); settings.OmitXmlDeclaration = true; settings.ConformanceLevel = ConformanceLevel.Fragment; settings.Indent = true; settings.CloseOutput = false; using (XmlWriter writer = XmlWriter.Create(vcProj, settings)) { while (reader.Read()) { switch (reader.NodeType) { case XmlNodeType.Element: if (reader.Name == "ClInclude") { string include = reader.GetAttribute("Include"); string dirPath = Path.GetDirectoryName(textBox1.Text); Directory.SetCurrentDirectory(dirPath); string fullPath = Path.GetFullPath(include); //string dirPath = Path.GetDirectoryName(fullPath); copyFile(fullPath, 3); string filename = Path.GetFileName(fullPath); writer.WriteStartElement(reader.Name); writer.WriteAttributeString("Include", "include/" + filename); writer.WriteEndElement(); } else if (reader.Name == "ClCompile" && reader.HasAttributes) { string include = reader.GetAttribute("Include"); string dirPath = Path.GetDirectoryName(textBox1.Text); Directory.SetCurrentDirectory(dirPath); string fullPath = Path.GetFullPath(include); copyFile(fullPath, 2); string filename = Path.GetFileName(fullPath); writer.WriteStartElement(reader.Name); writer.WriteAttributeString("Include", "src/" + filename); writer.WriteEndElement(); } else { writer.WriteStartElement(reader.Name); } break; case XmlNodeType.Text: writer.WriteString(reader.Value); break; case XmlNodeType.XmlDeclaration: case XmlNodeType.ProcessingInstruction: writer.WriteProcessingInstruction(reader.Name, reader.Value); break; case XmlNodeType.Comment: 
writer.WriteComment(reader.Value); break; case XmlNodeType.Attribute: writer.WriteAttributes(reader, true); break; case XmlNodeType.EntityReference: writer.WriteEntityRef(reader.Value); break; case XmlNodeType.EndElement: writer.WriteFullEndElement(); break; } } } }
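
    The detail that usually explains this (hedged, but it matches the symptom): XmlReader.Read() never reports XmlNodeType.Attribute nodes, so that case in the switch is dead code; the attributes have to be copied while the reader is still positioned on the element. A sketch of the Element case doing that:

        case XmlNodeType.Element:
        {
            bool isEmpty = reader.IsEmptyElement;   // check before the reader moves onto the attributes
            writer.WriteStartElement(reader.Prefix, reader.LocalName, reader.NamespaceURI);
            writer.WriteAttributes(reader, true);   // copies every attribute of the current element
            if (isEmpty)
                writer.WriteEndElement();           // <foo/> never produces an EndElement node, so close it here
            break;
        }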

    Read the article

  • How to convert objects in reflection (Linq to XML)

    - by user829174
    I have the following code which is working fine, the code creates plugins in reflection via run time: using System; using System.Collections.Generic; using System.Linq; using System.IO; using System.Xml.Linq; using System.Reflection; using System.Text; namespace WindowsFormsApplication1 { public abstract class Plugin { public string Type { get; set; } public string Message { get; set; } } public class FilePlugin : Plugin { public string Path { get; set; } } public class RegsitryPlugin : Plugin { public string Key { get; set; } public string Name { get; set; } public string Value { get; set; } } static class MyProgram { [STAThread] static void Main(string[] args) { string xmlstr =@" <Client> <Plugin Type=""FilePlugin""> <Message>i am a file plugin</Message> <Path>c:\</Path> </Plugin> <Plugin Type=""RegsitryPlugin""> <Message>i am a registry plugin</Message> <Key>HKLM\Software\Microsoft</Key> <Name>Version</Name> <Value>3.5</Value> </Plugin> </Client> "; Assembly asm = Assembly.GetExecutingAssembly(); XDocument xDoc = XDocument.Load(new StringReader(xmlstr)); Plugin[] plugins = xDoc.Descendants("Plugin") .Select(plugin => { string typeName = plugin.Attribute("Type").Value; var type = asm.GetTypes().Where(t => t.Name == typeName).First(); Plugin p = Activator.CreateInstance(type) as Plugin; p.Type = typeName; foreach (var prop in plugin.Descendants()) { var pi = type.GetProperty(prop.Name.LocalName); object newVal = Convert.ChangeType(prop.Value, pi.PropertyType); pi.SetValue(p, newVal, null); } return p; }).ToArray(); // //"plugins" ready to use // } } } I am trying to modify the code and adding new class named DetailedPlugin so it will be able to read the following xml: string newXml = @" <Client> <Plugin Type=""FilePlugin""> <Message>i am a file plugin</Message> <Path>c:\</Path> <DetailedPlugin Type=""DetailedFilePlugin""> <Message>I am a detailed file plugin</Message> </DetailedPlugin> </Plugin> <Plugin Type=""RegsitryPlugin""> <Message>i am a registry plugin</Message> <Key>HKLM\Software\Microsoft</Key> <Name>Version</Name> <Value>3.5</Value> <DetailedPlugin Type=""DetailedRegsitryPlugin""> <Message>I am a detailed registry plugin</Message> </DetailedPlugin> </Plugin> </Client> "; for this i modified my classes to the following: public abstract class Plugin { public string Type { get; set; } public string Message { get; set; } public DetailedPlugin DetailedPlugin { get; set; } // new } public class FilePlugin : Plugin { public string Path { get; set; } } public class RegsitryPlugin : Plugin { public string Key { get; set; } public string Name { get; set; } public string Value { get; set; } } // new classes: public abstract class DetailedPlugin { public string Type { get; set; } public string Message { get; set; } } public class DetailedFilePlugin : Plugin { public string ExtraField1 { get; set; } } public class DetailedRegsitryPlugin : Plugin { public string ExtraField2{ get; set; } } from here i need some help to accomplish reading the xml and create the plugins with the nested DetailedPlugin
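
    One way to extend the loop (a rough sketch, untested against the original program, and it assumes DetailedFilePlugin/DetailedRegsitryPlugin are changed to derive from DetailedPlugin rather than Plugin) is to treat the DetailedPlugin element as a special case and build it the same way the outer plugin is built, switching from Descendants() to Elements() so the inner <Message> is not mistaken for the outer one:

        foreach (var prop in plugin.Elements())
        {
            if (prop.Name.LocalName == "DetailedPlugin")
            {
                string detailTypeName = prop.Attribute("Type").Value;
                var detailType = asm.GetTypes().First(t => t.Name == detailTypeName);
                var detail = (DetailedPlugin)Activator.CreateInstance(detailType);
                detail.Type = detailTypeName;
                foreach (var inner in prop.Elements())
                {
                    var innerPi = detailType.GetProperty(inner.Name.LocalName);
                    innerPi.SetValue(detail, Convert.ChangeType(inner.Value, innerPi.PropertyType), null);
                }
                p.DetailedPlugin = detail;
            }
            else
            {
                var pi = type.GetProperty(prop.Name.LocalName);
                pi.SetValue(p, Convert.ChangeType(prop.Value, pi.PropertyType), null);
            }
        }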

    Read the article

  • Why does decorating a class break the descriptor protocol, thus preventing staticmethod objects from behaving as expected?

    - by Robru
    I need a little bit of help understanding the subtleties of the descriptor protocol in Python, as it relates specifically to the behavior of staticmethod objects. I'll start with a trivial example, and then iteratively expand it, examining it's behavior at each step: class Stub: @staticmethod def do_things(): """Call this like Stub.do_things(), with no arguments or instance.""" print "Doing things!" At this point, this behaves as expected, but what's going on here is a bit subtle: When you call Stub.do_things(), you are not invoking do_things directly. Instead, Stub.do_things refers to a staticmethod instance, which has wrapped the function we want up inside it's own descriptor protocol such that you are actually invoking staticmethod.__get__, which first returns the function that we want, and then gets called afterwards. >>> Stub <class __main__.Stub at 0x...> >>> Stub.do_things <function do_things at 0x...> >>> Stub.__dict__['do_things'] <staticmethod object at 0x...> >>> Stub.do_things() Doing things! So far so good. Next, I need to wrap the class in a decorator that will be used to customize class instantiation -- the decorator will determine whether to allow new instantiations or provide cached instances: def deco(cls): def factory(*args, **kwargs): # pretend there is some logic here determining # whether to make a new instance or not return cls(*args, **kwargs) return factory @deco class Stub: @staticmethod def do_things(): """Call this like Stub.do_things(), with no arguments or instance.""" print "Doing things!" Now, naturally this part as-is would be expected to break staticmethods, because the class is now hidden behind it's decorator, ie, Stub not a class at all, but an instance of factory that is able to produce instances of Stub when you call it. Indeed: >>> Stub <function factory at 0x...> >>> Stub.do_things Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'function' object has no attribute 'do_things' >>> Stub() <__main__.Stub instance at 0x...> >>> Stub().do_things <function do_things at 0x...> >>> Stub().do_things() Doing things! So far I understand what's happening here. My goal is to restore the ability for staticmethods to function as you would expect them to, even though the class is wrapped. As luck would have it, the Python stdlib includes something called functools, which provides some tools just for this purpose, ie, making functions behave more like other functions that they wrap. So I change my decorator to look like this: def deco(cls): @functools.wraps(cls) def factory(*args, **kwargs): # pretend there is some logic here determining # whether to make a new instance or not return cls(*args, **kwargs) return factory Now, things start to get interesting: >>> Stub <function Stub at 0x...> >>> Stub.do_things <staticmethod object at 0x...> >>> Stub.do_things() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'staticmethod' object is not callable >>> Stub() <__main__.Stub instance at 0x...> >>> Stub().do_things <function do_things at 0x...> >>> Stub().do_things() Doing things! Wait.... what? functools copies the staticmethod over to the wrapping function, but it's not callable? Why not? What did I miss here? I was playing around with this for a bit and I actually came up with my own reimplementation of staticmethod that allows it to function in this situation, but I don't really understand why it was necessary or if this is even the best solution to this problem. 
Here's the complete example: class staticmethod(object): """Make @staticmethods play nice with decorated classes.""" def __init__(self, func): self.func = func def __call__(self, *args, **kwargs): """Provide the expected behavior inside decorated classes.""" return self.func(*args, **kwargs) def __get__(self, obj, objtype=None): """Re-implement the standard behavior for undecorated classes.""" return self.func def deco(cls): @functools.wraps(cls) def factory(*args, **kwargs): # pretend there is some logic here determining # whether to make a new instance or not return cls(*args, **kwargs) return factory @deco class Stub: @staticmethod def do_things(): """Call this like Stub.do_things(), with no arguments or instance.""" print "Doing things!" Indeed it works exactly as expected: >>> Stub <function Stub at 0x...> >>> Stub.do_things <__main__.staticmethod object at 0x...> >>> Stub.do_things() Doing things! >>> Stub() <__main__.Stub instance at 0x...> >>> Stub().do_things <function do_things at 0x...> >>> Stub().do_things() Doing things! What approach would you take to make a staticmethod behave as expected inside a decorated class? Is this the best way? Why doesn't the builtin staticmethod implement __call__ on it's own in order for this to just work without any fuss? Thanks.
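
    A small illustration of why the copied staticmethod shows up but is not callable (a hedged interpretation, using Python 2 syntax to match the question): the descriptor protocol only runs for attributes looked up on a class, and functools.wraps merely copies the raw staticmethod object into the wrapper function's __dict__, where no __get__ is ever invoked; calling __get__ by hand gets the function back out:

        import functools

        class Stub:
            @staticmethod
            def do_things():
                print "Doing things!"

        def factory(*args, **kwargs):
            return Stub(*args, **kwargs)

        factory = functools.wraps(Stub)(factory)

        print type(factory.do_things)           # <type 'staticmethod'>: the raw descriptor, not a function
        factory.do_things.__get__(None, Stub)() # invoking the descriptor by hand prints "Doing things!"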

    Read the article

  • how to define div or table cell height depending on the height of other divs or cells

    - by John
    I want to have a web page that contains 3 parts: A header at the top of the page , a footer (both of which having specific height in px)and the main part of the page which should be a div or table cell with the appropriate height attribute in order to take all the available space between them. I want the page to take 100% of the browser window height, trying to avoid scrollbars. The problems I have are the following: USING DIVs a) If I set the maindiv height to 100%, the page overflows and I get a vertical scrolbar. (the maindiv's height is set to the 100% of the browser window) <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <style type="text/css"> <!-- body, html{ height: 100%; max-height:100%; width: 100%; margin:0; padding:0; } div{padding:0;margin:0;} #containerdiv{height:100%;width:100%;background-color:#FF9;border:0;} #headerdiv{height:150px;width:100%;background-color:#0F0;border:0;} #footerdiv{height:50px;width:100%;background-color:#00F;border:0; } #maindiv{ background-color:#F00; height:100%; } div{border:#000 medium solid;border:0;} </style> <body> <div id="containerdiv"> <div id="headerdiv">headerdiv</div> <div id="maindiv">maindiv</div> <div id="footerdiv">footerdiv</div> </div> </body> </html> b) If I set the maindiv height to auto, the maindiv height is depending on it's content, which is not what I want. USING tables a) If I set the main cell height to 100% it works fine with Firefox but in Internet Explorer 8 I get a vertical scrollbar (you can use the next code block using th style="height:100%" instead of "auto" to see this.) b) If I set the main cell to auto it seems to be working both in IE and FF but then I have the problem that anything I put inside the maincell (table or div) cannot get maincell's full height in IE. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Untitled Document</title> </head> <style type="text/css"> <!-- body, html, table{ height: 100%; width: 100%; margin:0; padding:0; } table{border:#000 0px solid} </style> <body> <table style="background:#063" cellpadding="0" cellspacing="0" border="0"> <tr><th style="height:150px;background-color:#FF0"></th></tr> <tr> <th style="height:auto"><table style="background:#0FF;" cellpadding="0" cellspacing="0" border="0"><tr><th style="height:auto">nested cell</th></tr></table> </th> </tr> <tr><th style="height:50px;background-color:#FF0"></th></tr> </table> </body> </html> </html> Any ideas? Maybe there is an easier way to define the size of the main part of the page in px using javascript? (my javascript skills are pretty poor so any help with this is welcome!)
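
    One CSS-only option from that era (a sketch that assumes the div markup from the first snippet; no JavaScript needed) is to pin the main div between the fixed-height header and footer with absolute positioning, so it always fills whatever space is left:

        html, body { height: 100%; margin: 0; padding: 0; }
        #headerdiv { position: absolute; top: 0; left: 0; right: 0; height: 150px; }
        #maindiv   { position: absolute; top: 150px; bottom: 50px; left: 0; right: 0; overflow: auto; }
        #footerdiv { position: absolute; bottom: 0; left: 0; right: 0; height: 50px; }

    The overflow: auto confines any scrollbar to the main area instead of the whole window.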

    Read the article

  • JQuery tab Selection problem?

    - by PeAk
    New to JQuery and I was wondering how do I keep any tabbed selected when a user reloads the web page? What part of my code do I need to change? Here is my JQuery code. $(document).ready(function() { //When page loads... $(".form-content").hide(); //Hide all content var firstMenu = $("#home-menu ul li:first"); firstMenu.show(); firstMenu.find("a").addClass("selected-link"); //Activate first tab $(".form-content:first").show(); //Show first tab content //On Click Event $("#home-menu ul li").click(function() { $("#home-menu ul li a").removeClass("selected-link"); //Remove any "selected-link" class $(this).find("a").addClass("selected-link"); //Add "selected-link" class to selected tab $(".form-content").hide(); //Hide all tab content var activeTab = $(this).find("a").attr("href"); //Find the href attribute value to identify the selected-link tab + content $(activeTab).fadeIn(); //Fade in the selected-link ID content return false; }); }); Here is the XHTML code. <div id="home-menu"> <ul> <li><a href="#personal-info-form" title="Personal Info Form Link">Personal Info</a></li> <li><a href="#contact-info-form" title="Contact Info Form Link">Contact Info</a></li> </ul> </div> <div> <div id="personal-info-form" class="form-content"> <h2>Personal Information</h2> <form method="post" action="index.php"> <fieldset> <ul> <li><label for="first_name">First Name: </label><input type="text" name="first_name" id="first_name" size="25" class="input-size" value="<?php if(!empty($first_name)){ echo $first_name; } ?>" /></li> <li><label for="middle_name">Middle Name: </label><input type="text" name="middle_name" id="middle_name" size="25" class="input-size" value="<?php if(!empty($middle_name)){ echo $middle_name; } ?>" /></li> <li><label for="last_name">Last Name: </label><input type="text" name="last_name" id="last_name" size="25" class="input-size" value="<?php if(!empty($last_name)){ echo $last_name; } ?>" /></li> <li><label for="password-1">Password: </label><input type="password" name="password1" id="password-1" size="25" class="input-size" /></li> <li><label for="password-2">Confirm Password: </label><input type="password" name="password2" id="password-2" size="25" class="input-size" /></li> <li><input type="submit" name="submit" value="Save Changes" class="save-button" /> <input type="submit" name="submit" value="Preview Changes" class="preview-changes-button" /></li> </ul> </fieldset> </form> </div> <div id="contact-info-form" class="form-content"> <h2>Contact Information</h2> <form method="post" action="index.php" id="contact-form"> <fieldset> <ul> <li><label for="address">Address 1: </label><input type="text" name="address" id="address" size="25" class="input-size" value="<?php if (isset($_POST['address'])) { echo $_POST['address']; } else if(!empty($address)) { echo $address; } ?>" /></li> <li><label for="address_two">Address 2: </label><input type="text" name="address_two" id="address_two" size="25" class="input-size" value="<?php if (isset($_POST['address_two'])) { echo $_POST['address_two']; } else if(!empty($address_two)) { echo $address_two; } ?>" /></li> <li><label for="city_town">City/Town: </label><input type="text" name="city_town" id="city_town" size="25" class="input-size" value="<?php if (isset($_POST['city_town'])) { echo $_POST['city_town']; } else if(!empty($city_town)) { echo $city_town; } ?>" /></li> <li><input type="submit" name="submit" value="Save Changes" class="save-button" /> <input type="hidden" name="contact_info_submitted" value="true" /> <input type="submit" 
name="submit" value="Preview Changes" class="preview-changes-button" /></li> </ul> </fieldset> </form> </div> </div>

    Read the article

  • can't install psycopg2 in my env on mac os x lion

    - by Alexander Ovchinnikov
    I tried install psycopg2 via pip in my virtual env, but got this error: ld: library not found for -lpq (full log here: http://pastebin.com/XdmGyJ4u ) I tried install postgres 9.1 from .dmg and via port, (gksks)iMac-Alexander:~ lorddaedra$ locate libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq/libpq-fs.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq-events.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/libpq-fe.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/internal/libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/internal/libpq/pqcomm.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/internal/libpq-int.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/auth.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/be-fsstubs.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/crypt.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/hba.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/ip.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/libpq-be.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/libpq-fs.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/libpq.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/md5.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/pqcomm.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/pqformat.h /Developer/SDKs/MacOSX10.7.sdk/usr/include/postgresql/server/libpq/pqsignal.h /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.5.3.dylib /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.5.dylib /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.a /Developer/SDKs/MacOSX10.7.sdk/usr/lib/libpq.dylib /Library/PostgreSQL/9.1/doc/postgresql/html/install-windows-libpq.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-async.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-build.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-cancel.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-connect.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-control.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-copy.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-envars.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-events.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-example.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-exec.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-fastpath.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-ldap.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-misc.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-notice-processing.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-notify.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-pgpass.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-pgservice.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-ssl.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-status.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq-threading.html /Library/PostgreSQL/9.1/doc/postgresql/html/libpq.html /Library/PostgreSQL/9.1/include/libpq /Library/PostgreSQL/9.1/include/libpq/libpq-fs.h /Library/PostgreSQL/9.1/include/libpq-events.h /Library/PostgreSQL/9.1/include/libpq-fe.h /Library/PostgreSQL/9.1/include/postgresql/internal/libpq 
/Library/PostgreSQL/9.1/include/postgresql/internal/libpq/pqcomm.h /Library/PostgreSQL/9.1/include/postgresql/internal/libpq-int.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq /Library/PostgreSQL/9.1/include/postgresql/server/libpq/auth.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/be-fsstubs.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/crypt.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/hba.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/ip.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/libpq-be.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/libpq-fs.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/libpq.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/md5.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/pqcomm.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/pqformat.h /Library/PostgreSQL/9.1/include/postgresql/server/libpq/pqsignal.h /Library/PostgreSQL/9.1/lib/libpq.5.4.dylib /Library/PostgreSQL/9.1/lib/libpq.5.dylib /Library/PostgreSQL/9.1/lib/libpq.a /Library/PostgreSQL/9.1/lib/libpq.dylib /Library/PostgreSQL/9.1/lib/postgresql/libpqwalreceiver.so /Library/PostgreSQL/9.1/pgAdmin3.app/Contents/Frameworks/libpq.5.dylib /Library/PostgreSQL/psqlODBC/lib/libpq.5.4.dylib /Library/PostgreSQL/psqlODBC/lib/libpq.5.dylib /Library/PostgreSQL/psqlODBC/lib/libpq.dylib /Library/WebServer/Documents/postgresql/html/install-windows-libpq.html /Library/WebServer/Documents/postgresql/html/libpq-async.html /Library/WebServer/Documents/postgresql/html/libpq-build.html /Library/WebServer/Documents/postgresql/html/libpq-cancel.html /Library/WebServer/Documents/postgresql/html/libpq-connect.html /Library/WebServer/Documents/postgresql/html/libpq-control.html /Library/WebServer/Documents/postgresql/html/libpq-copy.html /Library/WebServer/Documents/postgresql/html/libpq-envars.html /Library/WebServer/Documents/postgresql/html/libpq-events.html /Library/WebServer/Documents/postgresql/html/libpq-example.html /Library/WebServer/Documents/postgresql/html/libpq-exec.html /Library/WebServer/Documents/postgresql/html/libpq-fastpath.html /Library/WebServer/Documents/postgresql/html/libpq-ldap.html /Library/WebServer/Documents/postgresql/html/libpq-misc.html /Library/WebServer/Documents/postgresql/html/libpq-notice-processing.html /Library/WebServer/Documents/postgresql/html/libpq-notify.html /Library/WebServer/Documents/postgresql/html/libpq-pgpass.html /Library/WebServer/Documents/postgresql/html/libpq-pgservice.html /Library/WebServer/Documents/postgresql/html/libpq-ssl.html /Library/WebServer/Documents/postgresql/html/libpq-status.html /Library/WebServer/Documents/postgresql/html/libpq-threading.html /Library/WebServer/Documents/postgresql/html/libpq.html /opt/local/include/postgresql90/internal/libpq /opt/local/include/postgresql90/internal/libpq/pqcomm.h /opt/local/include/postgresql90/internal/libpq-int.h /opt/local/include/postgresql90/libpq /opt/local/include/postgresql90/libpq/libpq-fs.h /opt/local/include/postgresql90/libpq-events.h /opt/local/include/postgresql90/libpq-fe.h /opt/local/include/postgresql90/server/libpq /opt/local/include/postgresql90/server/libpq/auth.h /opt/local/include/postgresql90/server/libpq/be-fsstubs.h /opt/local/include/postgresql90/server/libpq/crypt.h /opt/local/include/postgresql90/server/libpq/hba.h /opt/local/include/postgresql90/server/libpq/ip.h /opt/local/include/postgresql90/server/libpq/libpq-be.h /opt/local/include/postgresql90/server/libpq/libpq-fs.h 
/opt/local/include/postgresql90/server/libpq/libpq.h /opt/local/include/postgresql90/server/libpq/md5.h /opt/local/include/postgresql90/server/libpq/pqcomm.h /opt/local/include/postgresql90/server/libpq/pqformat.h /opt/local/include/postgresql90/server/libpq/pqsignal.h /opt/local/lib/postgresql90/libpq.5.3.dylib /opt/local/lib/postgresql90/libpq.5.dylib /opt/local/lib/postgresql90/libpq.a /opt/local/lib/postgresql90/libpq.dylib /opt/local/lib/postgresql90/libpqwalreceiver.so /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx/Portfile /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx26 /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/databases/libpqxx26/Portfile /usr/include/libpq /usr/include/libpq/libpq-fs.h /usr/include/libpq-events.h /usr/include/libpq-fe.h /usr/include/postgresql/internal/libpq /usr/include/postgresql/internal/libpq/pqcomm.h /usr/include/postgresql/internal/libpq-int.h /usr/include/postgresql/server/libpq /usr/include/postgresql/server/libpq/auth.h /usr/include/postgresql/server/libpq/be-fsstubs.h /usr/include/postgresql/server/libpq/crypt.h /usr/include/postgresql/server/libpq/hba.h /usr/include/postgresql/server/libpq/ip.h /usr/include/postgresql/server/libpq/libpq-be.h /usr/include/postgresql/server/libpq/libpq-fs.h /usr/include/postgresql/server/libpq/libpq.h /usr/include/postgresql/server/libpq/md5.h /usr/include/postgresql/server/libpq/pqcomm.h /usr/include/postgresql/server/libpq/pqformat.h /usr/include/postgresql/server/libpq/pqsignal.h /usr/lib/libpq.5.3.dylib /usr/lib/libpq.5.dylib /usr/lib/libpq.a /usr/lib/libpq.dylib How to tell pip to use this lib in /Library/PostgreSQL/9.1/lib/ (or may be in /usr/lib)? or may be install this lib again in my env (i try keep my env isolated from mac as possible)
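    One hedged workaround (not part of the original question): the linker error usually means the build can't see the libpq that belongs to the PostgreSQL 9.1 install, because pip's build step finds the wrong (or no) pg_config. A minimal sketch, assuming the .dmg install in /Library/PostgreSQL/9.1 (paths taken from the locate output above), is to put that install in front of the build environment before running pip inside the activated virtualenv:

        # Hedged sketch -- run inside the activated virtualenv.
        export PATH=/Library/PostgreSQL/9.1/bin:$PATH     # lets the build find that install's pg_config
        export LDFLAGS="-L/Library/PostgreSQL/9.1/lib"    # lets ld find -lpq
        pip install psycopg2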

    Read the article

  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
    What are the bcdedit commands necessary to setup dual boot between different installations of Windows?5 Background i recently installed Windows 8 onto a separate hard drive1. Now that Windows 8 in installed i want to dual-boot back to Windows 7. i have my two2 hard drives: So you can see that i have my two disks, with the partitions containing Windows: Windows 7: \\PhysicalDisk0 (partition 03) Windows 8: \\PhysicalDisk2 (partition 1) What i'm trying to figure out how is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now, it shows current configuration: C:\WINDOWS\system32>bcdedit Windows Boot Manager -------------------- identifier {bootmgr} device partition=\Device\HarddiskVolume2 description Windows Boot Manager locale en-US inherit {globalsettings} integrityservices Enable default {current} resumeobject {ce153eb7-3786-11e2-87c0-e740e123299f} displayorder {current} toolsdisplayorder {memdiag} timeout 30 Windows Boot Loader ------------------- identifier {current} device partition=C: path \WINDOWS\system32\winload.exe description Windows 8 locale en-US inherit {bootloadersettings} recoverysequence {ce153eb9-3786-11e2-87c0-e740e123299f} integrityservices Enable recoveryenabled Yes allowedinmemorysettings 0x15000075 osdevice partition=C: systemroot \WINDOWS resumeobject {ce153eb7-3786-11e2-87c0-e740e123299f} nx OptIn bootmenupolicy Standard hypervisorlaunchtype Auto i cannot find any documentation on the difference between Windows Boot Manager and Windows Boot Loader. Documentation There is some documentation on Bcdedit: Technet: Command Line Reference - Bcdedit Technet: Windows Automated Installation Kit - BCDEdit Command Line Options Whitepaper - BCDEdit Commands for Boot Environment (Word Document) But they don't explain how edit the binary boot configuration data If i had to guess, i would think that a Windows Boot Manager instructs the BIOS what program it should run. That program would give the user a set of boot choices. That leaves Windows Boot Loader do be a particular boot choice, that represents a particular installation of Windows. If that is the case i would need to create a new Windows Boot Loader entry. This means i might want to use the /create parameter: /create Creates a new boot entry: bcdedit [/store filename] /create [id] /d description [/application apptype | /inherit [apptype] | /inherit DEVICE | /device] So i assume a syntax of: >bcdedit /create /d "The old Windows 7" /application osloader Where application can be one of the following types: Apptype Description BOOTSECTOR The boot sector application OSLOADER The Windows boot loader RESUME A resume application Unfortunately, the only documentation about osloader is "The Windows boot loader". i don't see how that can differentiate between Windows 8 on one hard drive, and Windows 7 on another. The other possible parameter when /create a boot loader is >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox" Unfortunately the documentation is missing for /device: /device Optional. If id is not set to a well-known identifier, the option that is used to specify the new boot entry as an additional device options entry. Since i did not set id to a well-known identifier, i must set /device to "the option that is used to specify the new boot entry as an additional device options entry". i know all those words; they're all English. But i have on idea what it is saying; those words in that order seem nonsensical. 
So i'm somewhat stymied. i don't want to be like Dan Stolts from Microsoft: I found no content that was particularly helpful when I hosed my machine by playing with BCDEdit. This post would have been ok if there was much more detail especially on the /set command OSDevice, etc. So once I got my machine fixed, I documented the solution and the information is here.... i mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance to i have? Bonus Reading BCDEdit Command-Line Options Bcdedit Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit Visual BCD Editor4 Windows 7 and Windows 8 RTM Dual Boot Setup Footnotes 1 Since the Windows 8 installer would have damaged my Windows 7 install, i decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install. Normally the installer would have automatically created the required entries for dual-boot. Not that the reason i'm asking the question is important. 2 Really there's three drives, but the third is just bulk storage. The existence of a 3rd hard drive is irrelevant to the question. i only mention it in case someone wants to know why the screenshot has 3 hard drives when i only mention two. 3 i arbitrarily started numbering partitions at "zero"; not to imply that partitions are numbered starting at zero. i only mention partitions because i don't see how any boot-loader could do its job without knowing which partition, and which folder, an installation of Windows is located in. 4 i'm asking about BCDEdit. i tried Visual BCD Editor. It seems to be a visual BCD editor. That is to say that it's a GUI, but still uses the same terminology as BCDEdit, and requires the same knowledge that BCD doesn't document. 5 For simplicity sake we'll assume that all installation of Windows i want to dual-boot between are Windows Vista or later, making them all compatible with the BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntloader. Nor am i asking about dual booting to Linux; or how to boot to a Virtual Hard Drive (vhd) image. Just modern versions of Windows on existing hard drives in the same machine. Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been know to close questions if there is too much research effort. Some people close questions when i put the note saying that they can ignore everything after the Background out of spite. Some people are just grumpy.
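    For reference, the commonly cited way to add a second Windows entry is not to build one from scratch with /create but to clone the existing Windows Boot Loader entry with /copy and repoint it at the other partition. The sketch below is hedged: D: stands for whatever drive letter the running system assigns to the Windows 7 partition, and {NEW-GUID} is the identifier that the /copy command prints, neither of which comes from this machine.

        rem Hedged sketch -- D: and {NEW-GUID} are placeholders.
        bcdedit /copy {current} /d "Windows 7"
        bcdedit /set {NEW-GUID} device partition=D:
        bcdedit /set {NEW-GUID} osdevice partition=D:
        bcdedit /set {NEW-GUID} path \Windows\system32\winload.exe
        bcdedit /set {NEW-GUID} systemroot \Windows
        bcdedit /displayorder {NEW-GUID} /addlast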

    Read the article

  • ERROR: Attempted to read or write protected memory. This is often an indication that other memory is corrupt

    - by SPSamL
    I get this error after having edited a few pages in SharePoint 2010. I have to do an IISReset on both front ends to get this to resolve. I don't know how to fix it or even what else to supply here, but please let me know as the resets now happen several times per day. Log Name: Application Source: ASP.NET 2.0.50727.0 Date: 1/26/2011 11:12:48 AM Event ID: 1309 Task Category: Web Event Level: Warning Keywords: Classic User: N/A Computer: PINTSPSFE02.samcstl.org Description: Event code: 3005 Event message: An unhandled exception has occurred. Event time: 1/26/2011 11:12:48 AM Event time (UTC): 1/26/2011 5:12:48 PM Event ID: c52fb336b7f147a3913fff3617a99d57 Event sequence: 4965 Event occurrence: 2178 Event detail code: 0 Application information: Application domain: /LM/W3SVC/1449762715/ROOT-2-129405348166941887 Trust level: WSS_Minimal Application Virtual Path: / Application Path: C:\inetpub\wwwroot\wss\VirtualDirectories\80\ Machine name: PINTSPSFE02 Process information: Process ID: 5928 Process name: w3wp.exe Account name: SAMC\MossAppPool Exception information: Exception type: AccessViolationException Exception message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. Request information: Request URL: http://mosscluster/Pages/Home.aspx Request path: /Pages/Home.aspx User host address: 10.3.60.26 User: SAMC\BARNMD Is authenticated: True Authentication Type: NTLM Thread account name: SAMC\MossAppPool Thread information: Thread ID: 110 Thread account name: SAMC\MossAppPool Is impersonating: False Stack trace: at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Delete(String key, Boolean recursive, DeletionReason reason) at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Get(String key) at Microsoft.Office.Server.ObjectCache.SPCache.Get(String objectTypeName, String id) at Microsoft.Office.Server.Administration.UserProfileServiceProxy.GetPartitionPropertiesCache(Guid applicationID) at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_PartitionPropertiesCache() at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.DataCache.get_PartitionProperties() at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, Guid partitionID) at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, SPServiceContext serviceContext) at Microsoft.Office.Server.WebControls.MyLinksRibbon.EnsureMySiteUrls() at Microsoft.Office.Server.WebControls.MyLinksRibbon.get_PortalMySiteUrlAvailable() at Microsoft.Office.Server.WebControls.MyLinksRibbon.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) Custom event details: Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="ASP.NET 2.0.50727.0" /> <EventID Qualifiers="32768">1309</EventID> <Level>3</Level> <Task>3</Task> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2011-01-26T17:12:48.000000000Z" /> <EventRecordID>35834</EventRecordID> <Channel>Application</Channel> <Computer>PINTSPSFE02.samcstl.org</Computer> <Security /> </System> <EventData> 
<Data>3005</Data> <Data>An unhandled exception has occurred.</Data> <Data>1/26/2011 11:12:48 AM</Data> <Data>1/26/2011 5:12:48 PM</Data> <Data>c52fb336b7f147a3913fff3617a99d57</Data> <Data>4965</Data> <Data>2178</Data> <Data>0</Data> <Data>/LM/W3SVC/1449762715/ROOT-2-129405348166941887</Data> <Data>WSS_Minimal</Data> <Data>/</Data> <Data>C:\inetpub\wwwroot\wss\VirtualDirectories\80\</Data> <Data>PINTSPSFE02</Data> <Data> </Data> <Data>5928</Data> <Data>w3wp.exe</Data> <Data>SAMC\MossAppPool</Data> <Data>AccessViolationException</Data> <Data></Data> <Data>http://mosscluster/Pages/Home.aspx</Data> <Data>/Pages/Home.aspx</Data> <Data>10.3.60.26</Data> <Data>SAMC\BARNMD</Data> <Data>True</Data> <Data>NTLM</Data> <Data>SAMC\MossAppPool</Data> <Data>110</Data> <Data>SAMC\MossAppPool</Data> <Data>False</Data> <Data> at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Delete(String key, Boolean recursive, DeletionReason reason) at Microsoft.Office.Server.ObjectCache.SPCache.MossObjectCache_Tracked.Get(String key) at Microsoft.Office.Server.ObjectCache.SPCache.Get(String objectTypeName, String id) at Microsoft.Office.Server.Administration.UserProfileServiceProxy.GetPartitionPropertiesCache(Guid applicationID) at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.get_PartitionPropertiesCache() at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.DataCache.get_PartitionProperties() at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, Guid partitionID) at Microsoft.Office.Server.Administration.UserProfileApplicationProxy.GetMySitePortalUrl(SPUrlZone zone, SPServiceContext serviceContext) at Microsoft.Office.Server.WebControls.MyLinksRibbon.EnsureMySiteUrls() at Microsoft.Office.Server.WebControls.MyLinksRibbon.get_PortalMySiteUrlAvailable() at Microsoft.Office.Server.WebControls.MyLinksRibbon.OnLoad(EventArgs e) at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Control.LoadRecursive() at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) </Data> </EventData> </Event>
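    Until the underlying object-cache fault is resolved, a lighter-weight stopgap than a full IISReset is to recycle only the application pool serving the affected web application on each front end. The sketch below is an assumption: the pool name is illustrative and not taken from the log.

        rem Hedged sketch -- substitute the real pool name reported by "list apppool".
        %windir%\system32\inetsrv\appcmd.exe list apppool
        %windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"SharePoint - 80"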

    Read the article

  • Emails forwarded via postfix get flagged as spam and forged in Gmail

    - by Kendall Hopkins
    I'm trying to setup a forwarding only email server. I'm running into the problem where all messages forwarded via postfix are getting put into gmail's spam folder and getting flagged as forged. I'm testing a very similar setup on a cpanel box and their forwarded emails make it through without any problem. Things I've done: Setup reverse dns on forwarding box Setup SPF record for forwarding box domain CPanel route (not flagged as spam): [email protected] - [email protected] - [email protected] AWS postfix route (flagged as spam): [email protected] - [email protected] - [email protected] Gmail error message: /etc/postfix/main.cf myhostname = sputnik.*domain*.com smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no append_dot_mydomain = no readme_directory = no myorigin = /etc/mailname mydestination = sputnik.*domain*.com, localhost.*domain*.com, , localhost relayhost = mynetworks = 127.0.0.0/8 10.0.0.0/24 [::1]/128 [fe80::%eth0]/64 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all inet_protocols = all virtual_alias_maps = hash:/etc/postfix/virtual Email forwarded by CPanel (doesn't get marked as spam): Delivered-To: *personaluser*@gmail.com Received: by 10.182.144.98 with SMTP id sl2csp14396obb; Wed, 9 May 2012 09:18:36 -0700 (PDT) Received: by 10.182.52.38 with SMTP id q6mr1137571obo.8.1336580316700; Wed, 09 May 2012 09:18:36 -0700 (PDT) Return-Path: <mail@*personaldomain*.com> Received: from web6.*domain*.com (173.193.55.66-static.reverse.softlayer.com. [173.193.55.66]) by mx.google.com with ESMTPS id ec7si1845451obc.67.2012.05.09.09.18.36 (version=TLSv1/SSLv3 cipher=OTHER); Wed, 09 May 2012 09:18:36 -0700 (PDT) Received-SPF: neutral (google.com: 173.193.55.66 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) client-ip=173.193.55.66; Authentication-Results: mx.google.com; spf=neutral (google.com: 173.193.55.66 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) smtp.mail=mail@*personaldomain*.com Received: from mail-vb0-f43.google.com ([209.85.212.43]:56152) by web6.*domain*.com with esmtps (TLSv1:RC4-SHA:128) (Exim 4.77) (envelope-from <mail@*personaldomain*.com>) id 1SS9b2-0007J9-LK for mail@kendall.*domain*.com; Wed, 09 May 2012 12:18:36 -0400 Received: by vbbfq11 with SMTP id fq11so599132vbb.2 for <mail@kendall.*domain*.com>; Wed, 09 May 2012 09:18:35 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-originating-ip:date:message-id:subject:from:to :content-type:x-gm-message-state; bh=Hr0AH40uUtx/w/u9hltbrhHJhRaD5ubKmz2gGg44VLs=; b=IBKi6Xalr9XVFYwdkWxn9PLRB69qqJ9AjUPdvGh8VxMNW4S+hF6r4GJcGOvkDn2drO kw5r4iOpGuWUQPEMHRPyO4+Ozc9SE9s4Px2oVpadR6v3hO+utvFGoj7UuchsXzHqPVZ8 A9FS4cKiE0E0zurTjR7pfQtZT64goeEJoI/CtvcoTXj/Mdrj36gZ2FYtO8Qj4dFXpfu9 uGAKa4jYfx9zwdvhLzQ3mouWwQtzssKUD+IvyuRppLwI2WFb9mWxHg9n8y9u5IaduLn7 7TvLIyiBtS3DgqSKQy18POVYgnUFilcDorJs30hxFxJhzfTFW1Gdhrwjvz0MTYDSRiGQ P4aw== MIME-Version: 1.0 Received: by 10.52.173.209 with SMTP id bm17mr326586vdc.54.1336580315681; Wed, 09 May 2012 09:18:35 -0700 (PDT) Received: by 10.220.191.134 with HTTP; Wed, 9 May 2012 09:18:35 -0700 (PDT) X-Originating-IP: [99.50.225.7] Date: Wed, 9 May 2012 12:18:35 -0400 Message-ID: <CA+tP6Viyn0ms5RJoqtd20ms3pmQCgyU0yy7GBiaALEACcDBC2g@mail.gmail.com> Subject: test5 From: Kendall Hopkins <mail@*personaldomain*.com> To: mail@kendall.*domain*.com Content-Type: multipart/alternative; boundary=bcaec51b9bf5ee11c004bf9cda9c 
X-Gm-Message-State: ALoCoQm3t1Hohu7fEr5zxQZsC8FQocg662Jv5MXlPXBnPnx2AiQrbLsNQNknLy39Su45xBMCM47K X-AntiAbuse: This header was added to track abuse, please include it with any abuse report X-AntiAbuse: Primary Hostname - web6.*domain*.com X-AntiAbuse: Original Domain - kendall.*domain*.com X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12] X-AntiAbuse: Sender Address Domain - *personaldomain*.com X-Source: X-Source-Args: X-Source-Dir: --bcaec51b9bf5ee11c004bf9cda9c Content-Type: text/plain; charset=ISO-8859-1 test5 --bcaec51b9bf5ee11c004bf9cda9c Content-Type: text/html; charset=ISO-8859-1 test5 --bcaec51b9bf5ee11c004bf9cda9c-- Email forwarded via AWS postfix box (marked as spam): Delivered-To: *personaluser*@gmail.com Received: by 10.182.144.98 with SMTP id sl2csp14350obb; Wed, 9 May 2012 09:17:46 -0700 (PDT) Received: by 10.229.137.143 with SMTP id w15mr389471qct.37.1336580266237; Wed, 09 May 2012 09:17:46 -0700 (PDT) Return-Path: <mail@*personaldomain*.com> Received: from sputnik.*domain*.com (sputnik.*domain*.com. [107.21.39.201]) by mx.google.com with ESMTP id o8si1330855qct.115.2012.05.09.09.17.46; Wed, 09 May 2012 09:17:46 -0700 (PDT) Received-SPF: neutral (google.com: 107.21.39.201 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) client-ip=107.21.39.201; Authentication-Results: mx.google.com; spf=neutral (google.com: 107.21.39.201 is neither permitted nor denied by best guess record for domain of mail@*personaldomain*.com) smtp.mail=mail@*personaldomain*.com Received: from mail-vb0-f52.google.com (mail-vb0-f52.google.com [209.85.212.52]) by sputnik.*domain*.com (Postfix) with ESMTP id A308122AD6 for <mail@*personaldomain2*.com>; Wed, 9 May 2012 16:17:45 +0000 (UTC) Received: by vbzb23 with SMTP id b23so448664vbz.25 for <mail@*personaldomain2*.com>; Wed, 09 May 2012 09:17:45 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113; h=mime-version:x-originating-ip:date:message-id:subject:from:to :content-type:x-gm-message-state; bh=XAzjH9tUXn6SbadVSLwJs2JVbyY4arosdTuV8Nv+ARI=; b=U8gIgHd6mhWYqPU4MH/eyvo3kyZsDn/GiYwZj5CLbs6Zz/ZOXQkenRi7zW3ewVFi/9 uAFylT8SQ+Wjw2l6OgAioCTojfZ58s4H/JW+1bu460KAP9aeOTcZDNSsHlsj0wvH5XRV 4DQJa11kz+WFVtVVcFuB33WVUPAgJfXzY+pSTe+FWsrZyrrwL7/Vm9TSKI5PBwRN9i4g zAZabgkmw1o2THT3kbJi6vAbPzlqK2LVbgt82PP0emHdto7jl4iD5F6lVix4U0dsrtRv xuGUE0gDyIwJuR4Q5YTkNubwGH/Y2bFBtpx2q1IORANrolWxIGaZSceUWawABkBGPABX 1/eg== MIME-Version: 1.0 Received: by 10.52.96.169 with SMTP id dt9mr282954vdb.107.1336580265812; Wed, 09 May 2012 09:17:45 -0700 (PDT) Received: by 10.220.191.134 with HTTP; Wed, 9 May 2012 09:17:45 -0700 (PDT) X-Originating-IP: [99.50.225.7] Date: Wed, 9 May 2012 12:17:45 -0400 Message-ID: <CA+tP6VgqZrdxP543Y28d1eMwJAs4DxkS4EE6bvRL8nFoMkgnQQ@mail.gmail.com> Subject: test4 From: Kendall Hopkins <mail@*personaldomain*.com> To: mail@*personaldomain2*.com Content-Type: multipart/alternative; boundary=20cf307f37f6f521b304bf9cd79d X-Gm-Message-State: ALoCoQkrNcfSTWz9t6Ir87KEYyM+zJM4y1AbwP86NMXlk8B3ALhnis+olFCKdgPnwH/sIdzF3+Nh --20cf307f37f6f521b304bf9cd79d Content-Type: text/plain; charset=ISO-8859-1 test4 --20cf307f37f6f521b304bf9cd79d Content-Type: text/html; charset=ISO-8859-1 test4 --20cf307f37f6f521b304bf9cd79d--
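    Because postfix forwards the message with the original envelope sender intact, Gmail evaluates the sender domain's SPF against sputnik's address and gets only a "neutral" result, as the Received-SPF header above shows. One hedged mitigation (not from the question) is to authorize the forwarding host in that domain's SPF record; the sketch below uses zone-file syntax and sputnik's public address from the headers above.

        ; Hedged sketch of a TXT record for the envelope sender's domain.
        *personaldomain*.com.  3600  IN  TXT  "v=spf1 a mx ip4:107.21.39.201 ~all"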

    Read the article

  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for 6-hours - I am giving this up :| We have a nginx+php-fpm+mysql in LAN with almost 100 wordpress (created and used by different designers/developers all working on test wordpres setup) We are using nginx without any issues from long. Today, all of a sudden - nginx started returning "504 Gateway Time-out" out of the blue... I checked nginx error log for a virtual host... 2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" As I run php-fpm on port 9000 via TCP mode, I ran "netstat | grep 9000" and noticed something unusual... 
(Pasting partial output here for ease of read) tcp 9 0 localhost:9000 localhost:36094 CLOSE_WAIT 14269/php5-fpm tcp 0 0 localhost:46664 localhost:9000 FIN_WAIT2 - tcp 1257 0 localhost:9000 localhost:36135 CLOSE_WAIT - tcp 1257 0 localhost:9000 localhost:36125 CLOSE_WAIT - tcp 9 0 localhost:9000 localhost:36102 CLOSE_WAIT 14268/php5-fpm tcp 0 0 localhost:46662 localhost:9000 FIN_WAIT2 - tcp 745 0 localhost:9000 localhost:46644 CLOSE_WAIT - tcp 0 0 localhost:46658 localhost:9000 FIN_WAIT2 - tcp 1265 0 localhost:9000 localhost:46607 CLOSE_WAIT - tcp 0 0 localhost:46672 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1257 0 localhost:9000 localhost:36119 CLOSE_WAIT - tcp 1265 0 localhost:9000 localhost:46613 CLOSE_WAIT - tcp 0 0 localhost:46646 localhost:9000 FIN_WAIT2 - tcp 1257 0 localhost:9000 localhost:36137 CLOSE_WAIT - tcp 0 0 localhost:46670 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1265 0 localhost:9000 localhost:46619 CLOSE_WAIT - tcp 1336 0 localhost:9000 localhost:46668 ESTABLISHED - tcp 0 0 localhost:46648 localhost:9000 FIN_WAIT2 - tcp 1336 0 localhost:9000 localhost:46670 ESTABLISHED - tcp 9 0 localhost:9000 localhost:36108 CLOSE_WAIT 14274/php5-fpm tcp 1336 0 localhost:9000 localhost:46684 ESTABLISHED - tcp 0 0 localhost:46674 localhost:9000 ESTABLISHED 12909/nginx: worker tcp 1336 0 localhost:9000 localhost:46666 ESTABLISHED - tcp 1257 0 localhost:9000 localhost:46648 CLOSE_WAIT - tcp 1336 0 localhost:9000 localhost:46678 ESTABLISHED - tcp 0 0 localhost:46668 localhost:9000 ESTABLISHED 12909/nginx: wo There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs as highlighted below (in above output): tcp 1337 0 localhost:9000 localhost:46680 CLOSE_WAIT - tcp 0 0 localhost:46680 localhost:9000 FIN_WAIT2 - Please note port 46680 in above. I enabled mysql slow queries error log, but it didn't work. As of now restarting php5-fpm every minute via a cronjob (see command below) keeping everything running "smoothly" but I hate patchwork and want to solve this... 1 * * * * service php5-fpm restart > /dev/null I searched extensively on Google - got no help. As mentioned, this a test-server in LAN, CPU load is never crossed 0.10 and memory usage is also below 25% (System has 2GB RAM and ubuntu-server installed) So if you find its time-confusing to help me out, please atleast drop a hint. Thanks in advance for help. -Rahul (note - this is reposting of - http://forum.nginx.org/read.php?11,127694) Update: I found answer, which is posted below.
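    The piles of CLOSE_WAIT/FIN_WAIT2 pairs on port 9000 point at php-fpm workers getting stuck or exhausted rather than at nginx itself. As a hedged sketch only (not the answer the poster eventually found), bounding worker lifetime in the pool config and giving nginx a longer FastCGI read timeout are the usual first knobs; all values below are illustrative:

        ; /etc/php5/fpm/pool.d/www.conf -- illustrative values
        pm = dynamic
        pm.max_children = 25
        pm.max_requests = 500              ; recycle workers periodically
        request_terminate_timeout = 60s    ; kill stuck requests instead of leaving half-closed sockets

        # nginx vhost, inside the fastcgi location block
        fastcgi_read_timeout 120;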

    Read the article

  • Failing SATA HDD

    - by DaveCol
    I think my HDD is fried... Could someone confirm or help me restore it? I was using Hardware RAID 1 Configuration [2 x 160GB SATA HDD] on a CentOS 4 Installation. All of a sudden I started seeing bad sectors on the second HDD which stopped being mirrored. I have removed the RAID array and have tested with SMART which showed the following error: 187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645 I have no clue what this means, or if I can recover from it. Could someone give me some ideas on how to fix this, or what HDD to get to replace this? Complete SMART report: Smartctl version 5.33 [i686-redhat-linux-gnu] Copyright (C) 2002-4 Bruce Allen Home page is http://smartmontools.sourceforge.net/ === START OF INFORMATION SECTION === Device Model: GB0160CAABV Serial Number: 6RX58NAA Firmware Version: HPG1 User Capacity: 160,041,885,696 bytes Device is: Not in smartctl database [for details use: -P showall] ATA Version is: 7 ATA Standard is: ATA/ATAPI-7 T13 1532D revision 4a Local Time is: Tue Oct 19 13:42:42 2010 COT SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED See vendor-specific Attribute list for marginal Attributes. General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 433) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 54) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 253 006 Pre-fail Always - 0 3 Spin_Up_Time 0x0002 097 097 000 Old_age Always - 0 4 Start_Stop_Count 0x0033 100 100 020 Pre-fail Always - 152 5 Reallocated_Sector_Ct 0x0033 095 095 036 Pre-fail Always - 214 7 Seek_Error_Rate 0x000f 078 060 030 Pre-fail Always - 73109713 9 Power_On_Hours 0x0032 083 083 000 Old_age Always - 15133 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0033 100 100 020 Pre-fail Always - 154 184 Unknown_Attribute 0x0032 038 038 000 Old_age Always - 62 187 Unknown_Attribute 0x003a 001 001 051 Old_age Always FAILING_NOW 4645 189 Unknown_Attribute 0x0022 100 100 000 Old_age Always - 0 190 Unknown_Attribute 0x001a 061 055 000 Old_age Always - 656408615 194 Temperature_Celsius 0x0000 039 045 000 Old_age Offline - 39 (Lifetime Min/Max 0/22) 195 Hardware_ECC_Recovered 0x0032 070 059 000 Old_age Always - 12605265 197 Current_Pending_Sector 0x0000 100 100 000 Old_age Offline - 1 198 Offline_Uncorrectable 0x0000 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0000 200 200 000 Old_age Offline - 62 SMART Error Log Version: 1 ATA Error Count: 4645 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 4645 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 02 7b 86 b1 ea 00 00:38:52.796 READ DMA ec 03 45 00 00 00 a0 00 00:38:52.796 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:52.794 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:49.991 IDENTIFY DEVICE c8 00 04 79 86 b1 ea 00 00:38:49.935 READ DMA Error 4644 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 04 79 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:49.991 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:49.935 READ DMA Error 4643 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA Error 4642 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA Error 4641 occurred at disk power-on lifetime: 15132 hours (630 days + 12 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 40 51 00 7b 86 b1 ea Error: UNC at LBA = 0x0ab1867b = 179406459 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 06 77 86 b1 ea 00 00:38:41.517 READ DMA ec 03 45 00 00 00 a0 00 00:38:41.515 IDENTIFY DEVICE ef 03 45 00 00 00 a0 00 00:38:41.515 SET FEATURES [Set transfer mode] ec 00 00 7b 86 b1 a0 00 00:38:41.513 IDENTIFY DEVICE c8 00 06 77 86 b1 ea 00 00:38:38.706 READ DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 15131 - # 2 Short offline Completed without error 00% 15131 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay.
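    On most drives, SMART attribute 187 is Reported_Uncorrect (uncorrectable read errors reported to the host); a normalized value of 1 with 4645 raw errors, together with the pending sector and the high reallocation count, points at a disk that is failing rather than something recoverable. A hedged sketch of the usual next steps follows; the device name is a placeholder for the failing member of the old array.

        # Hedged sketch -- /dev/sdb is a placeholder device name.
        smartctl -t long /dev/sdb        # run an extended self-test
        smartctl -l selftest /dev/sdb    # check the self-test result
        smartctl -A /dev/sdb             # re-read the attribute table
        # If anything lives only on this disk, image it before it degrades further:
        ddrescue /dev/sdb /mnt/backup/disk.img /mnt/backup/disk.log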

    Read the article

  • DRBD Not syncing between my nodes

    - by Mike Curry
    Some version info: Operating system is Ubuntu 11.10, on EC2, kernel is 3.0.0-16-virtual and the application info is: Version: 8.3.11 (api:88) GIT-hash: 0de839cee13a4160eed6037c4bddd066645e23c5 build by buildd@allspice, 2011-07-05 19:51:07 Getting some strange errors in dmesg (seen below) as well, there is no replication happening. I have made my first node primary and its showing: drbd driver loaded OK; device status: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 m:res cs ro ds p mounted fstype 0:r0 StandAlone Primary/Unknown UpToDate/DUnknown r----s ext3 my secondary node is showing: drbd driver loaded OK; device status: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 m:res cs ro ds p mounted fstype 0:r0 StandAlone Secondary/Unknown Inconsistent/DUnknown r----s Showing /proc/drbd on the master shows: version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r----s ns:0 nr:0 dw:4 dr:1073 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:262135964 Showing /proc/drbd on the slave shows that there is nothing being transfered... version: 8.3.11 (api:88/proto:86-96) srcversion: DA5A13F16DE6553FC7CE9B2 0: cs:StandAlone ro:Secondary/Unknown ds:Inconsistent/DUnknown r----s ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:262135964 Here is my config... resource r0 { protocol C; startup { wfc-timeout 15; degr-wfc-timeout 60; } net { cram-hmac-alg sha1; shared-secret "test123; } on drbd01 { device /dev/drbd0; disk /dev/xvdm; address 23.XX.XX.XX:7788; # blocked out ip meta-disk internal; } on drbd02 { device /dev/drbd0; disk /dev/xvdm; address 184.XX.XX.XX:7788; #blocked out ip meta-disk internal; } } I have run the following on the master: sudo drbdadm -- --overwrite-data-of-peer primary all There is no firewall between the systems. Here is the dmesg with some errors: [2285172.969955] drbd: initialized. Version: 8.3.11 (api:88/proto:86-96) [2285172.969960] drbd: srcversion: DA5A13F16DE6553FC7CE9B2 [2285172.969962] drbd: registered as block device major 147 [2285172.969965] drbd: minor_table @ 0xffff88000276ea00 [2285173.000952] block drbd0: Starting worker thread (from drbdsetup [1300]) [2285173.003971] block drbd0: disk( Diskless -> Attaching ) [2285173.006150] block drbd0: No usable activity log found. [2285173.006154] block drbd0: Method to ensure write ordering: flush [2285173.006158] block drbd0: max BIO size = 4096 [2285173.006165] block drbd0: drbd_bm_resize called with capacity == 524271928 [2285173.008512] block drbd0: resync bitmap: bits=65533991 words=1023969 pages=2000 [2285173.008518] block drbd0: size = 250 GB (262135964 KB) [2285173.079566] block drbd0: bitmap READ of 2000 pages took 17 jiffies [2285173.081189] block drbd0: recounting of set bits took additional 1 jiffies [2285173.081194] block drbd0: 250 GB (65533991 bits) marked out-of-sync by on disk bit-map. 
[2285173.081203] block drbd0: Suspended AL updates [2285173.081210] block drbd0: disk( Attaching -> UpToDate ) [2285173.081214] block drbd0: attached to UUIDs 1C1291D39584C1D1:0000000000000004:0000000000000000:0000000000000000 [2285173.095016] block drbd0: conn( StandAlone -> Unconnected ) [2285173.095046] block drbd0: Starting receiver thread (from drbd0_worker [1301]) [2285173.099297] block drbd0: receiver (re)started [2285173.099304] block drbd0: conn( Unconnected -> WFConnection ) [2285173.099330] block drbd0: bind before connect failed, err = -99 [2285173.099346] block drbd0: conn( WFConnection -> Disconnecting ) [2285173.295788] block drbd0: Discarding network configuration. [2285173.295815] block drbd0: Connection closed [2285173.295826] block drbd0: conn( Disconnecting -> StandAlone ) [2285173.295840] block drbd0: receiver terminated [2285173.295844] block drbd0: Terminating drbd0_receiver Edit: Reading some other similar issues, it was suggested to do a 'drbdadm dump all', so I figured it couldn't hurt. ubuntu@drbd01:~$ drbdadm dump all /etc/drbd.conf:19: in resource r0, on drbd01: IP 23.XX.XX.XX not found on this host. and on slave: root@drbd02:~# drbdadm dump all /etc/drbd.conf:25: in resource r0, on drbd02: IP 184.XX.XX.XX not found on this host. Strange it doesn't find its own ip, however, this is an Amazon EC2 system using an elastic IP... here are my ipconfigs for both... master: ubuntu@drbd01:~$ ifconfig eth0 Link encap:Ethernet HWaddr 22:00:0a:1c:27:11 inet addr:10.28.39.17 Bcast:10.28.39.63 Mask:255.255.255.192 inet6 addr: fe80::2000:aff:fe1c:2711/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:1569 errors:0 dropped:0 overruns:0 frame:0 TX packets:1169 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:124409 (124.4 KB) TX bytes:213601 (213.6 KB) Interrupt:26 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) slave: root@drbd02:~# ifconfig eth0 Link encap:Ethernet HWaddr 12:31:3f:00:14:9d inet addr:10.160.27.107 Bcast:10.160.27.255 Mask:255.255.254.0 inet6 addr: fe80::1031:3fff:fe00:149d/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:915 errors:0 dropped:0 overruns:0 frame:0 TX packets:774 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:75381 (75.3 KB) TX bytes:109673 (109.6 KB) Interrupt:26 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
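    The "IP ... not found on this host" messages and the bind failure (err -99 is EADDRNOTAVAIL) fit the EC2 setup: an elastic IP is NATed and never actually bound to eth0, so DRBD cannot listen on it. A hedged sketch (assuming the two instances can reach each other's private addresses) is to use the addresses ifconfig actually shows:

        # Hedged sketch of the per-host addresses, using the private IPs from ifconfig above.
        on drbd01 {
                device    /dev/drbd0;
                disk      /dev/xvdm;
                address   10.28.39.17:7788;
                meta-disk internal;
        }
        on drbd02 {
                device    /dev/drbd0;
                disk      /dev/xvdm;
                address   10.160.27.107:7788;
                meta-disk internal;
        }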

    Read the article

  • OpenVPN on EC2 in bridged mode connects but no ping, DNS or forwarding

    - by michael
    I am trying to use OpenVPN to access the internet over a secure connection. I have openVPN configured and running on Amazon EC2 in bridge mode with client certs. I can successfully connect from the client, but I cannot get access to the internet or ping anything from the client I checked the following and everything seems to shows a successful connection between the vpn client/server and UDP traffic on 1194 [server] sudo tcpdump -i eth0 udp port 1194 (shows UDP traffic after establishing connection) [server] sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] sudo iptables -L -t nat Chain PREROUTING (policy ACCEPT) target prot opt source destination Chain POSTROUTING (policy ACCEPT) target prot opt source destination MASQUERADE all -- ip-W-X-Y-0.us-west-1.compute.internal/24 anywhere Chain OUTPUT (policy ACCEPT) target prot opt source destination [server] openvpn.log Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 [localhost] Inactivity timeout (--ping-restart), restarting Wed Oct 19 03:11:26 2011 localhost/a.b.c.d:61905 SIGUSR1[soft,ping-restart] received, client-instance restarting Wed Oct 19 03:41:31 2011 MULTI: multi_create_instance called Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Re-using SSL/TLS context Wed Oct 19 03:41:31 2011 a.b.c.d:57889 LZO compression initialized Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Control Channel MTU parms [ L:1574 D:166 EF:66 EB:0 ET:0 EL:0 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Data Channel MTU parms [ L:1574 D:1450 EF:42 EB:135 ET:32 EL:0 AF:3/1 ] Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Local Options hash (VER=V4): '360696c5' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 Expected Remote Options hash (VER=V4): '13a273ba' Wed Oct 19 03:41:31 2011 a.b.c.d:57889 TLS: Initial packet from [AF_INET]a.b.c.d:57889, sid=dd886604 ab6ebb38 Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=1, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=EXAMPLE_CA/[email protected] Wed Oct 19 03:41:35 2011 a.b.c.d:57889 VERIFY OK: depth=0, /C=US/ST=CA/L=SanFrancisco/O=EXAMPLE/CN=localhost/[email protected] Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Oct 19 03:41:37 2011 a.b.c.d:57889 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Oct 19 03:41:37 2011 a.b.c.d:57889 [localhost] Peer Connection Initiated with [AF_INET]a.b.c.d:57889 Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 PUSH: Received control message: 'PUSH_REQUEST' Wed Oct 19 03:41:39 2011 localhost/a.b.c.d:57889 SENT CONTROL [localhost]: 'PUSH_REPLY,redirect-gateway def1 bypass-dhcp,route-gateway W.X.Y.Z,ping 10,ping-restart 120,ifconfig W.X.Y.Z 255.255.255.0' (status=1) Wed Oct 19 03:41:40 2011 localhost/a.b.c.d:57889 MULTI: Learn: (IPV6) -> localhost/a.b.c.d:57889 [client] tracert google.com Tracing route to google.com [74.125.71.104] over a maximum of 30 hops: 1 347 ms 349 ms 348 ms PC [w.X.Y.Z] 2 * * * Request timed out. 
I can also successfully ping the server IP address from the client, and ping google.com from an SSH shell on the server. What am I doing wrong? Here is my config (Note: W.X.Y.Z == amazon EC2 private ipaddress) bridge config on br0 ifconfig eth0 0.0.0.0 promisc up brctl addbr br0 brctl addif br0 eth0 ifconfig br0 W.X.Y.X netmask 255.255.255.0 broadcast W.X.Y.255 up route add default gw W.X.Y.1 br0 /etc/openvpn/server.conf (from https://help.ubuntu.com/10.04/serverguide/C/openvpn.html) local W.X.Y.Z dev tap0 up "/etc/openvpn/up.sh br0" down "/etc/openvpn/down.sh br0" ;server W.X.Y.0 255.255.255.0 server-bridge W.X.Y.Z 255.255.255.0 W.X.Y.105 W.X.Y.200 ;push "route W.X.Y.0 255.255.255.0" push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" tls-auth ta.key 0 # This file is secret user nobody group nogroup log-append openvpn.log iptables config sudo iptables -A INPUT -i tap0 -j ACCEPT sudo iptables -A INPUT -i br0 -j ACCEPT sudo iptables -A FORWARD -i br0 -j ACCEPT sudo iptables -t nat -A POSTROUTING -s W.X.Y.0/24 -o eth0 -j MASQUERADE echo 1 > /proc/sys/net/ipv4/ip_forward Routing Tables added route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface W.X.Y.0 0.0.0.0 255.255.255.0 U 0 0 0 br0 0.0.0.0 W.X.Y.1 0.0.0.0 UG 0 0 0 br0 C:>route print =========================================================================== Interface List 32...00 ff ac d6 f7 04 ......TAP-Win32 Adapter V9 15...00 14 d1 e9 57 49 ......Microsoft Virtual WiFi Miniport Adapter #2 14...00 14 d1 e9 57 49 ......Realtek RTL8191SU Wireless LAN 802.11n USB 2.0 Net work Adapter 10...00 1f d0 50 1b ca ......Realtek PCIe GBE Family Controller 1...........................Software Loopback Interface 1 11...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 17...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 18...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #3 36...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #5 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.1.2.1 10.1.2.201 25 10.1.2.0 255.255.255.0 On-link 10.1.2.201 281 10.1.2.201 255.255.255.255 On-link 10.1.2.201 281 10.1.2.255 255.255.255.255 On-link 10.1.2.201 281 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 10.1.2.201 281 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 10.1.2.201 281 =========================================================================== Persistent Routes: Network Address Netmask Gateway Address Metric 0.0.0.0 0.0.0.0 10.1.2.1 Default =========================================================================== C:>tracert google.com Tracing route to google.com [74.125.71.147] over a maximum of 30 hops: 1 344 ms 345 ms 343 ms PC [W.X.Y.221] 2 * * * Request timed out.
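    EC2's network does not behave like an ordinary layer-2 segment, so a bridged (tap) server is fighting the platform; a routed (tun) setup is the usual alternative. The sketch below is hedged: 10.8.0.0/24 is an arbitrary tunnel subnet, not an address from this setup.

        # Hedged sketch of a routed server.conf plus NAT for the tunnel subnet.
        dev tun
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 208.67.222.222"
        push "dhcp-option DNS 208.67.220.220"

        # on the server, NAT the tunnel out of eth0:
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward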

    Read the article

  • nginx can't load images,css,js

    - by EquinoX
    When I point to a URL in nginx where it has images extension such as: http://50.56.81.42/phpMyAdmin/themes/original/img/logo_right.png (as example) it gives me the 404 error as it can't find the file, but the file is actually there. What is potentially wrong? UPDATE: Here's the error log that I was able to pull out: 2011/02/27 05:53:29 [error] 18679#0: *225 open() "/usr/local/nginx/html/phpMyAdmin/js/mooRainbow/mooRainbow.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/js/mooRainbow/mooRainbow.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *226 open() "/usr/local/nginx/html/phpMyAdmin/print.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/print.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *228 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/logo_right.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *223 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/b_help.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/b_help.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *227 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/s_warn.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/s_warn.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *227 open() "/usr/local/nginx/html/phpMyAdmin/favicon.ico" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/favicon.ico HTTP/1.1", host: "50.56.81.42" 2011/02/27 05:54:39 [error] 18679#0: *237 open() "/usr/local/nginx/html/phpMyAdmin/print.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/print.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *235 open() "/usr/local/nginx/html/phpMyAdmin/js/mooRainbow/mooRainbow.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/js/mooRainbow/mooRainbow.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *238 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/logo_right.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *239 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/b_help.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/b_help.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: 
*233 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/s_warn.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/s_warn.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *233 open() "/usr/local/nginx/html/phpMyAdmin/favicon.ico" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/favicon.ico HTTP/1.1", host: "50.56.81.42" Here's my nginx.conf file, in case I am missing something: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ \.(js|css|png|jpg|jpeg|gif|ico|html)$ { expires max; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root /usr/share/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one location ~ /\.ht { deny all; } } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } What does this mean? It can't pull out the .css, etc....

    Read the article

  • Cyrus on CentOS with sasl / pam / ldap

    - by Oscar
    SASL/PAM/LDAP is driving me crazy... that's what I read a lot when googling for problems in this area, and it's what I'm experiencing myself :-S I'm trying to get Cyrus imap working for virtual hosting on CentOS with this authorisation backend, and I really don't know what's happening. In saslauthd I configured the LDAP search filter to use, but it looks like PAM completely ignores it. Here's what I do for testing (I've done more tests, but all with similar results):

      [root@testserv ~]# imtest -u [email protected] -a [email protected]
      WARNING: no hostname supplied, assuming localhost
      S: * OK [CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS] testserv. Cyrus IMAP4 v2.3.7-Invoca-RPM-2.3.7-7.el5_6.4 server ready
      C: C01 CAPABILITY
      S: * CAPABILITY IMAP4 IMAP4rev1 LITERAL+ ID STARTTLS ACL RIGHTS=kxte QUOTA MAILBOX-REFERRALS NAMESPACE UIDPLUS NO_ATOMIC_RENAME UNSELECT CHILDREN MULTIAPPEND BINARY SORT SORT=MODSEQ THREAD=ORDEREDSUBJECT THREAD=REFERENCES ANNOTATEMORE CATENATE CONDSTORE IDLE LISTEXT LIST-SUBSCRIBED X-NETSCAPE URLAUTH
      S: C01 OK Completed
      Please enter your password:
      C: L01 LOGIN [email protected] {6}
      S: + go ahead
      C: <omitted>
      S: L01 NO Login failed: authentication failure
      Authentication failed. generic failure
      Security strength factor: 0
      C: Q01 LOGOUT
      * BYE LOGOUT received
      Q01 OK Completed
      Connection closed.

    The LDAP entry does exist (and so does the mailbox in Cyrus):

      [root@testserv ~]# ldapsearch -WxD cn=Manager,o=mydomain,c=com [email protected]
      Enter LDAP Password:
      # extended LDIF
      #
      # LDAPv3
      # base <> with scope subtree
      # filter: [email protected]
      # requesting: ALL
      #

      # myuser, accounts, testserv.mydomain.com, mydomain, com
      dn: uid=myuser,ou=accounts,dc=testserv.mydomain.com,o=mydomain,c=com
      objectClass: top
      objectClass: person
      objectClass: organizationalPerson
      objectClass: inetOrgPerson
      objectClass: posixAccount
      objectClass: shadowAccount
      uidNumber: 16
      uid: myuser
      gidNumber: 5
      givenName: My
      sn: Name
      mail: [email protected]
      cn: My Name
      userPassword:: dYN5ebB0fXhNRn1pZllhRnJX7Uk=
      shadowLastChange: 15176
      homeDirectory: /dev/null

      # search result
      search: 2
      result: 0 Success

      # numResponses: 2
      # numEntries: 1

    This is what I get in /var/log/messages:

      Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied
      Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error]
      ...

    and in /var/adm/auth.log:

      Aug 2 04:00:11 testserv cyrus/imap[12514]: auxpropfunc error invalid parameter supplied
      Aug 2 04:00:11 testserv cyrus/imap[12514]: _sasl_plugin_load failed on sasl_auxprop_plug_init for plugin: ldapdb
      Aug 2 04:00:19 testserv saslauthd[5926]: DEBUG: auth_pam: pam_authenticate failed: User not known to the underlying authentication module
      Aug 2 04:00:19 testserv saslauthd[5926]: do_auth : auth failure: [[email protected]] [service=imap] [realm=testserv.mydomain.com] [mech=pam] [reason=PAM auth error]

    (AFAIK I can ignore the auxprop msg) ... and /var/log/slapd.log:

      Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 ACCEPT from IP=127.0.0.1:51403 (IP=0.0.0.0:389)
      Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 BIND dn="" method=128
      Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=0 RESULT tag=97 err=0 text=
      Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SRCH base="o=mydomain,c=com" scope=2 deref=0 filter="([email protected])"
      Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=1 SEARCH RESULT tag=101 err=0 nentries=0 text=
      Aug 2 04:00:19 testserv slapd[5968]: conn=61 op=2 UNBIND
      Aug 2 04:00:19 testserv slapd[5968]: conn=61 fd=27 closed

    These are the settings in /etc/imapd.conf:

      sasl_mech_list: PLAIN LOGIN
      sasl_pwcheck_method: saslauthd
      ## sasl_auxprop_plugin: sasldb
      sasl_auto_transition: no

    and my sasl config:

      [root@testserv ~]# cat /etc/sysconfig/saslauthd
      # Directory in which to place saslauthd's listening socket, pid file, and so
      # on. This directory must already exist.
      SOCKETDIR=/var/run/saslauthd

      # Mechanism to use when checking passwords. Run "saslauthd -v" to get a list
      # of which mechanism your installation was compiled with the ablity to use.
      MECH=pam

      # Additional flags to pass to saslauthd on the command line. See saslauthd(8)
      # for the list of accepted flags.
      FLAGS="-c -r -O /etc/saslauthd.conf"

      [root@testserv ~]# cat /etc/saslauthd.conf
      ldap_servers: ldap://127.0.0.1/
      ldap_search_base: dc=%d,o=mydomain,c=com
      ldap_auth_method: bind
      #ldap_filter: (|(uid=%u)((&(mail=%u@%d)(accountStatus=active)))
      ldap_filter: (&(mail=%u@%d)(accountStatus=active))
      ldap_debug: 1
      ldap_version: 3

    The accountStatus=active is not in LDAP yet, but that doesn't make a difference since I don't see it in the filter... that's not the reason for the failure. The weird thing is, I do get an error when I rename or remove /etc/saslauthd.conf, but when the file exists it seems to be happily ignored... The filter in slapd.log seems to be taken from /etc/ldap.conf. Apart from some timers, that only contains:

      host 127.0.0.1
      base o=mydomain,c=com
      pam_login_attribute mail

    Commenting out the pam_login_attribute results in this filter in slapd.log:

      filter="([email protected])"

    Pam-imap looks like this:

      [root@testserv ~]# cat /etc/pam.d/imap
      auth       required   pam_ldap.so debug
      account    required   pam_ldap.so debug
      #auth      sufficient pam_unix.so likeauth nullok
      #auth      sufficient pam_ldap.so use_first_pass
      #auth      required   pam_deny.so
      #account   sufficient pam_unix.so
      #account   sufficient pam_ldap.so

    The commented-out entries are because I don't have the cyrus admin user in LDAP; that's a Linux user. That works fine when uncommented, but I still need to play around with that a little, and first I want to get imap working. Finally, nsswitch:

      [root@testserv ~]# cat /etc/nsswitch.conf
      #
      # /etc/nsswitch.conf
      #
      # An example Name Service Switch config file. This file should be
      # sorted with the most-used services at the beginning.
      #
      # The entry '[NOTFOUND=return]' means that the search for an
      # entry should stop if the search in the previous entry turned
      # up nothing. Note that if the search failed due to some other reason
      # (like no NIS server responding) then the search continues with the
      # next entry.
      #
      # Legal entries are:
      #
      #   nisplus or nis+     Use NIS+ (NIS version 3)
      #   nis or yp           Use NIS (NIS version 2), also called YP
      #   dns                 Use DNS (Domain Name Service)
      #   files               Use the local files
      #   db                  Use the local database (.db) files
      #   compat              Use NIS on compat mode
      #   hesiod              Use Hesiod for user lookups
      #   [NOTFOUND=return]   Stop searching if not found so far
      #
      # To use db, put the "db" in front of "files" for entries you want to be
      # looked up first in the databases
      #
      # Example:
      #passwd:    db files nisplus nis
      #shadow:    db files nisplus nis
      #group:     db files nisplus nis

      passwd:     compat ldap
      group:      compat ldap
      shadow:     compat ldap

      hosts:      files dns

      bootparams: nisplus [NOTFOUND=return] files

      ethers:     files
      netmasks:   files
      networks:   files
      protocols:  files
      rpc:        files
      services:   files

      netgroup:   nisplus

      publickey:  nisplus

      automount:  files nisplus
      aliases:    files nisplus

    Any pointers on where to start looking will be greatly appreciated! Thanks in advance.
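    One theory I keep coming back to, though I haven't verified it: with MECH=pam, saslauthd hands the whole check over to PAM/pam_ldap, so the LDAP settings in /etc/saslauthd.conf would never be consulted at all, which would explain why the filter in slapd.log looks like it comes from /etc/ldap.conf (pam_login_attribute mail) rather than from my ldap_filter. If that's right, pointing saslauthd directly at LDAP might behave differently. This is what I intend to try next (untested; 'secret' stands in for the real password, and I dropped the -r flag because the filter already builds the address from %u@%d):

      # /etc/sysconfig/saslauthd (sketch, untested)
      MECH=ldap
      FLAGS="-O /etc/saslauthd.conf"

      # restart and test saslauthd directly, bypassing Cyrus:
      service saslauthd restart
      testsaslauthd -u myuser -r testserv.mydomain.com -p 'secret' -s imap

    I realise I'd also have to either add accountStatus=active to the LDAP entry or drop that clause from ldap_filter before the search can match anything.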

    Read the article

  • KVM machine does not start ssh, network is started, used to work

    - by lleto
    I have been searching and pulling my hair out for the last 6 hours. I have a virtual machine that has been running fine for the last six months. I was happily ssh'ing into it and it was running a database and some small apps. Tonight ssh stopped working, so I decided to reboot the machine. I now have the following situation:

    - virsh list --all states the machine as running
    - I can ping the machine and get a reply
    - when I ssh to the machine I see "ssh: connect to host [myserver] port 22: Connection refused"
    - nmap does not show port 22 as open

    I have tried to:

    - reboot the machine once more (no luck)
    - mount the filesystem and check /etc/ssh/sshd.conf (it has not changed since the working situation)
    - install virsh console, however this does not seem to work

    When I mount the fs directly using losetup, the strange thing is that the file dates in /var/log/ seem to be frozen around the time of the crash. If I look in /var/run/ I can see an sshd.pid, but its time is 6 hours ago (despite numerous reboots). My virsh xml looks like this:

      <domain type='kvm' id='21'>
        <name>myserver</name>
        <uuid>09678c8d-a99b-1d18-a7af-88d027cc8f93</uuid>
        <memory>1048576</memory>
        <currentMemory>1048576</currentMemory>
        <vcpu>1</vcpu>
        <os>
          <type arch='x86_64' machine='pc-1.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
        </features>
        <clock offset='utc'/>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>destroy</on_crash>
        <devices>
          <emulator>/usr/bin/kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw'/>
            <source file='/dev/disk01/myserver'/>
            <target dev='hda' bus='ide'/>
            <alias name='ide0-0-0'/>
            <address type='drive' controller='0' bus='0' unit='0'/>
          </disk>
          <controller type='ide' index='0'>
            <alias name='ide0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:e3:13:86'/>
            <source bridge='br0'/>
            <target dev='vnet0'/>
            <model type='virtio'/>
            <alias name='net0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </interface>
          <serial type='pty'>
            <source path='/dev/pts/1'/>
            <target port='0'/>
            <alias name='serial0'/>
          </serial>
          <console type='pty' tty='/dev/pts/1'>
            <source path='/dev/pts/1'/>
            <target type='serial' port='0'/>
            <alias name='serial0'/>
          </console>
          <input type='mouse' bus='ps2'/>
          <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1'>
            <listen type='address' address='127.0.0.1'/>
          </graphics>
          <video>
            <model type='cirrus' vram='9216' heads='1'/>
            <alias name='video0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </video>
          <memballoon model='virtio'>
            <alias name='balloon0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </memballoon>
        </devices>
        <seclabel type='dynamic' model='apparmor' relabel='yes'>
          <label>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</label>
          <imagelabel>libvirt-09678c8d-a99b-1d18-a7af-88d027cc8f93</imagelabel>
        </seclabel>
      </domain>

    I'm sort of lost as to where I can look to get the machine up and running again. On the same instance of KVM I have another server running which is working fine. Both are Ubuntu 12.04. All help is welcome.
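    In case it helps, this is what I'm planning to try next (untested, and the /dev/mapper name below is only my guess for the disk01/myserver volume; I'll adjust it to whatever kpartx actually prints). The frozen timestamps in /var/log make me suspect the guest's root filesystem went read-only or got corrupted, so the idea is to stop the guest, check the filesystem from the host, and then watch the boot on the serial console:

      virsh shutdown myserver               # or 'virsh destroy myserver' if it refuses to stop
      kpartx -av /dev/disk01/myserver       # map the partitions inside the guest disk
      fsck -f /dev/mapper/disk01-myserver1  # guessed partition mapping, adjust to kpartx output
      kpartx -dv /dev/disk01/myserver       # remove the mappings again
      virsh start myserver
      virsh console myserver                # watch the guest boot on the serial console

    As far as I understand, virsh console only shows something if the guest runs a getty on ttyS0 (on Ubuntu 12.04 that would be an /etc/init/ttyS0.conf upstart job), which is probably why it did not seem to work for me.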

    Read the article
