Search Results

Search found 20336 results on 814 pages for 'connection strings'.


  • Parsing concatenated, non-delimited XML messages from TCP-stream using C#

    - by thaller
    I am trying to parse XML messages which are sent to my C# application over TCP. Unfortunately, the protocol cannot be changed, the XML messages are not delimited, and no length prefix is used. Moreover, the character encoding is not fixed, but each message starts with an XML declaration <?xml>. The question is: how can I read one XML message at a time using C#? Up to now, I tried to read the data from the TCP stream into a byte array and use it through a MemoryStream. The problem is, the buffer might contain more than one XML message, or the first message may be incomplete. In these cases, I get an exception when trying to parse it with XmlReader.Read or XmlDocument.Load, but unfortunately the XmlException does not really allow me to distinguish the problem (except by parsing the localized error string). I tried using XmlReader.Read and counting the number of Element and EndElement nodes. That way I know when I am finished reading the first, entire XML message. However, there are several problems. If the buffer does not yet contain the entire message, how can I distinguish the XmlException from an actually invalid, non-well-formed message? In other words, if an exception is thrown before reading the first root EndElement, how can I decide whether to abort the connection with an error, or to collect more bytes from the TCP stream? If no exception occurs, the XmlReader is positioned at the start of the root EndElement. Casting the XmlReader to IXmlLineInfo gives me the current LineNumber and LinePosition; however, it is not straightforward to get the byte position where the EndElement really ends. In order to do that, I would have to convert the byte array into a string (with the encoding specified in the XML declaration), seek to LineNumber,LinePosition and convert that back to the byte offset. I tried to do that with StreamReader.ReadLine, but the stream reader gives no public access to the current byte position. All this seems very inelegant and not robust. I wonder if you have ideas for a better solution. Thank you. EDIT: I looked around and think that the situation is as follows (I might be wrong, corrections are welcome): I found no method by which the XmlReader can continue parsing a second XML message (at least not if the second message has an XmlDeclaration). XmlTextReader.ResetState could do something similar, but for that I would have to assume the same encoding for all messages. Therefore I could not connect the XmlReader directly to the TcpStream. After closing the XmlReader, the buffer is not positioned at the reader's last position, so it is not possible to close the reader and use a new one to continue with the next message. I guess the reason for this is that the reader could not successfully seek on every possible input stream. When XmlReader throws an exception, it cannot be determined whether it happened because of a premature EOF or because of non-well-formed XML; XmlReader.EOF is not set in case of an exception. As a workaround I derived my own MemoryBuffer, which returns the very last byte as a single byte. This way I know that the XmlReader was really interested in the last byte, and the following exception is likely due to a truncated message (this is kinda sloppy, in that it might not detect every non-well-formed message; however, after appending more bytes to the buffer, sooner or later the error will be detected). I could cast my XmlReader to the IXmlLineInfo interface, which gives access to the LineNumber and the LinePosition of the current node. 
So after reading the first message I remember these positions and use it to truncate the buffer. Here comes the really sloppy part, because I have to use the character encoding to get the byte position. I am sure you could find test cases for the code below where it breaks (e.g. internal elements with mixed encoding). But up to now it worked for all my tests. The parser class follows here -- may it be useful (I know, its very far from perfect...) class XmlParser { private byte[] buffer = new byte[0]; public int Length { get { return buffer.Length; } } // Append new binary data to the internal data buffer... public XmlParser Append(byte[] buffer2) { if (buffer2 != null && buffer2.Length > 0) { // I know, its not an efficient way to do this. // The EofMemoryStream should handle a List<byte[]> ... byte[] new_buffer = new byte[buffer.Length + buffer2.Length]; buffer.CopyTo(new_buffer, 0); buffer2.CopyTo(new_buffer, buffer.Length); buffer = new_buffer; } return this; } // MemoryStream which returns the last byte of the buffer individually, // so that we know that the buffering XmlReader really locked at the last // byte of the stream. // Moreover there is an EOF marker. private class EofMemoryStream: Stream { public bool EOF { get; private set; } private MemoryStream mem_; public override bool CanSeek { get { return false; } } public override bool CanWrite { get { return false; } } public override bool CanRead { get { return true; } } public override long Length { get { return mem_.Length; } } public override long Position { get { return mem_.Position; } set { throw new NotSupportedException(); } } public override void Flush() { mem_.Flush(); } public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); } public override void SetLength(long value) { throw new NotSupportedException(); } public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); } public override int Read(byte[] buffer, int offset, int count) { count = Math.Min(count, Math.Max(1, (int)(Length - Position - 1))); int nread = mem_.Read(buffer, offset, count); if (nread == 0) { EOF = true; } return nread; } public EofMemoryStream(byte[] buffer) { mem_ = new MemoryStream(buffer, false); EOF = false; } protected override void Dispose(bool disposing) { mem_.Dispose(); } } // Parses the first xml message from the stream. // If the first message is not yet complete, it returns null. // If the buffer contains non-wellformed xml, it ~should~ throw an exception. // After reading an xml message, it pops the data from the byte array. public Message deserialize() { if (buffer.Length == 0) { return null; } Message message = null; Encoding encoding = Message.default_encoding; //string xml = encoding.GetString(buffer); using (EofMemoryStream sbuffer = new EofMemoryStream (buffer)) { XmlDocument xmlDocument = null; XmlReaderSettings settings = new XmlReaderSettings(); int LineNumber = -1; int LinePosition = -1; bool truncate_buffer = false; using (XmlReader xmlReader = XmlReader.Create(sbuffer, settings)) { try { // Read to the first node (skipping over some element-types. // Don't use MoveToContent here, because it would skip the // XmlDeclaration too... while (xmlReader.Read() && (xmlReader.NodeType==XmlNodeType.Whitespace || xmlReader.NodeType==XmlNodeType.Comment)) { }; // Check for XML declaration. // If the message has an XmlDeclaration, extract the encoding. 
switch (xmlReader.NodeType) { case XmlNodeType.XmlDeclaration: while (xmlReader.MoveToNextAttribute()) { if (xmlReader.Name == "encoding") { encoding = Encoding.GetEncoding(xmlReader.Value); } } xmlReader.MoveToContent(); xmlReader.Read(); break; } // Move to the first element. xmlReader.MoveToContent(); // Read the entire document. xmlDocument = new XmlDocument(); xmlDocument.Load(xmlReader.ReadSubtree()); } catch (XmlException e) { // The parsing of the xml failed. If the XmlReader did // not yet look at the last byte, it is assumed that the // XML is invalid and the exception is re-thrown. if (sbuffer.EOF) { return null; } throw e; } { // Try to serialize an internal data structure using XmlSerializer. Type type = null; try { type = Type.GetType("my.namespace." + xmlDocument.DocumentElement.Name); } catch (Exception e) { // No specialized data container for this class found... } if (type == null) { message = new Message(); } else { // TODO: reuse the serializer... System.Xml.Serialization.XmlSerializer ser = new System.Xml.Serialization.XmlSerializer(type); message = (Message)ser.Deserialize(new XmlNodeReader(xmlDocument)); } message.doc = xmlDocument; } // At this point, the first XML message was sucessfully parsed. // Remember the lineposition of the current end element. IXmlLineInfo xmlLineInfo = xmlReader as IXmlLineInfo; if (xmlLineInfo != null && xmlLineInfo.HasLineInfo()) { LineNumber = xmlLineInfo.LineNumber; LinePosition = xmlLineInfo.LinePosition; } // Try to read the rest of the buffer. // If an exception is thrown, another xml message appears. // This way the xml parser could tell us that the message is finished here. // This would be prefered as truncating the buffer using the line info is sloppy. try { while (xmlReader.Read()) { } } catch { // There comes a second message. Needs workaround for trunkating. truncate_buffer = true; } } if (truncate_buffer) { if (LineNumber < 0) { throw new Exception("LineNumber not given. Cannot truncate xml buffer"); } // Convert the buffer to a string using the encoding found before // (or the default encoding). string s = encoding.GetString(buffer); // Seek to the line. int char_index = 0; while (--LineNumber > 0) { // Recognize \r , \n , \r\n as newlines... char_index = s.IndexOfAny(new char[] {'\r', '\n'}, char_index); // char_index should not be -1 because LineNumber>0, otherwise an RangeException is // thrown, which is appropriate. char_index++; if (s[char_index-1]=='\r' && s.Length>char_index && s[char_index]=='\n') { char_index++; } } char_index += LinePosition - 1; var rgx = new System.Text.RegularExpressions.Regex(xmlDocument.DocumentElement.Name + "[ \r\n\t]*\\>"); System.Text.RegularExpressions.Match match = rgx.Match(s, char_index); if (!match.Success || match.Index != char_index) { throw new Exception("could not find EndElement to truncate the xml buffer."); } char_index += match.Value.Length; // Convert the character offset back to the byte offset (for the given encoding). int line1_boffset = encoding.GetByteCount(s.Substring(0, char_index)); // remove the bytes from the buffer. buffer = buffer.Skip(line1_boffset).ToArray(); } else { buffer = new byte[0]; } } return message; } }
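
    For illustration only, here is a minimal, self-contained sketch of the depth-counting idea described in the question (the class and method names are invented for this example, not taken from the question): it tries to parse one document from the front of a buffer and reports whether a complete root element is present, returning false on any XmlException -- which, as noted above, cannot by itself distinguish a truncated message from an invalid one.

        using System.IO;
        using System.Xml;

        static class XmlFrameProbe
        {
            // Returns true when 'buffer' starts with one complete, well-formed XML document.
            // A parse error simply yields false; as discussed above, XmlException alone
            // cannot tell a truncated message apart from a genuinely malformed one.
            public static bool ContainsCompleteMessage(byte[] buffer)
            {
                using (var stream = new MemoryStream(buffer, false))
                using (var reader = XmlReader.Create(stream))
                {
                    int depth = 0;
                    bool sawRoot = false;
                    try
                    {
                        while (reader.Read())
                        {
                            if (reader.NodeType == XmlNodeType.Element)
                            {
                                sawRoot = true;
                                if (!reader.IsEmptyElement) depth++;
                            }
                            else if (reader.NodeType == XmlNodeType.EndElement)
                            {
                                depth--;
                            }
                            if (sawRoot && depth == 0)
                                return true; // the root element has closed: one full message is buffered
                        }
                    }
                    catch (XmlException)
                    {
                        return false; // truncated or malformed -- the caller decides whether to wait for more bytes
                    }
                    return false;
                }
            }
        }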


  • Records not being saved to core data sqlite file

    - by esd100
    I'm a complete newbie when it comes to iOS programming and much less Core Data. It's rather non-intuitive for me, as I really came into my own with programming with MATLAB, which I guess is more of a 'scripting' language. At any rate, my problem is that I had no idea what I had to do to create a database for my application. So I read a little bit and thought I had to create a SQL database of my stuff and then import it. Long story short, I created a SQLite db and I want to use the work I have already done to import stuff into my CoreData database. I tried exporting to comma-delimited files and xml files and reading those in, but I didn't like it and it seemed like an extra step that I shouldn't need to do. So, I imported the SQLite database into my resources and added the sqlite framework. I have my core data model setup and it is setting up the SQLite database for the model correctly in the background. When I run through my program to add objects to my entities, it seems to work and I can even fetch results afterward. However, when I inspect the Core Data Database SQLite file, no records have been saved. How is it possible for it to fetch results but not save them to the database? - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions{ //load in the path for resources NSString *paths = [[NSBundle mainBundle] resourcePath]; NSString *databaseName = @"histology.sqlite"; NSString *databasePath = [paths stringByAppendingPathComponent:databaseName]; [self createDatabase:databasePath ]; NSError *error; if ([[self managedObjectContext] save:&error]) { NSLog(@"Whoops, couldn't save: %@", [error localizedDescription]); } // Test listing all CELLS from the store NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entityMO = [NSEntityDescription entityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; [fetchRequest setEntity:entityMO]; NSArray *fetchedObjects = [[self managedObjectContext] executeFetchRequest:fetchRequest error:&error]; for (CELL *cellName in fetchedObjects) { //NSLog(@"cellName: %@", cellName); } -(void) createDatabase:databasePath { NSLog(@"The createDatabase function was entered."); NSLog(@"The databasePath is %@ ",[databasePath description]); // Setup the database object sqlite3 *histoDatabase; // Open the database from filessytem if(sqlite3_open([databasePath UTF8String], &histoDatabase) == SQLITE_OK) { NSLog(@"The database was opened"); // Setup the SQL Statement and compile it for faster access const char *sqlStatement = "SELECT * FROM CELL"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) != SQLITE_OK) { NSAssert1(0, @"Error while creating add statement. 
'%s'", sqlite3_errmsg(histoDatabase)); } if(sqlite3_prepare_v2(histoDatabase, sqlStatement, -1, &compiledStatement, NULL) == SQLITE_OK) { // Loop through the results and add them to cell MO array while(sqlite3_step(compiledStatement) == SQLITE_ROW) { CELL *cellMO = [NSEntityDescription insertNewObjectForEntityForName:@"CELL" inManagedObjectContext:[self managedObjectContext]]; if (sqlite3_column_type(compiledStatement, 0) != SQLITE_NULL) { cellMO.cellName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 0)]; } else { cellMO.cellName = @"undefined"; } if (sqlite3_column_type(compiledStatement, 1) != SQLITE_NULL) { cellMO.cellDescription = [NSString stringWithUTF8String:(char *)sqlite3_column_text(compiledStatement, 1)]; } else { cellMO.cellDescription = @"undefined"; } NSLog(@"The contents of NSString *cellName = %@",[cellMO.cellName description]); } } // Release the compiled statement from memory sqlite3_finalize(compiledStatement); } sqlite3_close(histoDatabase); } I have a feeling that it has something to do with the timing of opening/closing both of the databases? Attached I have some SQL debugging output to the terminal 2012-05-28 16:03:39.556 MedPix[34751:fb03] The createDatabase function was entered. 2012-05-28 16:03:39.557 MedPix[34751:fb03] The databasePath is /Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/MedPix.app/histology.sqlite 2012-05-28 16:03:39.559 MedPix[34751:fb03] The database was opened 2012-05-28 16:03:39.560 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.575 MedPix[34751:fb03] CoreData: annotation: Connecting to sqlite database file at "/Users/jack/Library/Application Support/iPhone Simulator/5.1/Applications/A6B2A79D-BA93-4E24-9291-5B7948A3CDF4/Documents/MedPix.sqlite" 2012-05-28 16:03:39.576 MedPix[34751:fb03] CoreData: annotation: creating schema. 2012-05-28 16:03:39.577 MedPix[34751:fb03] CoreData: sql: pragma page_size=4096 2012-05-28 16:03:39.578 MedPix[34751:fb03] CoreData: sql: pragma auto_vacuum=2 2012-05-28 16:03:39.630 MedPix[34751:fb03] CoreData: sql: BEGIN EXCLUSIVE 2012-05-28 16:03:39.631 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.632 MedPix[34751:fb03] CoreData: sql: CREATE TABLE ZCELL ( Z_PK INTEGER PRIMARY KEY, Z_ENT INTEGER, Z_OPT INTEGER, ZCELLDESCRIPTION VARCHAR, ZCELLNAME VARCHAR ) ... 2012-05-28 16:03:39.669 MedPix[34751:fb03] CoreData: annotation: Creating primary key table. 2012-05-28 16:03:39.671 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_PRIMARYKEY (Z_ENT INTEGER PRIMARY KEY, Z_NAME VARCHAR, Z_SUPER INTEGER, Z_MAX INTEGER) 2012-05-28 16:03:39.672 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_PRIMARYKEY(Z_ENT, Z_NAME, Z_SUPER, Z_MAX) VALUES(1, 'CELL', 0, 0) ... 2012-05-28 16:03:39.701 MedPix[34751:fb03] CoreData: sql: CREATE TABLE Z_METADATA (Z_VERSION INTEGER PRIMARY KEY, Z_UUID VARCHAR(255), Z_PLIST BLOB) 2012-05-28 16:03:39.702 MedPix[34751:fb03] CoreData: sql: SELECT TBL_NAME FROM SQLITE_MASTER WHERE TBL_NAME = 'Z_METADATA' 2012-05-28 16:03:39.703 MedPix[34751:fb03] CoreData: sql: DELETE FROM Z_METADATA WHERE Z_VERSION = ? 2012-05-28 16:03:39.704 MedPix[34751:fb03] CoreData: sql: INSERT INTO Z_METADATA (Z_VERSION, Z_UUID, Z_PLIST) VALUES (?, ?, ?) 
2012-05-28 16:03:39.705 MedPix[34751:fb03] CoreData: sql: COMMIT 2012-05-28 16:03:39.710 MedPix[34751:fb03] CoreData: sql: pragma cache_size=200 2012-05-28 16:03:39.711 MedPix[34751:fb03] CoreData: sql: SELECT Z_VERSION, Z_UUID, Z_PLIST FROM Z_METADATA 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Beta Cell 2012-05-28 16:03:39.712 MedPix[34751:fb03] The contents of NSString *cellName = Gastric Chief Cell ... 2012-05-28 16:03:39.714 MedPix[34751:fb03] The database was prepared 2012-05-28 16:03:39.764 MedPix[34751:fb03] The createDatabase function has finished. Now fetching. 2012-05-28 16:03:39.765 MedPix[34751:fb03] CoreData: sql: SELECT 0, t0.Z_PK, t0.Z_OPT, t0.ZCELLDESCRIPTION, t0.ZCELLNAME FROM ZCELL t0 2012-05-28 16:03:39.766 MedPix[34751:fb03] CoreData: annotation: sql connection fetch time: 0.0008s 2012-05-28 16:03:39.767 MedPix[34751:fb03] CoreData: annotation: total fetch execution time: 0.0016s for 0 rows. 2012-05-28 16:03:39.768 MedPix[34751:fb03] cellName: <CELL: 0x6bbc120> (entity: CELL; id: 0x6bbc160 <x-coredata:///CELL/t57D10DDD-74E2-474F-97EE-E3BD0FF684DA34> ; data: { cellDescription = "S cells are cells which release secretin, found in the jejunum and duodenum. They are stimulated by a drop in pH to 4 or below in the small intestine's lumen. The released secretin will increase the s"; cellName = "S Cell"; organs = ( ); specimens = ( ); systems = ( ); tissues = ( ); }) ... Sections were cut short to abbreviate. But note that the fetch results contain information, but it says that total fetch execution was for "0" rows? How can that be? Any help will be greatly appreciated, especially detailed explanations. :) Thanks.
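
    As a point of reference only (not the asker's code), a minimal sketch of the usual Core Data pattern: -save: is called on the context after the objects have been inserted, and a NO return value is treated as the error case.

        // Sketch only; everything except the Core Data API itself is assumed from the question.
        [self createDatabase:databasePath];               // performs the insertNewObjectForEntityForName: calls
        NSError *error = nil;
        if (![[self managedObjectContext] save:&error]) { // -save: returns YES on success, NO on failure
            NSLog(@"Couldn't save: %@", [error localizedDescription]);
        }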


  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The Question What hosted mercurial repository/bug tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or the bug tracking features, that would make it difficult to recommend it? Do you have any other experiences with it or opinions of it that you would like to share? If you have used other non-mercurial hosted repository/bug tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several.) Background I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve mercurial repositories over ssl, when I am at a customer site I have to connect my laptop via VPN to my work network and access the mercurial repositories over a samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a TRAC or Redmine server here (thanks turnkey), I'm not sure it would be much quicker as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, servicing engineers to be able to pull from a remote repository, and customers (both internal and external) to be able to submit bug/issue reports. Initial options The two options I found were Assembla and Jira. Looking at Assembla I thought the 'group' price looked reasonable, but after enquiring, I found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately, client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me. More options Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their websites today (29th March 2010). Corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites: Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from allowing only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. 
SSL based push/pull? Website https login. BitBucket, http://bitbucket.org/plans/, is primarily a mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has it’s own issues tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL based push/pull. No https on website login, but supports OpenID, so you can chose an OpenID provider with https login. Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL based push/pull? Website https login? Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can only have one repository per project or not. Cost: $9, $19, & £39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL based push/pull? Website https login. Jira, http://www.atlassian.com/software/jira/, isn’t limited by the number of repositories you can have, but by ‘user’. It could work out quite expensive if we want client users to be able to track their issues, since they would need a full user account to be created for them. Also, while there is a Mercurial extension to support jira, there is no ‘Advanced integration’ for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL based push/pull? Website https login. Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kilns mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the Fogbugz integration is supposedly excellent. *8’) Cost: £30/developer/month ($5/d/m more than either on their own). SSL based push/pull? SourceRepo, http://sourcerepo.com/, also supports HG and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/trac/redmine instances and 500MB, 1GB & 3GB storage. SSL based push/pull. Website https login. Edit: 29th March 2010 & Bounty I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than find the absolute best one, 'bad reviews' could be considered more important than good ones.


  • How do I send automated e-mails from Drupal using Messaging and Notifications?

    - by Adrian
    I am working on a Notifications plugin, and after starting to write my notes down about how to do this, decided to just post them here. Please feel free to come make modifications and changes. Eventually I hope to post this on the Drupal handbook as well. Thanks. --Adrian Sending automated e-mails from Drupal using Messaging and Notifications To implement a notifications plugin, you must implement the following functions: Use hook_messaging, hook_token_list and hook_token_values to create the messages that will be sent. Use hook_notifications to create the subscription types Add code to fire events (eg in hook_nodeapi) Add all UI elements to allow users to subscribe/unsubscribe Understanding Messaging The Messaging module is used to compose messages that can be delivered using various formats, such as simple mail, HTML mail, Twitter updates, etc. These formats are called "send methods." The backend details do not concern us here; what is important are the following concepts: TOKENS: tokens are provided by the "tokens" module. They allow you to write keywords in square brackets, [like-this], that can be replaced by any arbitrary value. Note: the token groups you create must match the keys you add to the $events-objects[$key] array. MESSAGE KEYS: A key is a part of a message, such as the greetings line. Keys can be different for each send method. For example, a plaintext mail's greeting might be "Hi, [user]," while an HTML greeing might be "Hi, [user]," and Twitter's might just be "[user-firstname]: ". Keys can have any arbitrary name. Keys are very simple and only have a machine-readable name and a user-readable description, the latter of which is only seen by admins. MESSAGE GROUPS: A group is a bunch of keys that often, but not always, might be used together to make up a complete message. For example, a generic group might include keys for a greeting, body, closing and footer. Groups can also be "subclassed" by selecting a "fallback" group that will supply any keys that are missing. Groups are also associated with modules; I'm not sure what these are used for. Understanding Notifications The Notifications module revolves around the following concepts: SUBSCRIPTIONS: Notifications plugins may define one or more types of subscriptions. For example, notifications_content defines subscriptions for: Threads (users are notified whenever a node or its comments change) Content types (users are notified whenever a node of a certain type is created or is changed) Users (users are notified whenever another user is changed) Subscriptions refer to both the user who's subscribed, how often they wish to be notified, the send method (for Messaging) and what's being subscribed to. This last part is defined in two steps. Firstly, a plugin defines several "subscription fields" (through a hook_notifications op of the same name), and secondly, "subscription types" (also an op) defines which fields apply to each type of subscription. For example, notifications_content defines the fields "nid," "author" and "type," and the subscriptions "thread" (nid), "nodetype" (type), "author" (author) and "typeauthor" (type and author), the latter referring to something like "any STORY by JOE." Fields are used to link events to subscriptions; an event must match all fields of a subscription (for all normal subscriptions) to be delivered to the recipient. The $subscriptions object is defined in subsequent sections. 
Notifications prefers that you don't create these objects yourself, preferring you to call the notifications_get_link() function to create a link that users may click on, but you can also use notifications_save_subscription and notifications_delete_subscription to do it yourself. EVENTS: An event is something that users may be notified about. Plugins create the $event object then call notifications_event($event). This either sends out notifications immediately, queues them to send out later, or both. Events include the type of thing that's changed (eg 'node', 'user'), the ID of the thing that's changed (eg $node-nid, $user-uid) and what's happened to it (eg 'create'). These are, respectively, $event-type, $event-oid (object ID) and $event-action. Warning: notifications_content_nodeapi also adds a $event-node field, referring to the node itself and not just $event-oid = $node-nid. This is not used anywhere in the core notifications module; however, when the $event is passed back to the 'query' op (see below), we assume the node is still present. Events do not refer to the user they will be referred to; instead, Notifications makes the connection between subscriptions and events, using the subscriptions' fields. MATCHING EVENTS TO SUBSCRIPTIONS: An event matches a subscription if it has the same type as the event (eg "node") and if the event matches all the correct fields. This second step is determined by the "query" hook op, which is called with the $event object as a parameter. The query op is responsible for giving Notifications a value for all the fields defined by the plugin. For example, notifications_content defines the 'nid', 'type' and 'author' fields, so its query op looks like this (ignore the case where $event_or_user = 'user' for now): $event_or_user = $arg0; $event_type = $arg1; $event_or_object = $arg2; if ($event_or_user == 'event' && $event_type == 'node' && ($node = $event_or_object->node) || $event_or_user == 'user' && $event_type == 'node' && ($node = $event_or_object)) { $query[]['fields'] = array( 'nid' => $node->nid, 'type' => $node->type, 'author' => $node->uid, ); return $query; After extracting the $node from the $event, we set $query[]['fields'] to a dictionary defining, for this event, all the fields defined by the module. As you can tell from the presence of the $query object, there's way more you can do with this op, but they are not covered here. DIGESTING AND DEDUPING: Understanding the relationship between Messaging and Notifications Usually, the name of a message group doesn't matter, but when being used with Notifications, the names must follow very strict patterns. Firstly, they must start with the name "notifications," and then are followed by either "event" or "digest," depending on whether the message group is being used to represent either a single event or a group of events. For 'events,' the third part of the name is the "type," which we get from Notification's $event-type (eg: notifications_content uses 'node'). The last part of the name is the operation being performed, which comes from Notification's $event-action. For example: notifications-event-node-comment might refer to the message group used when someone comments on a node notifications-event-user-update to a user who's updated their profile Hyphens cannot appear anywhere other than to separate the parts of these words. 
For 'digest' messages, the third and fourth part of the name come from hook_notification's "event types" callback, specifically this line: $types[] = array( 'type' => 'node', 'action' => 'insert', ... 'digest' => array('node', 'type'), ); $types[] = array( 'type' => 'node', 'action' => 'update', ... 'digest' => array('node', 'nid'), ); In this case, the first event type (node insertion) will be digested with the notifications-digest-node-type message template providing the header and footer, likely saying something like "the following [type] was created." The second event type (node update) will be digested with the notifications-digest-node-nid message template. Data Structure and Callback Reference $event The $event object has the following members: $event-type: The type of event. Must match the type in hook_notification::"event types". {notifications_event} $event-action: The action the event describes. Most events are sorted by [$event-type][$event-action]. {notifications_event}. $event-object[$object_type]: All objects relevant to the event. For example, $event-object['node'] might be the node that the event describes. $object_type can come from the 'event types' hook (see below). The main purpose appears to be to be passed to token_replace_multiple as the second parameter. $event-object[$event-type] is assumed to exist in the short digest processing functions, but this doesn't appear to be used anywhere. Not saved in the database; loaded by hook_notifications::"event load" $event-oid: apparently unused. The id of the primary object relevant to this event (eg the node's nid). $event-module: apparently unused $event-params[$key]: Mainly a place for plugins to save random data. The main module will serialize the contents of this array but does not use it in any way. However, notifications_ui appears to do something weird with it, possibly by using subscriptions' fields as keys into this array. I'm not sure why though. hook_notifications op 'subscription types': returns an array of subscription types provided by the plugin, in the form $key = array(...) with the following members: event_type: this subscription can only match events whose $event-type has this value. Stored in the database as notifications.event_type for every individual subscription. Apparently, this can be overiden in code but I wouldn't try it (see notifications_save_subscription). fields: an unkeyed array of fields that must be matched by an event (in addition to the event_type) for it to match this subscription. Each element of this array must be a key of the array returned by op 'subscription fields' which in turn must be used by op 'query' to actually perform the matching. title: user-readable title for their subscriptions page (eg the 'type' column in user/%uid/notifications/subscriptions) description: a user-readable description. page callback: used to add a supplementary page at user/%uid/notifications/blah. This and the following are used by notifications_ui as a part of hook_menu_alter. Appears to be partially deprecated. user page: user/%uid/notifications/blah. op 'event types': returns an array of event types, with each event type being an array with the following members: type: this will match $event-type action: this will match $event-action digest: an array with two ordered (non-keyed) elements, "type" and "field." 'type' is used as an index into $event-objects. 'field' is also used to group events like so: $event-objects[$type]-$field. 
For example, 'field' might be 'nid' - if the object is a node, the digest lines will be grouped by node ID. Finally, both are used to find the correct Messaging template; see discussion above. description: used on the admin "Notifications-Events" page name: unused, use Messaging instead line: deprecated, use Messaging instead Other Stuff This is an example of the main query that inserts an event into the queue: INSERT INTO {notifications_queue} (uid, destination, sid, module, eid, send_interval, send_method, cron, created, conditions) SELECT DISTINCT s.uid, s.destination, s.sid, s.module, %d, // event ID s.send_interval, s.send_method, s.cron, %d, // time of the event s.conditions FROM {notifications} s INNER JOIN {notifications_fields} f ON s.sid = f.sid WHERE (s.status = 1) AND (s.event_type = '%s') // subscription type AND (s.send_interval >= 0) AND (s.uid <> %d) AND ( (f.field = '%s' AND f.intval IN (%d)) // everything from 'query' op OR (f.field = '%s' AND f.intval = %d) OR (f.field = '%s' AND f.value = '%s') OR (f.field = '%s' AND f.intval = %d)) GROUP BY s.uid, s.destination, s.sid, s.module, s.send_interval, s.send_method, s.cron, s.conditions HAVING s.conditions = count(f.sid)
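
    To make the "fire events" step above a little more concrete, here is a minimal, hedged sketch of what such a call might look like from a Drupal 6 hook_nodeapi() implementation. The module name and exact property layout are illustrative assumptions, not a verbatim copy of any existing module; the $event members follow the description given above.

        <?php
        /**
         * Implementation of hook_nodeapi(): fire a notifications event when a
         * node is updated. Sketch only -- property names are assumed from the
         * $event description above.
         */
        function mymodule_nodeapi(&$node, $op) {
          if ($op == 'update') {
            $event = new stdClass();
            $event->module  = 'mymodule';
            $event->type    = 'node';                  // must match a 'type' from the 'event types' op
            $event->action  = 'update';                // must match that entry's 'action'
            $event->oid     = $node->nid;              // id of the primary object
            $event->objects = array('node' => $node);  // objects made available for token replacement
            notifications_event($event);               // queue or send, per subscribers' send intervals
          }
        }
        ?>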


  • ubuntu: sem_timedwait not waking (C)

    - by gillez
    I have 3 processes which need to be synchronized. Process one does something then wakes process two and sleeps, which does something then wakes process three and sleeps, which does something and wakes process one and sleeps. The whole loop is timed to run around 25hz (caused by an external sync into process one before it triggers process two in my "real" application). I use sem_post to trigger (wake) each process, and sem_timedwait() to wait for the trigger. This all works successfully for several hours. However at some random time (usually after somewhere between two and four hours), one of the processes starts timing out in sem_timedwait(), even though I am sure the semaphore is being triggered with sem_post(). To prove this I even use sem_getvalue() immediately after the timeout, and the value is 1, so the timedwait should have been triggered. Please see following code: #include <stdio.h> #include <time.h> #include <string.h> #include <errno.h> #include <semaphore.h> sem_t trigger_sem1, trigger_sem2, trigger_sem3; // The main thread process. Called three times with a different num arg - 1, 2 or 3. void *thread(void *arg) { int num = (int) arg; sem_t *wait, *trigger; int val, retval; struct timespec ts; struct timeval tv; switch (num) { case 1: wait = &trigger_sem1; trigger = &trigger_sem2; break; case 2: wait = &trigger_sem2; trigger = &trigger_sem3; break; case 3: wait = &trigger_sem3; trigger = &trigger_sem1; break; } while (1) { // The first thread delays by 40ms to time the whole loop. // This is an external sync in the real app. if (num == 1) usleep(40000); // print sem value before we wait. If this is 1, sem_timedwait() will // return immediately, otherwise it will block until sem_post() is called on this sem. sem_getvalue(wait, &val); printf("sem%d wait sync sem%d. val before %d\n", num, num, val); // get current time and add half a second for timeout. gettimeofday(&tv, NULL); ts.tv_sec = tv.tv_sec; ts.tv_nsec = (tv.tv_usec + 500000); // add half a second if (ts.tv_nsec > 1000000) { ts.tv_sec++; ts.tv_nsec -= 1000000; } ts.tv_nsec *= 1000; /* convert to nanosecs */ retval = sem_timedwait(wait, &ts); if (retval == -1) { // timed out. Print value of sem now. This should be 0, otherwise sem_timedwait // would have woken before timeout (unless the sem_post happened between the // timeout and this call to sem_getvalue). sem_getvalue(wait, &val); printf("!!!!!! sem%d sem_timedwait failed: %s, val now %d\n", num, strerror(errno), val); } else printf("sem%d wakeup.\n", num); // get value of semaphore to trigger. If it's 1, don't post as it has already been // triggered and sem_timedwait on this sem *should* not block. sem_getvalue(trigger, &val); if (val <= 0) { printf("sem%d send sync sem%d. val before %d\n", num, (num == 3 ? 1 : num+1), val); sem_post(trigger); } else printf("!! sem%d not sending sync, val %d\n", num, val); } } int main(int argc, char *argv[]) { pthread_t t1, t2, t3; // create semaphores. val of sem1 is 1 to trigger straight away and start the whole ball rolling. 
if (sem_init(&trigger_sem1, 0, 1) == -1) perror("Error creating trigger_listman semaphore"); if (sem_init(&trigger_sem2, 0, 0) == -1) perror("Error creating trigger_comms semaphore"); if (sem_init(&trigger_sem3, 0, 0) == -1) perror("Error creating trigger_vws semaphore"); pthread_create(&t1, NULL, thread, (void *) 1); pthread_create(&t2, NULL, thread, (void *) 2); pthread_create(&t3, NULL, thread, (void *) 3); pthread_join(t1, NULL); pthread_join(t2, NULL); pthread_join(t3, NULL); } The following output is printed when the program is running correctly (at the start and for a random but long time after). The value of sem1 is always 1 before thread1 waits as it sleeps for 40ms, by which time sem3 has triggered it, so it wakes straight away. The other two threads wait until the semaphore is received from the previous thread. [...] sem1 wait sync sem1. val before 1 sem1 wakeup. sem1 send sync sem2. val before 0 sem2 wakeup. sem2 send sync sem3. val before 0 sem2 wait sync sem2. val before 0 sem3 wakeup. sem3 send sync sem1. val before 0 sem3 wait sync sem3. val before 0 sem1 wait sync sem1. val before 1 sem1 wakeup. sem1 send sync sem2. val before 0 [...] However, after a few hours, one of the threads begins to timeout. I can see from the output that the semaphore is being triggered, and when I print the value after the timeout is is 1. So sem_timedwait should have woken up well before the timeout. I would never expect the value of the semaphore to be 1 after the timeout, save for the very rare occasion (almost certainly never but it's possible) when the trigger happens after the timeout but before I call sem_getvalue. Also, once it begins to fail, every sem_timedwait() on that semaphore also fails in the same way. See the following output, which I've line-numbered: 01 sem3 wait sync sem3. val before 0 02 sem1 wakeup. 03 sem1 send sync sem2. val before 0 04 sem2 wakeup. 05 sem2 send sync sem3. val before 0 06 sem2 wait sync sem2. val before 0 07 sem1 wait sync sem1. val before 0 08 !!!!!! sem3 sem_timedwait failed: Connection timed out, val now 1 09 sem3 send sync sem1. val before 0 10 sem3 wait sync sem3. val before 1 11 sem3 wakeup. 12 !! sem3 not sending sync, val 1 13 sem3 wait sync sem3. val before 0 14 sem1 wakeup. [...] On line 1, thread 3 (which I have confusingly called sem1 in the printf) waits for sem3 to be triggered. On line 5, sem2 calls sem_post for sem3. However, line 8 shows sem3 timing out, but the value of the semaphore is 1. thread3 then triggers sem1 and waits again (10). However, because the value is already 1, it wakes straight away. It doesn't send sem1 again as this has all happened before control is given to thread1, however it then waits again (val is now 0) and sem1 wakes up. This now repeats for ever, sem3 always timing out and showing that the value is 1. So, my question is why does sem3 timeout, even though the semaphore has been triggered and the value is clearly 1? I would never expect to see line 08 in the output. If it times out (because, say thread 2 has crashed or is taking too long), the value should be 0. And why does it work fine for 3 or 4 hours first before getting into this state? This is using Ubuntu 9.4 with kernel 2.6.28. The same procedure has been working properly on Redhat and Fedora. But I'm now trying to port to ubuntu! Thanks for any advice, Giles
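
    For comparison, a small self-contained helper (the function name is made up for this sketch, not taken from the question) that builds the absolute deadline sem_timedwait() expects: POSIX measures the timeout against CLOCK_REALTIME, tv_nsec must stay below one second, and EINTR wake-ups are usually retried.

        #include <errno.h>
        #include <semaphore.h>
        #include <time.h>

        /* Hypothetical helper: wait on 'sem' for at most 'delay_ms' milliseconds. */
        static int sem_wait_ms(sem_t *sem, long delay_ms)
        {
            struct timespec ts;

            if (clock_gettime(CLOCK_REALTIME, &ts) == -1)
                return -1;                          /* sem_timedwait() uses CLOCK_REALTIME */

            ts.tv_sec  += delay_ms / 1000;
            ts.tv_nsec += (delay_ms % 1000) * 1000000L;
            if (ts.tv_nsec >= 1000000000L) {        /* normalize: tv_nsec must stay < 1e9 */
                ts.tv_sec++;
                ts.tv_nsec -= 1000000000L;
            }

            while (sem_timedwait(sem, &ts) == -1) { /* retry if interrupted by a signal */
                if (errno != EINTR)
                    return -1;                      /* ETIMEDOUT or a real error */
            }
            return 0;
        }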


  • MySQL is running VERY slow

    - by user1032531
    I have two servers: a VPS and a laptop. I recently re-built both of them, and MySQL is running about 20 times slower on the laptop. Both servers used to run CentOS 5.8 and I think MySQL 5.1, and the laptop used to do great so I do not think it is the hardware. For the VPS, my provider installed CentOS 6.4, and then I installed MySQL 5.1.69 using yum with the CentOS repo. For the laptop, I installed CentOS 6.4 basic server and then installed MySQL 5.1.69 using yum with the CentOS repo. my.cnf for both servers are identical, and I have shown below. For both servers, I've also included below the output from SHOW VARIABLES; as well as output from sysbench, file system information, and cpu information. I have tried adding skip-name-resolve, but it didn't help. The matrix below shows the SHOW VARIABLES output from both servers which is different. Again, MySQL was installed the same way, so I do not know why it is different, but it is and I think this might be why the laptop is executing MySQL so slowly. Why is the laptop running MySQL slowly, and how do I fix it? Differences between SHOW VARIABLES on both servers +---------------------------+-----------------------+-------------------------+ | Variable | Value-VPS | Value-Laptop | +---------------------------+-----------------------+-------------------------+ | hostname | vps.site1.com | laptop.site2.com | | max_binlog_cache_size | 4294963200 | 18446744073709500000 | | max_seeks_for_key | 4294967295 | 18446744073709500000 | | max_write_lock_count | 4294967295 | 18446744073709500000 | | myisam_max_sort_file_size | 2146435072 | 9223372036853720000 | | myisam_mmap_size | 4294967295 | 18446744073709500000 | | plugin_dir | /usr/lib/mysql/plugin | /usr/lib64/mysql/plugin | | pseudo_thread_id | 7568 | 2 | | system_time_zone | EST | PDT | | thread_stack | 196608 | 262144 | | timestamp | 1372252112 | 1372252046 | | version_compile_machine | i386 | x86_64 | +---------------------------+-----------------------+-------------------------+ my.cnf for both servers [root@server1 ~]# cat /etc/my.cnf [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid innodb_strict_mode=on sql_mode=TRADITIONAL # sql_mode=STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE character-set-server=utf8 collation-server=utf8_general_ci log=/var/log/mysqld_all.log [root@server1 ~]# VPS SHOW VARIABLES Info Same as Laptop shown below but changes per above matrix (removed to allow me to be under the 30000 characters as required by ServerFault) Laptop SHOW VARIABLES Info auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 50 basedir /usr/ big_tables OFF binlog_cache_size 32768 binlog_direct_non_transactional_updates OFF binlog_format STATEMENT bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database latin1 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ collation_connection utf8_general_ci collation_database latin1_swedish_ci collation_server latin1_swedish_ci completion_type 0 concurrent_insert 1 connect_timeout 10 datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300 
delayed_queue_size 1000 div_precision_increment 4 engine_condition_pushdown ON error_count 0 event_scheduler OFF expire_logs_days 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file /var/run/mysqld/mysqld.log group_concat_max_len 1024 have_community_features YES have_compress YES have_crypt YES have_csv YES have_dynamic_loading YES have_geometry YES have_innodb YES have_ndbcluster NO have_openssl DISABLED have_partitioning YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink DISABLED hostname server1.site2.com identity 0 ignore_builtin_innodb OFF init_connect init_file init_slave innodb_adaptive_hash_index ON innodb_additional_mem_pool_size 1048576 innodb_autoextend_increment 8 innodb_autoinc_lock_mode 1 innodb_buffer_pool_size 8388608 innodb_checksums ON innodb_commit_concurrency 0 innodb_concurrency_tickets 500 innodb_data_file_path ibdata1:10M:autoextend innodb_data_home_dir innodb_doublewrite ON innodb_fast_shutdown 1 innodb_file_io_threads 4 innodb_file_per_table OFF innodb_flush_log_at_trx_commit 1 innodb_flush_method innodb_force_recovery 0 innodb_lock_wait_timeout 50 innodb_locks_unsafe_for_binlog OFF innodb_log_buffer_size 1048576 innodb_log_file_size 5242880 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_max_dirty_pages_pct 90 innodb_max_purge_lag 0 innodb_mirrored_log_groups 1 innodb_open_files 300 innodb_rollback_on_timeout OFF innodb_stats_method nulls_equal innodb_stats_on_metadata ON innodb_support_xa ON innodb_sync_spin_loops 20 innodb_table_locks ON innodb_thread_concurrency 8 innodb_thread_sleep_delay 10000 innodb_use_legacy_cardinality_algorithm ON insert_id 0 interactive_timeout 28800 join_buffer_size 131072 keep_files_on_create OFF key_buffer_size 8384512 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 language /usr/share/mysql/english/ large_files_support ON large_page_size 0 large_pages OFF last_insert_id 0 lc_time_names en_US license GPL local_infile ON locked_in_memory OFF log OFF log_bin OFF log_bin_trust_function_creators OFF log_bin_trust_routine_creators OFF log_error /var/log/mysqld.log log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_queries OFF log_warnings 1 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 max_allowed_packet 1048576 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_connect_errors 10 max_connections 151 max_delayed_threads 20 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_long_data_size 1048576 max_prepared_stmt_count 16382 max_relay_log_size 0 max_seeks_for_key 18446744073709551615 max_sort_length 1024 max_sp_recursion_depth 0 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 18446744073709551615 min_examined_row_limit 0 multi_range_count 256 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options OFF myisam_repair_threads 1 myisam_sort_buffer_size 8388608 myisam_stats_method nulls_unequal myisam_use_mmap OFF net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 new OFF old OFF old_alter_table OFF old_passwords OFF open_files_limit 1024 optimizer_prune_level 1 optimizer_search_depth 62 
optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on pid_file /var/run/mysqld/mysqld.pid plugin_dir /usr/lib64/mysql/plugin port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 protocol_version 10 pseudo_thread_id 3 query_alloc_block_size 8192 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 0 query_cache_type ON query_cache_wlock_invalidate OFF query_prealloc_size 8192 rand_seed1 rand_seed2 range_alloc_block_size 4096 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_space_limit 0 report_host report_password report_port 3306 report_user rpl_recovery_rank 0 secure_auth OFF secure_file_priv server_id 0 skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 3600 slave_skip_errors OFF slave_transaction_retries 10 slow_launch_time 2 slow_query_log OFF slow_query_log_file /var/run/mysqld/mysqld-slow.log socket /var/lib/mysql/mysql.sock sort_buffer_size 2097144 sql_auto_is_null ON sql_big_selects ON sql_big_tables OFF sql_buffer_result OFF sql_log_bin ON sql_log_off OFF sql_log_update ON sql_low_priority_updates OFF sql_max_join_size 18446744073709551615 sql_mode sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_key storage_engine MyISAM sync_binlog 0 sync_frm ON system_time_zone PDT table_definition_cache 256 table_lock_wait_timeout 50 table_open_cache 64 table_type MyISAM thread_cache_size 0 thread_handling one-thread-per-connection thread_stack 262144 time_format %H:%i:%s time_zone SYSTEM timed_mutexes OFF timestamp 1372254399 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ unique_checks ON updatable_views_with_limit YES version 5.1.69 version_comment Source distribution version_compile_machine x86_64 version_compile_os redhat-linux-gnu wait_timeout 28800 warning_count 0 VPS Sysbench Info [root@vps ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 1449966 write: 0 other: 207138 total: 1657104 transactions: 103569 (1726.01 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 1449966 (24164.08 per sec.) other operations: 207138 (3452.01 per sec.) Test execution summary: total time: 60.0050s total number of events: 103569 total time taken by event execution: 479.1544 per-request statistics: min: 1.98ms avg: 4.63ms max: 330.73ms approx. 95 percentile: 8.26ms Threads fairness: events (avg/stddev): 12946.1250/381.09 execution time (avg/stddev): 59.8943/0.00 [root@vps ~]# Laptop Sysbench Info [root@server1 ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. 
Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 634718 write: 0 other: 90674 total: 725392 transactions: 45337 (755.56 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 634718 (10577.78 per sec.) other operations: 90674 (1511.11 per sec.) Test execution summary: total time: 60.0048s total number of events: 45337 total time taken by event execution: 479.4912 per-request statistics: min: 2.04ms avg: 10.58ms max: 85.56ms approx. 95 percentile: 19.70ms Threads fairness: events (avg/stddev): 5667.1250/42.18 execution time (avg/stddev): 59.9364/0.00 [root@server1 ~]# VPS File Info [root@vps ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/simfs simfs 20971520 16187440 4784080 78% / none tmpfs 6224432 4 6224428 1% /dev none tmpfs 6224432 0 6224432 0% /dev/shm [root@vps ~]# Laptop File Info [root@server1 ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_server1-lv_root ext4 72383800 4243964 64462860 7% / tmpfs tmpfs 956352 0 956352 0% /dev/shm /dev/sdb1 ext4 495844 60948 409296 13% /boot [root@server1 ~]# VPS CPU Info Removed to stay under the 30000 character limit required by ServerFault Laptop CPU Info [root@server1 ~]# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: [root@server1 ~]# EDIT New Info requested by shakalandy [root@localhost ~]# cat /proc/meminfo MemTotal: 2044804 kB MemFree: 761464 kB Buffers: 68868 kB Cached: 369708 kB SwapCached: 0 kB Active: 881080 kB Inactive: 246016 kB Active(anon): 688312 kB Inactive(anon): 4416 kB Active(file): 192768 kB Inactive(file): 241600 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 4095992 kB SwapFree: 4095992 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 688428 kB Mapped: 65156 kB Shmem: 4216 kB Slab: 92428 kB SReclaimable: 31260 kB SUnreclaim: 61168 kB KernelStack: 2392 kB 
PageTables: 28356 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5118392 kB Committed_AS: 1530212 kB VmallocTotal: 34359738367 kB VmallocUsed: 343604 kB VmallocChunk: 34359372920 kB HardwareCorrupted: 0 kB AnonHugePages: 520192 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 8556 kB DirectMap2M: 2078720 kB [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501360 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3036 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14449 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501356 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3048 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14470 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 742172 76376 371064 0 0 6 6 78 202 2 1 97 1 0 0 0 0 742164 76380 371060 0 0 0 16 191 467 2 1 93 5 0 0 0 0 742164 76380 371064 0 0 0 0 148 388 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 418 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 145 380 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 166 429 2 1 97 0 0 1 0 0 742164 76380 371064 0 0 0 0 148 373 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 149 382 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 168 408 2 0 97 0 0 0 0 0 742164 76380 371064 0 0 0 0 165 394 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 354 2 1 98 0 0 0 0 0 742164 76388 371060 0 0 0 16 180 447 2 0 91 6 0 0 0 0 742164 76388 371064 0 0 0 0 143 344 2 1 98 0 0 0 1 0 742784 76416 370044 0 0 28 580 360 678 3 1 74 23 0 1 0 0 744768 76496 367772 0 0 40 1036 437 865 3 1 53 43 0 0 1 0 747248 76596 365412 0 0 48 1224 561 923 3 2 53 43 0 0 1 0 749232 76696 363092 0 0 32 1132 512 883 3 2 52 44 0 0 1 0 751340 76772 361020 0 0 32 1008 472 872 2 1 52 45 0 0 1 0 753448 76840 358540 0 0 36 1088 512 860 2 1 51 46 0 0 1 0 755060 76936 357636 0 0 28 1012 481 922 2 2 52 45 0 0 1 0 755060 77064 357988 0 0 12 896 444 902 2 1 53 45 0 0 1 0 754688 77148 358448 0 0 16 1096 506 1007 1 1 56 42 0 0 2 0 754192 77268 358932 0 0 12 1060 481 957 1 2 53 44 0 0 1 0 753696 77380 359392 0 0 12 1052 512 1025 2 1 55 42 0 0 1 0 751028 77480 359828 0 0 8 984 423 909 2 2 52 45 0 0 1 0 750524 77620 360200 0 0 8 788 367 869 1 2 54 44 0 0 1 0 749904 77700 360664 0 0 8 928 439 924 2 2 55 43 0 0 1 0 749408 77796 361084 0 0 12 976 468 967 1 1 56 43 0 0 1 0 748788 77896 361464 0 0 12 992 453 944 1 2 54 43 0 1 1 0 748416 77992 361996 0 0 12 784 392 868 2 1 52 46 0 0 1 0 747920 
78092 362336 0 0 4 896 382 874 1 1 52 46 0 0 1 0 745252 78172 362780 0 0 12 1040 444 923 1 1 56 42 0 0 1 0 744764 78288 363220 0 0 8 1024 448 934 2 1 55 43 0 0 1 0 744144 78408 363668 0 0 8 1000 461 982 2 1 53 44 0 0 1 0 743648 78488 364148 0 0 8 872 443 888 2 1 54 43 0 0 1 0 743152 78548 364468 0 0 16 1020 511 995 2 1 55 43 0 0 1 0 742656 78632 365024 0 0 12 928 431 913 1 2 53 44 0 0 1 0 742160 78728 365468 0 0 12 996 470 955 2 2 54 44 0 1 1 0 739492 78840 365896 0 0 8 988 447 939 1 2 52 46 0 0 1 0 738872 78996 366352 0 0 12 972 442 928 1 1 55 44 0 1 1 0 738244 79148 366812 0 0 8 948 549 1126 2 2 54 43 0 0 1 0 737624 79312 367188 0 0 12 996 456 953 2 2 54 43 0 0 1 0 736880 79456 367660 0 0 12 960 444 918 1 1 53 46 0 0 1 0 736260 79584 368124 0 0 8 884 414 921 1 1 54 44 0 0 1 0 735648 79716 368488 0 0 12 976 450 955 2 1 56 41 0 0 1 0 733104 79840 368988 0 0 12 932 453 918 1 2 55 43 0 0 1 0 732608 79996 369356 0 0 16 916 444 889 1 2 54 43 0 1 1 0 731476 80128 369800 0 0 16 852 514 978 2 2 54 43 0 0 1 0 731244 80252 370200 0 0 8 904 398 870 2 1 55 43 0 1 1 0 730624 80384 370612 0 0 12 1032 447 977 1 2 57 41 0 0 1 0 730004 80524 371096 0 0 12 984 469 941 2 2 52 45 0 0 1 0 729508 80636 371544 0 0 12 928 438 922 2 1 52 46 0 0 1 0 728888 80756 371948 0 0 16 972 439 943 2 1 55 43 0 0 1 0 726468 80900 372272 0 0 8 960 545 1024 2 1 54 43 0 1 1 0 726344 81024 372272 0 0 8 464 490 1057 1 2 53 44 0 0 1 0 726096 81148 372276 0 0 4 328 441 1063 2 1 53 45 0 1 1 0 726096 81256 372292 0 0 0 296 387 975 1 1 53 45 0 0 1 0 725848 81380 372284 0 0 4 332 425 1034 2 1 54 44 0 1 1 0 725848 81496 372300 0 0 4 308 386 992 2 1 54 43 0 0 1 0 725600 81616 372296 0 0 4 328 404 1060 1 1 54 44 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 725600 81732 372296 0 0 4 328 439 1011 1 1 53 44 0 0 1 0 725476 81848 372308 0 0 0 316 441 1023 2 2 52 46 0 1 1 0 725352 81972 372300 0 0 4 344 451 1021 1 1 55 43 0 2 1 0 725228 82088 372320 0 0 0 328 427 1058 1 1 54 44 0 1 1 0 724980 82220 372300 0 0 4 336 419 999 2 1 54 44 0 1 1 0 724980 82328 372320 0 0 4 320 430 1019 1 1 54 44 0 1 1 0 724732 82436 372328 0 0 0 388 363 942 2 1 54 44 0 1 1 0 724608 82560 372312 0 0 4 308 419 993 1 2 54 44 0 1 0 0 724360 82684 372320 0 0 0 304 421 1028 2 1 55 42 0 1 0 0 724360 82684 372388 0 0 0 0 158 416 2 1 98 0 0 1 1 0 724236 82720 372360 0 0 0 6464 243 855 3 2 84 12 0 1 0 0 724112 82748 372360 0 0 0 5356 266 895 3 1 84 12 0 2 1 0 724112 82764 372380 0 0 0 3052 221 511 2 2 93 4 0 1 0 0 724112 82796 372372 0 0 0 4548 325 1067 2 2 81 16 0 1 0 0 724112 82816 372368 0 0 0 3240 259 829 3 1 90 6 0 1 0 0 724112 82836 372380 0 0 0 3260 309 822 3 2 88 8 0 1 1 0 724112 82876 372364 0 0 0 4680 326 978 3 1 77 19 0 1 0 0 724112 82884 372380 0 0 0 512 207 508 2 1 95 2 0 1 0 0 724112 82884 372388 0 0 0 0 138 361 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 158 397 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 146 395 2 1 98 0 0 2 0 0 724112 82884 372388 0 0 0 0 160 395 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 163 382 1 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 176 422 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 134 351 2 1 98 0 0 0 0 0 724112 82884 372388 0 0 0 0 190 429 2 1 97 0 0 0 0 0 724104 82884 372392 0 0 0 0 139 358 2 1 98 0 0 0 0 0 724848 82884 372392 0 0 0 4 211 432 2 1 97 0 0 1 0 0 724980 82884 372392 0 0 0 0 166 370 2 1 98 0 0 0 0 0 724980 82884 372392 0 0 0 0 164 397 2 1 98 0 0 ^C [root@localhost ~]#
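    A note on the numbers above: the vmstat capture in the EDIT section (apparently taken on the slow laptop) shows the machine sitting at roughly 40-45% iowait for the whole loaded stretch of the trace, with a steady ~1000 blocks/s being written, so the slowdown looks disk/flush-bound rather than CPU-bound. If the InnoDB settings are still at their defaults (a buffer pool of only a few megabytes and a log flush on every commit), the following my.cnf additions are the usual first knobs for a disk-bound box. This is an illustrative sketch only; the values are hypothetical and not a verified fix for this particular benchmark:

    [mysqld]
    # Give InnoDB a real buffer pool instead of the default few megabytes
    innodb_buffer_pool_size = 512M
    # Flush the InnoDB log roughly once per second instead of on every commit
    # (slightly relaxed durability, far fewer fsync calls under load)
    innodb_flush_log_at_trx_commit = 2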

    Read the article

  • Postgres cannot connect to server

    - by user1408935
    Super stumped by why Postgres isn't working on a new app I just started. I've got it working for one app already. I'm using postgres.app, and it's running. I started a new app with rails new depot -d postgresql and then I went into the database.yml file and changed username to my $USER (which is what it is for the other app, which is working). So now my database.yml file has this development section: development: adapter: postgresql encoding: unicode database: depot_development pool: 5 username: <username> password: But when I run "rake db:create" or "rake db:create:all" I still got this error (in full, cause I don't know what's relevant): Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"depot_development", "pool"=>5, "username"=>"<username>", "password"=>nil} could not connect to server: Permission denied Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"? /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `initialize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `new' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:1213:in `connect' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:329:in `initialize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `new' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/postgresql_adapter.rb:28:in `postgresql_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:309:in `new_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:319:in `checkout_new_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:241:in `block (2 levels) in checkout' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `loop' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:236:in `block in checkout' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:233:in `checkout' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in `block in connection' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:95:in `connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_pool.rb:404:in `retrieve_connection' 
/Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:170:in `retrieve_connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/connection_adapters/abstract/connection_specification.rb:144:in `connection' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:107:in `rescue in create_database' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:51:in `create_database' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (3 levels) in <top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/gems/activerecord-3.2.8/lib/active_record/railties/databases.rake:40:in `block (2 levels) in <top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `call' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:205:in `block in execute' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:200:in `execute' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:158:in `block in invoke_with_call_chain' /Users/<username>/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/monitor.rb:211:in `mon_synchronize' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:151:in `invoke_with_call_chain' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/task.rb:144:in `invoke' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:116:in `invoke_task' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block (2 levels) in top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `each' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:94:in `block in top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:88:in `top_level' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:66:in `block in run' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:133:in `standard_exception_handling' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/lib/rake/application.rb:63:in `run' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/gems/rake-0.9.2.2/bin/rake:33:in `<top (required)>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/bin/rake:19:in `load' /Users/<username>/.rvm/gems/ruby-1.9.3-p194@global/bin/rake:19:in `<main>' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `eval' /Users/<username>/.rvm/gems/ruby-1.9.3-p194/bin/ruby_noexec_wrapper:14:in `<main>' Couldn't create database for {"adapter"=>"postgresql", "encoding"=>"unicode", "database"=>"depot_test", "pool"=>5, "username"=>"<username>", "password"=>nil} I have tried 
createdb depot_development. I have tried going into the psql environment and listing users (which included my username among them). In the same psql environment, I tried CREATE DATABASE depot; I've made sure that the pg gem is installed with bundle install, and I've run "pg_ctl start", to which I got this response: pg_ctl: no database directory specified and environment variable PGDATA unset I ran "ps aux | grep postgres" to make sure postgres was running, to which I got this in return (which looks like it's doing OK, right?): <username> 10390 0.4 0.0 2425480 180 s000 R+ 6:15PM 0:00.00 grep postgres <username> 2907 0.0 0.0 2441604 464 ?? Ss 6:17PM 0:02.31 postgres: stats collector process <username> 2906 0.0 0.0 2445520 1664 ?? Ss 6:17PM 0:02.33 postgres: autovacuum launcher process <username> 2905 0.0 0.0 2445388 600 ?? Ss 6:17PM 0:09.25 postgres: wal writer process <username> 2904 0.0 0.0 2445388 1252 ?? Ss 6:17PM 0:12.08 postgres: writer process <username> 2902 0.0 0.0 2445388 3688 ?? S 6:17PM 0:00.54 /Applications/Postgres.app/Contents/MacOS/bin/postgres -D /Users/<username>/Library/Application Support/Postgres/var -p5432 The short of it is that I've been troubleshooting for a WHILE and have NO idea what's wrong. Any ideas? I'd really appreciate it, because I'm pretty new to Rails, and this is a pretty disheartening roadblock. Thanks! EDIT -- Per request, posting the successful database.yml. It seems the difference is the inclusion of a password: development: adapter: postgresql encoding: unicode database: *******_development pool: 5 username: ******* password: ******* EDIT2 -- When I add a password to the .yml file, then run rake db:create again, I get this error: rake aborted! No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
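    The error above is the client-side libpq looking for a Unix socket at /var/pgsql_socket, while the ps output shows Postgres.app running with -p5432 and its own data directory. One way to sidestep that socket-path mismatch is to force a TCP connection by adding host (and optionally port) to database.yml; a minimal sketch of the development block, assuming Postgres.app really is accepting connections on localhost:5432 as the ps output suggests:

    development:
      adapter: postgresql
      encoding: unicode
      database: depot_development
      pool: 5
      username: <username>   # same placeholder as in the question
      password:
      host: localhost        # forces TCP instead of the /var/pgsql_socket socket
      port: 5432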

    Read the article

  • How to implement Survey page using ASP.NET MVC?

    - by Aleks
    I need to implement the survey page using ASP.NET MVC (v.4) That functionality has already been implemented in our project using ASP.NET WebForms. (I really searched a lot for real examples of similar functionality implemented via MVC, but failed) Goal: staying on the same page (in webforms -'Survey.aspx') each time user clicks 'Next Page', load next bunch of questions (controls) which user is going to answer. Type of controls in questions are defined only in run-time (retrieved from Data Base). To explain better the question I manually created (rather simple) mark-up below of 'two' pages (two loads of controls): <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Survey.aspx.cs" Inherits="WebSite.Survey" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div><h2>Internal Survey</h2></div> <div><h3>Page 1</h3></div> <div style="padding-bottom: 10px"><div><b>Did you have internet disconnections during last week?</b></div> <asp:RadioButtonList ID="RadioButtonList1" runat="server"> <asp:ListItem>Yes</asp:ListItem> <asp:ListItem>No</asp:ListItem> </asp:RadioButtonList> </div> <div style="padding-bottom: 10px"><div><b>Which days of the week suit you best for meeting up ?</b></div> <asp:CheckBoxList ID="CheckBoxList1" runat="server"> <asp:ListItem>Monday</asp:ListItem> <asp:ListItem>Tuesday</asp:ListItem> <asp:ListItem>Wednesday</asp:ListItem> <asp:ListItem>Thursday</asp:ListItem> <asp:ListItem>Friday</asp:ListItem> </asp:CheckBoxList> </div> <div style="padding-bottom: 10px"> <div><b>How satisfied are you with your job? </b></div> <asp:RadioButtonList ID="RadioButtonList2" runat="server"> <asp:ListItem>Very Good</asp:ListItem> <asp:ListItem>Good</asp:ListItem> <asp:ListItem>Bad</asp:ListItem> <asp:ListItem>Very Bad</asp:ListItem> </asp:RadioButtonList> </div> <div style="padding-bottom: 10px"> <div><b>How satisfied are you with your direct supervisor ? </b></div> <asp:RadioButtonList ID="RadioButtonList3" runat="server"> <asp:ListItem>Not Satisfied</asp:ListItem> <asp:ListItem>Somewhat Satisfied</asp:ListItem> <asp:ListItem>Neutral</asp:ListItem> <asp:ListItem>Satisfied</asp:ListItem> <asp:ListItem>Very Satisfied</asp:ListItem> </asp:RadioButtonList> </div> <div style="padding-bottom: 10px"> <asp:Button ID="Button1" runat="server" Text="Next Page" onclick="Button1_Click" /> </div> </form> </body> </html> PAGE 2 <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Survey.aspx.cs" Inherits="WebSite.Survey" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div><h2>Internal Survey</h2></div> <div><h3>Page 2</h3></div> <div style="padding-bottom: 10px"><div><b>Did admininstators fix your internet connection in time ?</b></div> <asp:RadioButtonList ID="RadioButtonList1" runat="server"> <asp:ListItem>Yes</asp:ListItem> <asp:ListItem>No</asp:ListItem> </asp:RadioButtonList> </div> <div style="padding-bottom: 10px"><div><b>What's your overal impression about the job ?</b></div> <asp:TextBox ID="TextBox1" runat="server" Height="88px" Width="322px"></asp:TextBox> </div> <div style="padding-bottom: 10px"> <div><b>Select day which best suits you for admin support ? 
</b></div> <asp:DropDownList ID="DropDownList1" runat="server"> <asp:ListItem>Select day</asp:ListItem> <asp:ListItem>Monday</asp:ListItem> <asp:ListItem>Wednesday</asp:ListItem> <asp:ListItem>Friday</asp:ListItem> </asp:DropDownList> </div> <div style="padding-bottom: 10px"> <asp:Button ID="Button1" runat="server" Text="Next Page" onclick="Button1_Click" /> </div> </form> </body> </html>
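    For the MVC version, one common shape is a view model per survey page plus a controller that round-trips it: the view loops over the questions and picks an editor template (radio list, check-box list, drop-down, free text) based on a type value read from the database, so a single Survey view can render whatever mix of controls a page needs. Below is a minimal sketch; all names, and the ISurveyRepository abstraction, are hypothetical rather than an existing API:

    using System.Collections.Generic;
    using System.Web.Mvc;

    public enum QuestionType { RadioList, CheckBoxList, DropDown, FreeText }

    public class SurveyQuestionVm
    {
        public int QuestionId { get; set; }
        public string Text { get; set; }
        public QuestionType Type { get; set; }                 // decided at run time, from the DB
        public IList<string> Options { get; set; }             // choices for list-type controls
        public string Answer { get; set; }                     // radio / drop-down / free-text answer
        public IList<string> SelectedOptions { get; set; }     // check-box list answers
    }

    public class SurveyPageVm
    {
        public int PageNumber { get; set; }
        public IList<SurveyQuestionVm> Questions { get; set; }
    }

    public interface ISurveyRepository                         // assumed data-access abstraction
    {
        SurveyPageVm GetPage(int pageNumber);
        void SaveAnswers(SurveyPageVm page);
        bool HasPage(int pageNumber);
    }

    public class SurveyController : Controller
    {
        private readonly ISurveyRepository _repo;
        public SurveyController(ISurveyRepository repo) { _repo = repo; }

        [HttpGet]
        public ActionResult Page(int pageNumber = 1)
        {
            // Load this page's questions (including their control types) from the database.
            return View(_repo.GetPage(pageNumber));
        }

        [HttpPost]
        public ActionResult Page(SurveyPageVm model)
        {
            _repo.SaveAnswers(model);                          // persist this page's answers
            return _repo.HasPage(model.PageNumber + 1)
                ? RedirectToAction("Page", new { pageNumber = model.PageNumber + 1 })
                : RedirectToAction("Completed");
        }
    }

    Clicking "Next Page" is then just the POST above followed by a redirect to the same action with the next page number, which mirrors the stay-on-Survey.aspx behaviour of the WebForms version.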

    Read the article

  • Need guidance on a Google Map application that has to show 250 000 polylines.

    - by lucian.jp
    I am looking for advice for an application I am developing that uses Google Map. Summary: A user has a list of criteria for searching a street segment that fulfills the criteria. The street segments will be colored with 3 colors for showing those below average, average and over average. Then the user clicks on the street segment to see an information window showing the properties of that specific segment hiding those not selected until he/she closes the window and other polyline becomes visible again. This looks quite like the Monopoly City Streets game Hasbro made some month ago the difference being I do not use Flash, I can’t use Open Street Map because it doesn’t list street segment (if it does the IDs won’t be the same anyway) and I do not have to show Google sketch building over. Information: I have a database of street segments with IDs, polyline points and centroid. The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on city. The largest city we must show has 250,000 street segments. This means 250,000 line segment polyline to show. Our longest polyline uses 9600 characters which is stored in two 8000 varchar columns in SQL Server 2008. We need to use the API v3 because it is faster than the API v2 and the application will be ported to iPhone. For now it's an ASP.NET 3.5 with SQl Server 2008 application. Performance is a priority. Problems: Most of the demo projects that do this are made with API v2. So besides tutorial on the Google API v3 reference page I have nothing to compare performance or technology use to achieve my goal. There is no available .NET wrapper for the API v3 yet. Generating a 250,000 line segment polyline creates a heavy file which takes time to transfer and parse. (I have found a demo of one polyline of 390,000 points. I think the encoder would be far less efficient with more polylines with less points since there will be less rounding.) Since streets segments are shown based on criteria, polylines must be dynamically created and cache can't be used. Some thoughts: KML/KMZ: Pros: Since it is a standard we can easily load Bing maps, Yahoo! maps, Google maps, Google Earth, with the same KML file. The data generation would be the same. Cons: LineString in KML cannot be encoded polyline like the Google map API can handle. So it would probably be bigger and slower to display. Zipping the file at the size it will take more processing time and require the client side to uncompress the data and I am not quite sure with 250,000 data how an iPhone would handle this and how a server would handle 40 users browsing at the same time. JavaScript file: Pros: JavaScript file can have encoded polyline and would significantly reduce the file to transfer. Cons: Have to create my own stripped version of API v3 to add overlays, create polyline, etc. It is more complex than just create a KML file and point to the source. GeoRSS: This option isn't adapted for my needs I think, but I could be wrong. MapServer: I saw some post suggesting using MapServer to generate overlays. Not quite sure for the connection with our database and the performance it would give. Plus it requires a plugin for generating KML. It seems to me that it wouldn't allow me to do better than creating my own KML or JavaScript file. Maintenance would be simpler without. 
Monopoly City Streets: The game is now over, but for those who know what I am talking about Monopoly City Streets was showing at max zoom level only the streets that the centroid was inside the Bounds of the window. Moving the map was sending request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought about was to compare if the long was inside the bound of map area X and same with Y. While this could improve performance significantly at high zoom level, this would give nothing when showing a whole city. Clustering: While cluster is awesome for marker, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines and be able to cluster by my 3 polyline colors. This will probably stay as a “would have been freaking awesome but forget it”. Arrow: I will have in a future version to show a direction for the polyline and will have to show an arrow at the centroid. Loading an image or marker will only double my data so creating a custom overlay will probably be my only option. I have found that demo for something similar I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline and not multiple like the demo. This functionality will depend on the format of data since I don't think KML support custom overlays. Criteria: While the application is done with ASP.NET 3.5, the port to the iPhone won't use the web to show the application and be limited in screen size for selecting the criteria. This is why I was more orienting on a service or page generating the file based on criteria passed in parameters. The service would than generate the file I need to display the polylines on the map. I could also create an aspx page that does this. The aspx page is more documented than the service way. There should be a reason. Questions: Should I create a web service to returns the street segments file or create an aspx page that return the file? Should I create a JavaScript file with encoded polyline or a KML with longitude/latitude based on the fact that maximum longitude/latitude polyline have 9600 characters and I have to render maximum 250,000 line segment polyline. Or should I go with a MapServer that generate the overlay? Will I be able to display simple arrow on the polyline on the next version. In case of KML generation is it faster to create the file with XDocument, XmlDocument, XmlWriter and this manually or just serialize the street segment in the stream? This is more a brainstorming Stack Overflow question than an actual code problem. Any answer helping narrow the possibilities is as good as someone having all the knowledge to point me out a better choice.
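    Since the main worry above is payload size for 250,000 segments, one concrete piece is the encoded-polyline format itself: the encoding is a documented, language-neutral algorithm (coordinate deltas scaled by 1e5, split into 5-bit chunks, offset by 63), so the ASP.NET side can emit the same compact strings and the client can turn them back into paths with the v3 geometry library's google.maps.geometry.encoding.decodePath. A sketch of the encoder in C#, assuming a simple LatLng value type (hypothetical, not part of any library used here):

    using System;
    using System.Collections.Generic;
    using System.Text;

    public struct LatLng { public double Lat; public double Lng; }

    public static class PolylineEncoder
    {
        // Encodes a sequence of points using Google's encoded-polyline algorithm.
        public static string Encode(IEnumerable<LatLng> points)
        {
            var sb = new StringBuilder();
            long prevLat = 0, prevLng = 0;
            foreach (var p in points)
            {
                long lat = (long)Math.Round(p.Lat * 1e5);
                long lng = (long)Math.Round(p.Lng * 1e5);
                EncodeValue(lat - prevLat, sb);   // deltas keep the strings short
                EncodeValue(lng - prevLng, sb);
                prevLat = lat;
                prevLng = lng;
            }
            return sb.ToString();
        }

        private static void EncodeValue(long value, StringBuilder sb)
        {
            long v = value < 0 ? ~(value << 1) : value << 1;    // fold the sign into the low bit
            while (v >= 0x20)
            {
                sb.Append((char)((0x20 | (v & 0x1f)) + 63));    // 5-bit chunk plus continuation bit
                v >>= 5;
            }
            sb.Append((char)(v + 63));
        }
    }

    Serving a JSON array of { id, color, encodedPath } objects for the current viewport, and decoding each string client-side into a google.maps.Polyline, keeps the transfer far smaller than raw coordinate lists or un-encoded KML LineStrings.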

    Read the article

  • Socket operation on non-socket or bad file descriptor

    - by Magn3s1um
    I'm writing a pthread server which takes requests from clients and sends them back a bunch of .ppm files. Everything seems to go well, but sometimes when I have just 1 client connected, when trying to read from the file descriptor (for the file), it says Bad file Descriptor. This doesn't make sense, since my int fd isn't -1, and the file most certainly exists. Other times, I get this "Socket operation on nonsocket" error. This is weird because other times, it doesn't give me this error and everything works fine. When trying to connect multiple clients, for some reason, it will only send correctly to one, and then the other client gets the bad file descriptor or "nonsocket" error, even though both threads are processing the same messages and do the same routines. Anyone have an idea why? Here's the code that is giving me that error: while(mqueue.head != mqueue.tail && count < dis_m){ printf("Sending to client %s: %s\n", pointer->id, pointer->message); int fd; fd = open(pointer->message, O_RDONLY); char buf[58368]; int bytesRead; printf("This is fd %d\n", fd); bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); fflush(stdout); close(fd); mqueue.mcount--; mqueue.head = mqueue.head->next; free(pointer->message); free(pointer); pointer = mqueue.head; count++; } printf("Sending %s\n", pointer->message); int fd; fd = open(pointer->message, O_RDONLY); printf("This is fd %d\n", fd); printf("I am hhere2\n"); char buf[58368]; int bytesRead; bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); close(fd); mqueue.mcount--; if(mqueue.head != mqueue.tail){ mqueue.head = mqueue.head->next; } else{ mqueue.head->next = malloc(sizeof(struct message)); mqueue.head = mqueue.head->next; mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.head->message = NULL; } free(pointer->message); free(pointer); pthread_mutex_unlock(&numm); pthread_mutex_unlock(&circ); pthread_mutex_unlock(&slots); The messages for both threads are the same, being of the form ./path/imageXX.ppm where XX is the number that should go to the client. The file size of each image is 58368 bytes. Sometimes, this code hangs on the read, and stops execution. I don't know this would be either, because the file descriptor comes back as valid. Thanks in advanced. Edit: Here's some sample output: Sending to client a: ./support/images/sw90.ppm This is fd 4 Error: : Socket operation on non-socket Sending to client a: ./support/images/sw91.ppm This is fd 4 Error: : Socket operation on non-socket Sending ./support/images/sw92.ppm This is fd 4 I am hhere2 Error: : Socket operation on non-socket My dispatcher has defeated evil Sample with 2 clients (client b was serviced first) Sending to client b: ./support/images/sw87.ppm This is fd 6 Error: : Success Sending to client b: ./support/images/sw88.ppm This is fd 6 Error: : Success Sending to client b: ./support/images/sw89.ppm This is fd 6 Error: : Success This is fd 6 Error: : Bad file descriptor Sending to client a: ./support/images/sw85.ppm This is fd 6 Error: As you can see, who ever is serviced first in this instance can open the files, but not the 2nd person. Edit2: Full code. Sorry, its pretty long and terribly formatted. 
#include <netinet/in.h> #include <netinet/in.h> #include <netdb.h> #include <arpa/inet.h> #include <sys/types.h> #include <sys/socket.h> #include <errno.h> #include <stdio.h> #include <unistd.h> #include <pthread.h> #include <stdlib.h> #include <string.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> #include "ring.h" /* Version 1 Here is what is implemented so far: The threads are created from the arguments specified (number of threads that is) The server will lock and update variables based on how many clients are in the system and such. The socket that is opened when a new client connects, must be passed to the threads. To do this, we need some sort of global array. I did this by specifying an int client and main_pool_busy, and two pointers poolsockets and nonpoolsockets. My thinking on this was that when a new client enters the system, the server thread increments the variable client. When a thread is finished with this client (after it sends it the data), the thread will decrement client and close the socket. HTTP servers act this way sometimes (they terminate the socket as soon as one transmission is sent). *Note down at bottom After the server portion increments the client counter, we must open up a new socket (denoted by new_sd) and get this value to the appropriate thread. To do this, I created global array poolsockets, which will hold all the socket descriptors for our pooled threads. The server portion gets the new socket descriptor, and places the value in the first spot of the array that has a 0. We only place a value in this array IF: 1. The variable main_pool_busy < worknum (If we have more clients in the system than in our pool, it doesn't mean we should always create a new thread. At the end of this, the server signals on the condition variable clientin that a new client has arrived. In our pooled thread, we then must walk this array and check the array until we hit our first non-zero value. This is the socket we will give to that thread. The thread then changes the array to have a zero here. What if our all threads in our pool our busy? If this is the case, then we will know it because our threads in this pool will increment main_pool_busy by one when they are working on a request and decrement it when they are done. If main_pool_busy >= worknum, then we must dynamically create a new thread. Then, we must realloc the size of our nonpoolsockets array by 1 int. We then add the new socket descriptor to our pool. Here's what we need to figure out: NOTE* Each worker should generate 100 messages which specify the worker thread ID, client socket descriptor and a copy of the client message. Additionally, each message should include a message number, starting from 0 and incrementing for each subsequent message sent to the same client. I don't know how to keep track of how many messages were to the same client. Maybe we shouldn't close the socket descriptor, but rather keep an array of structs for each socket that includes how many messages they have been sent. Then, the server adds the struct, the threads remove it, then the threads add it back once they've serviced one request (unless the count is 100). ------------------------------------------------------------- CHANGES Version 1 ---------- NONE: this is the first version. 
*/ #define MAXSLOTS 30 #define dis_m 15 //problems with dis_m ==1 //Function prototypes void inc_clients(); void init_mutex_stuff(pthread_t*, pthread_t*); void *threadpool(void *); void server(int); void add_to_socket_pool(int); void inc_busy(); void dec_busy(); void *dispatcher(); void create_message(long, int, int, char *, char *); void init_ring(); void add_to_ring(char *, char *, int, int, int); int socket_from_string(char *); void add_to_head(char *); void add_to_tail(char *); struct message * reorder(struct message *, struct message *, int); int get_threadid(char *); void delete_socket_messages(int); struct message * merge(struct message *, struct message *, int); int get_request(char *, char *, char*); ///////////////////// //Global mutexes and condition variables pthread_mutex_t startservice; pthread_mutex_t numclients; pthread_mutex_t pool_sockets; pthread_mutex_t nonpool_sockets; pthread_mutex_t m_pool_busy; pthread_mutex_t slots; pthread_mutex_t numm; pthread_mutex_t circ; pthread_cond_t clientin; pthread_cond_t m; /////////////////////////////////////// //Global variables int clients; int main_pool_busy; int * poolsockets, nonpoolsockets; int worknum; struct ring mqueue; /////////////////////////////////////// int main(int argc, char ** argv){ //error handling if not enough arguments to program if(argc != 3){ printf("Not enough arguments to server: ./server portnum NumThreadsinPool\n"); _exit(-1); } //Convert arguments from strings to integer values int port = atoi(argv[1]); worknum = atoi(argv[2]); //Start server portion server(port); } /////////////////////////////////////////////////////////////////////////////////////////////// //The listen server thread///////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////////// void server(int port){ int sd, new_sd; struct sockaddr_in name, cli_name; int sock_opt_val = 1; int cli_len; pthread_t threads[worknum]; //create our pthread id array pthread_t dis[1]; //create our dispatcher array (necessary to create thread) init_mutex_stuff(threads, dis); //initialize mutexes and stuff //Server setup /////////////////////////////////////////////////////// if ((sd = socket (AF_INET, SOCK_STREAM, 0)) < 0) { perror("(servConn): socket() error"); _exit (-1); } if (setsockopt (sd, SOL_SOCKET, SO_REUSEADDR, (char *) &sock_opt_val, sizeof(sock_opt_val)) < 0) { perror ("(servConn): Failed to set SO_REUSEADDR on INET socket"); _exit (-1); } name.sin_family = AF_INET; name.sin_port = htons (port); name.sin_addr.s_addr = htonl(INADDR_ANY); if (bind (sd, (struct sockaddr *)&name, sizeof(name)) < 0) { perror ("(servConn): bind() error"); _exit (-1); } listen (sd, 5); //End of server Setup ////////////////////////////////////////////////// for (;;) { cli_len = sizeof (cli_name); new_sd = accept (sd, (struct sockaddr *) &cli_name, &cli_len); printf ("Assigning new socket descriptor: %d\n", new_sd); inc_clients(); //New client has come in, increment clients add_to_socket_pool(new_sd); //Add client to the pool of sockets if (new_sd < 0) { perror ("(servConn): accept() error"); _exit (-1); } } pthread_exit(NULL); //Quit } //Adds the new socket to the array designated for pthreads in the pool void add_to_socket_pool(int socket){ pthread_mutex_lock(&m_pool_busy); //Lock so that we can check main_pool_busy int i; //If not all our main pool is busy, then allocate to one of them if(main_pool_busy < worknum){ pthread_mutex_unlock(&m_pool_busy); //unlock busy, we no 
longer need to hold it pthread_mutex_lock(&pool_sockets); //Lock the socket pool array so that we can edit it without worry for(i = 0; i < worknum; i++){ //Find a poolsocket that is -1; then we should put the real socket there. This value will be changed back to -1 when the thread grabs the sockfd if(poolsockets[i] == -1){ poolsockets[i] = socket; pthread_mutex_unlock(&pool_sockets); //unlock our pool array, we don't need it anymore inc_busy(); //Incrememnt busy (locks the mutex itself) pthread_cond_signal(&clientin); //Signal first thread waiting on a client that a client needs to be serviced break; } } } else{ //Dynamic thread creation goes here pthread_mutex_unlock(&m_pool_busy); } } //Increments the client number. If client number goes over worknum, we must dynamically create new pthreads void inc_clients(){ pthread_mutex_lock(&numclients); clients++; pthread_mutex_unlock(&numclients); } //Increments busy void inc_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy++; pthread_mutex_unlock(&m_pool_busy); } //Initialize all of our mutexes at the beginning and create our pthreads void init_mutex_stuff(pthread_t * threads, pthread_t * dis){ pthread_mutex_init(&startservice, NULL); pthread_mutex_init(&numclients, NULL); pthread_mutex_init(&pool_sockets, NULL); pthread_mutex_init(&nonpool_sockets, NULL); pthread_mutex_init(&m_pool_busy, NULL); pthread_mutex_init(&circ, NULL); pthread_cond_init (&clientin, NULL); main_pool_busy = 0; poolsockets = malloc(sizeof(int)*worknum); int threadreturn; //error checking variables long i = 0; //Loop and create pthreads for(i; i < worknum; i++){ threadreturn = pthread_create(&threads[i], NULL, threadpool, (void *) i); poolsockets[i] = -1; if(threadreturn){ perror("Thread pool created unsuccessfully"); _exit(-1); } } pthread_create(&dis[0], NULL, dispatcher, NULL); } ////////////////////////////////////////////////////////////////////////////////////////// /////////Main pool routines ///////////////////////////////////////////////////////////////////////////////////////// void dec_busy(){ pthread_mutex_lock(&m_pool_busy); main_pool_busy--; pthread_mutex_unlock(&m_pool_busy); } void dec_clients(){ pthread_mutex_lock(&numclients); clients--; pthread_mutex_unlock(&numclients); } //This is what our threadpool pthreads will be running. void *threadpool(void * threadid){ long id = (long) threadid; //Id of this thread int i; int socket; int counter = 0; //Try and gain access to the next client that comes in and wait until server signals that a client as arrived while(1){ pthread_mutex_lock(&startservice); //lock start service (required for cond wait) pthread_cond_wait(&clientin, &startservice); //wait for signal from server that client exists pthread_mutex_unlock(&startservice); //unlock mutex. 
pthread_mutex_lock(&pool_sockets); //Lock the pool socket so we can get the socket fd unhindered/interrupted for(i = 0; i < worknum; i++){ if(poolsockets[i] != -1){ socket = poolsockets[i]; poolsockets[i] = -1; pthread_mutex_unlock(&pool_sockets); } } printf("Thread #%d is past getting the socket\n", id); int incoming = 1; while(counter < 100 && incoming != 0){ char buffer[512]; bzero(buffer,512); int startcounter = 0; incoming = read(socket, buffer, 512); if(buffer[0] != 0){ //client ID:priority:request:arguments char id[100]; long prior; char request[100]; char arg1[100]; char message[100]; char arg2[100]; char * point; point = strtok(buffer, ":"); strcpy(id, point); point = strtok(NULL, ":"); prior = atoi(point); point = strtok(NULL, ":"); strcpy(request, point); point = strtok(NULL, ":"); strcpy(arg1, point); point = strtok(NULL, ":"); if(point != NULL){ strcpy(arg2, point); } int fd; if(strcmp(request, "start_movie") == 0){ int count = 1; while(count <= 100){ char temp[10]; snprintf(temp, 50, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s to %s\n", message, id); count++; add_to_ring(message, id, prior, counter, socket); //Adds our created message to the ring counter++; } printf("I'm out of the loop\n"); } else if(strcmp(request, "seek_movie") == 0){ int count = atoi(arg2); while(count <= 100){ char temp[10]; snprintf(temp, 10, "%d\0", count); strcpy(message, "./support/images/"); strcat(message, arg1); strcat(message, temp); strcat(message, ".ppm"); printf("This is message %s\n", message); count++; } } //create_message(id, socket, counter, buffer, message); //Creates our message from the input from the client. Stores it in buffer } else{ delete_socket_messages(socket); break; } } counter = 0; close(socket);//Zero out counter again } dec_clients(); //client serviced, decrement clients dec_busy(); //thread finished, decrement busy } //Creates a message void create_message(long threadid, int socket, int counter, char * buffer, char * message){ snprintf(message, strlen(buffer)+15, "%d:%d:%d:%s", threadid, socket, counter, buffer); } //Gets the socket from the message string (maybe I should just pass in the socket to another method) int socket_from_string(char * message){ char * substr1 = strstr(message, ":"); char * substr2 = substr1; substr2++; int occurance = strcspn(substr2, ":"); char sock[10]; strncpy(sock, substr2, occurance); return atoi(sock); } //Adds message to our ring buffer's head void add_to_head(char * message){ printf("Adding to head of ring\n"); mqueue.head->message = malloc(strlen(message)+1); //Allocate space for message strcpy(mqueue.head->message, message); //copy bytes into allocated space } //Adds our message to our ring buffer's tail void add_to_tail(char * message){ printf("Adding to tail of ring\n"); mqueue.tail->message = malloc(strlen(message)+1); //allocate space for message strcpy(mqueue.tail->message, message); //copy bytes into allocated space mqueue.tail->next = malloc(sizeof(struct message)); //allocate space for the next message struct } //Adds a message to our ring void add_to_ring(char * message, char * id, int prior, int mnum, int socket){ //printf("This is message %s:" , message); pthread_mutex_lock(&circ); //Lock the ring buffer pthread_mutex_lock(&numm); //Lock the message count (will need this to make sure we can't fill the buffer over the max slots) if(mqueue.head->message == NULL){ add_to_head(message); //Adds it to head mqueue.head->socket = 
socket; //Set message socket mqueue.head->priority = prior; //Set its priority (thread id) mqueue.head->mnum = mnum; //Set its message number (used for sorting) mqueue.head->id = malloc(sizeof(id)); strcpy(mqueue.head->id, id); } else if(mqueue.tail->message == NULL){ //This is the problem for dis_m 1 I'm pretty sure add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } else{ mqueue.tail->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.tail->next; add_to_tail(message); mqueue.tail->socket = socket; mqueue.tail->priority = prior; mqueue.tail->mnum = mnum; mqueue.tail->id = malloc(sizeof(id)); strcpy(mqueue.tail->id, id); } mqueue.mcount++; pthread_mutex_unlock(&circ); if(mqueue.mcount >= dis_m){ pthread_mutex_unlock(&numm); pthread_cond_signal(&m); } else{ pthread_mutex_unlock(&numm); } printf("out of add to ring\n"); fflush(stdout); } ////////////////////////////////// //Dispatcher routines ///////////////////////////////// void *dispatcher(){ init_ring(); while(1){ pthread_mutex_lock(&slots); pthread_cond_wait(&m, &slots); pthread_mutex_lock(&numm); pthread_mutex_lock(&circ); printf("Dispatcher to the rescue!\n"); mqueue.head = reorder(mqueue.head, mqueue.tail, mqueue.mcount); //printf("This is the head %s\n", mqueue.head->message); //printf("This is the tail %s\n", mqueue.head->message); fflush(stdout); struct message * pointer = mqueue.head; int count = 0; while(mqueue.head != mqueue.tail && count < dis_m){ printf("Sending to client %s: %s\n", pointer->id, pointer->message); int fd; fd = open(pointer->message, O_RDONLY); char buf[58368]; int bytesRead; printf("This is fd %d\n", fd); bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); fflush(stdout); close(fd); mqueue.mcount--; mqueue.head = mqueue.head->next; free(pointer->message); free(pointer); pointer = mqueue.head; count++; } printf("Sending %s\n", pointer->message); int fd; fd = open(pointer->message, O_RDONLY); printf("This is fd %d\n", fd); printf("I am hhere2\n"); char buf[58368]; int bytesRead; bytesRead=read(fd,buf,58368); send(pointer->socket,buf,bytesRead,0); perror("Error:\n"); close(fd); mqueue.mcount--; if(mqueue.head != mqueue.tail){ mqueue.head = mqueue.head->next; } else{ mqueue.head->next = malloc(sizeof(struct message)); mqueue.head = mqueue.head->next; mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.head->message = NULL; } free(pointer->message); free(pointer); pthread_mutex_unlock(&numm); pthread_mutex_unlock(&circ); pthread_mutex_unlock(&slots); printf("My dispatcher has defeated evil\n"); } } void init_ring(){ mqueue.head = malloc(sizeof(struct message)); mqueue.head->next = malloc(sizeof(struct message)); mqueue.tail = mqueue.head->next; mqueue.mcount = 0; } struct message * reorder(struct message * begin, struct message * end, int num){ //printf("I am reordering for size %d\n", num); fflush(stdout); int i; if(num == 1){ //printf("Begin: %s\n", begin->message); begin->next = NULL; return begin; } else{ struct message * left = begin; struct message * right; int middle = num/2; for(i = 1; i < middle; i++){ left = left->next; } right = left -> next; left -> next = NULL; //printf("Begin: %s\nLeft: %s\nright: %s\nend:%s\n", begin->message, left->message, right->message, end->message); left = reorder(begin, left, middle); if(num%2 != 0){ right = reorder(right, end, middle+1); } else{ right = 
reorder(right, end, middle); } return merge(left, right, num); } } struct message * merge(struct message * left, struct message * right, int num){ //printf("I am merginging! left: %s %d, right: %s %dnum: %d\n", left->message,left->priority, right->message, right->priority, num); struct message * start, * point; int lenL= 0; int lenR = 0; int flagL = 0; int flagR = 0; int count = 0; int middle1 = num/2; int middle2; if(num%2 != 0){ middle2 = middle1+1; } else{ middle2 = middle1; } while(lenL < middle1 && lenR < middle2){ count++; //printf("In here for count %d\n", count); if(lenL == 0 && lenR == 0){ if(left->priority < right->priority){ start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ start = right; point = right; right = right->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ ////printf("This is where we are\n"); start = left; //Set the start point point = left; //set our enum; left = left->next; //move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ start = right; point = right; right = right->next; point->next = NULL; lenR++; } } } else{ if(left->priority < right->priority){ point->next = left; left = left->next; //move the left pointer point = point->next; point->next = NULL; //Set the next node to NULL lenL++; } else if(left->priority > right->priority){ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } else{ if(left->mnum < right->mnum){ point->next = left; //set our enum; left = left->next; point = point->next;//move the left pointer point->next = NULL; //Set the next node to NULL lenL++; } else{ point->next = right; right = right->next; point = point->next; point->next = NULL; lenR++; } } } if(lenL == middle1){ flagL = 1; break; } if(lenR == middle2){ flagR = 1; break; } } if(flagL == 1){ point->next = right; point = point->next; for(lenR; lenR< middle2-1; lenR++){ point = point->next; } point->next = NULL; mqueue.tail = point; } else{ point->next = left; point = point->next; for(lenL; lenL< middle1-1; lenL++){ point = point->next; } point->next = NULL; mqueue.tail = point; } //printf("This is the start %s\n", start->message); //printf("This is mqueue.tail %s\n", mqueue.tail->message); return start; } void delete_socket_messages(int a){ }
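    One thing worth noting about the posted loops: perror("Error:\n") is called unconditionally after send(), so the "Error: Success" lines in the sample output just print whatever errno happened to contain, not necessarily a failure of that send(); likewise the return values of open(), read() and send() are never checked, which is exactly where "Bad file descriptor" and "Socket operation on non-socket" would be caught early. Below is a sketch of an error-checked helper for shipping one .ppm file (the name is hypothetical; it does not change the threading or queue logic):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Sends one file over an already-connected socket.
     * Returns 0 on success, -1 on any open/read/send failure. */
    static int send_file(int sock, const char *path)
    {
        char buf[58368];
        ssize_t n;
        int fd = open(path, O_RDONLY);

        if (fd < 0) {
            fprintf(stderr, "open(%s): %s\n", path, strerror(errno));
            return -1;
        }
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            ssize_t off = 0;
            while (off < n) {                      /* send() may write fewer bytes than asked */
                ssize_t sent = send(sock, buf + off, (size_t)(n - off), 0);
                if (sent < 0) {
                    fprintf(stderr, "send(sock=%d): %s\n", sock, strerror(errno));
                    close(fd);
                    return -1;
                }
                off += sent;
            }
        }
        if (n < 0)
            fprintf(stderr, "read(%s): %s\n", path, strerror(errno));
        close(fd);
        return n < 0 ? -1 : 0;
    }

    Checking each call separately also makes it obvious whether the descriptor in use is the file or the socket, which is the usual source of EBADF/ENOTSOCK errors when several threads share and close the same descriptors.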

    Read the article

  • MySQL is running VERY slow on CentOS 6x (not 5x)

    - by user1032531
    I have two servers: a VPS and a laptop. I recently re-built both of them, and MySQL is running about 20 times slower on the laptop. Both servers used to run CentOS 5.8 and I think MySQL 5.1, and the laptop used to do great so I do not think it is the hardware. For the VPS, my provider installed CentOS 6.4, and then I installed MySQL 5.1.69 using yum with the CentOS repo. For the laptop, I installed CentOS 6.4 basic server and then installed MySQL 5.1.69 using yum with the CentOS repo. my.cnf for both servers are identical, and I have shown below. For both servers, I've also included below the output from SHOW VARIABLES; as well as output from sysbench, file system information, and cpu information. I have tried adding skip-name-resolve, but it didn't help. The matrix below shows the SHOW VARIABLES output from both servers which is different. Again, MySQL was installed the same way, so I do not know why it is different, but it is and I think this might be why the laptop is executing MySQL so slowly. Why is the laptop running MySQL slowly, and how do I fix it? Differences between SHOW VARIABLES on both servers +---------------------------+-----------------------+-------------------------+ | Variable | Value-VPS | Value-Laptop | +---------------------------+-----------------------+-------------------------+ | hostname | vps.site1.com | laptop.site2.com | | max_binlog_cache_size | 4294963200 | 18446744073709500000 | | max_seeks_for_key | 4294967295 | 18446744073709500000 | | max_write_lock_count | 4294967295 | 18446744073709500000 | | myisam_max_sort_file_size | 2146435072 | 9223372036853720000 | | myisam_mmap_size | 4294967295 | 18446744073709500000 | | plugin_dir | /usr/lib/mysql/plugin | /usr/lib64/mysql/plugin | | pseudo_thread_id | 7568 | 2 | | system_time_zone | EST | PDT | | thread_stack | 196608 | 262144 | | timestamp | 1372252112 | 1372252046 | | version_compile_machine | i386 | x86_64 | +---------------------------+-----------------------+-------------------------+ my.cnf for both servers [root@server1 ~]# cat /etc/my.cnf [mysqld] datadir=/var/lib/mysql socket=/var/lib/mysql/mysql.sock user=mysql # Disabling symbolic-links is recommended to prevent assorted security risks symbolic-links=0 [mysqld_safe] log-error=/var/log/mysqld.log pid-file=/var/run/mysqld/mysqld.pid innodb_strict_mode=on sql_mode=TRADITIONAL # sql_mode=STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE character-set-server=utf8 collation-server=utf8_general_ci log=/var/log/mysqld_all.log [root@server1 ~]# VPS SHOW VARIABLES Info Same as Laptop shown below but changes per above matrix (removed to allow me to be under the 30000 characters as required by ServerFault) Laptop SHOW VARIABLES Info auto_increment_increment 1 auto_increment_offset 1 autocommit ON automatic_sp_privileges ON back_log 50 basedir /usr/ big_tables OFF binlog_cache_size 32768 binlog_direct_non_transactional_updates OFF binlog_format STATEMENT bulk_insert_buffer_size 8388608 character_set_client utf8 character_set_connection utf8 character_set_database latin1 character_set_filesystem binary character_set_results utf8 character_set_server latin1 character_set_system utf8 character_sets_dir /usr/share/mysql/charsets/ collation_connection utf8_general_ci collation_database latin1_swedish_ci collation_server latin1_swedish_ci completion_type 0 concurrent_insert 1 connect_timeout 10 datadir /var/lib/mysql/ date_format %Y-%m-%d datetime_format %Y-%m-%d %H:%i:%s default_week_format 0 delay_key_write ON delayed_insert_limit 100 delayed_insert_timeout 300 
delayed_queue_size 1000 div_precision_increment 4 engine_condition_pushdown ON error_count 0 event_scheduler OFF expire_logs_days 0 flush OFF flush_time 0 foreign_key_checks ON ft_boolean_syntax + -><()~*:""&| ft_max_word_len 84 ft_min_word_len 4 ft_query_expansion_limit 20 ft_stopword_file (built-in) general_log OFF general_log_file /var/run/mysqld/mysqld.log group_concat_max_len 1024 have_community_features YES have_compress YES have_crypt YES have_csv YES have_dynamic_loading YES have_geometry YES have_innodb YES have_ndbcluster NO have_openssl DISABLED have_partitioning YES have_query_cache YES have_rtree_keys YES have_ssl DISABLED have_symlink DISABLED hostname server1.site2.com identity 0 ignore_builtin_innodb OFF init_connect init_file init_slave innodb_adaptive_hash_index ON innodb_additional_mem_pool_size 1048576 innodb_autoextend_increment 8 innodb_autoinc_lock_mode 1 innodb_buffer_pool_size 8388608 innodb_checksums ON innodb_commit_concurrency 0 innodb_concurrency_tickets 500 innodb_data_file_path ibdata1:10M:autoextend innodb_data_home_dir innodb_doublewrite ON innodb_fast_shutdown 1 innodb_file_io_threads 4 innodb_file_per_table OFF innodb_flush_log_at_trx_commit 1 innodb_flush_method innodb_force_recovery 0 innodb_lock_wait_timeout 50 innodb_locks_unsafe_for_binlog OFF innodb_log_buffer_size 1048576 innodb_log_file_size 5242880 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_max_dirty_pages_pct 90 innodb_max_purge_lag 0 innodb_mirrored_log_groups 1 innodb_open_files 300 innodb_rollback_on_timeout OFF innodb_stats_method nulls_equal innodb_stats_on_metadata ON innodb_support_xa ON innodb_sync_spin_loops 20 innodb_table_locks ON innodb_thread_concurrency 8 innodb_thread_sleep_delay 10000 innodb_use_legacy_cardinality_algorithm ON insert_id 0 interactive_timeout 28800 join_buffer_size 131072 keep_files_on_create OFF key_buffer_size 8384512 key_cache_age_threshold 300 key_cache_block_size 1024 key_cache_division_limit 100 language /usr/share/mysql/english/ large_files_support ON large_page_size 0 large_pages OFF last_insert_id 0 lc_time_names en_US license GPL local_infile ON locked_in_memory OFF log OFF log_bin OFF log_bin_trust_function_creators OFF log_bin_trust_routine_creators OFF log_error /var/log/mysqld.log log_output FILE log_queries_not_using_indexes OFF log_slave_updates OFF log_slow_queries OFF log_warnings 1 long_query_time 10.000000 low_priority_updates OFF lower_case_file_system OFF lower_case_table_names 0 max_allowed_packet 1048576 max_binlog_cache_size 18446744073709547520 max_binlog_size 1073741824 max_connect_errors 10 max_connections 151 max_delayed_threads 20 max_error_count 64 max_heap_table_size 16777216 max_insert_delayed_threads 20 max_join_size 18446744073709551615 max_length_for_sort_data 1024 max_long_data_size 1048576 max_prepared_stmt_count 16382 max_relay_log_size 0 max_seeks_for_key 18446744073709551615 max_sort_length 1024 max_sp_recursion_depth 0 max_tmp_tables 32 max_user_connections 0 max_write_lock_count 18446744073709551615 min_examined_row_limit 0 multi_range_count 256 myisam_data_pointer_size 6 myisam_max_sort_file_size 9223372036853727232 myisam_mmap_size 18446744073709551615 myisam_recover_options OFF myisam_repair_threads 1 myisam_sort_buffer_size 8388608 myisam_stats_method nulls_unequal myisam_use_mmap OFF net_buffer_length 16384 net_read_timeout 30 net_retry_count 10 net_write_timeout 60 new OFF old OFF old_alter_table OFF old_passwords OFF open_files_limit 1024 optimizer_prune_level 1 optimizer_search_depth 62 
optimizer_switch index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on pid_file /var/run/mysqld/mysqld.pid plugin_dir /usr/lib64/mysql/plugin port 3306 preload_buffer_size 32768 profiling OFF profiling_history_size 15 protocol_version 10 pseudo_thread_id 3 query_alloc_block_size 8192 query_cache_limit 1048576 query_cache_min_res_unit 4096 query_cache_size 0 query_cache_type ON query_cache_wlock_invalidate OFF query_prealloc_size 8192 rand_seed1 rand_seed2 range_alloc_block_size 4096 read_buffer_size 131072 read_only OFF read_rnd_buffer_size 262144 relay_log relay_log_index relay_log_info_file relay-log.info relay_log_purge ON relay_log_space_limit 0 report_host report_password report_port 3306 report_user rpl_recovery_rank 0 secure_auth OFF secure_file_priv server_id 0 skip_external_locking ON skip_name_resolve OFF skip_networking OFF skip_show_database OFF slave_compressed_protocol OFF slave_exec_mode STRICT slave_load_tmpdir /tmp slave_max_allowed_packet 1073741824 slave_net_timeout 3600 slave_skip_errors OFF slave_transaction_retries 10 slow_launch_time 2 slow_query_log OFF slow_query_log_file /var/run/mysqld/mysqld-slow.log socket /var/lib/mysql/mysql.sock sort_buffer_size 2097144 sql_auto_is_null ON sql_big_selects ON sql_big_tables OFF sql_buffer_result OFF sql_log_bin ON sql_log_off OFF sql_log_update ON sql_low_priority_updates OFF sql_max_join_size 18446744073709551615 sql_mode sql_notes ON sql_quote_show_create ON sql_safe_updates OFF sql_select_limit 18446744073709551615 sql_slave_skip_counter sql_warnings OFF ssl_ca ssl_capath ssl_cert ssl_cipher ssl_key storage_engine MyISAM sync_binlog 0 sync_frm ON system_time_zone PDT table_definition_cache 256 table_lock_wait_timeout 50 table_open_cache 64 table_type MyISAM thread_cache_size 0 thread_handling one-thread-per-connection thread_stack 262144 time_format %H:%i:%s time_zone SYSTEM timed_mutexes OFF timestamp 1372254399 tmp_table_size 16777216 tmpdir /tmp transaction_alloc_block_size 8192 transaction_prealloc_size 4096 tx_isolation REPEATABLE-READ unique_checks ON updatable_views_with_limit YES version 5.1.69 version_comment Source distribution version_compile_machine x86_64 version_compile_os redhat-linux-gnu wait_timeout 28800 warning_count 0 VPS Sysbench Info Deleted to stay under 30000 characters. Laptop Sysbench Info [root@server1 ~]# cat sysbench.txt sysbench 0.4.12: multi-threaded system evaluation benchmark Running the test with following options: Number of threads: 8 Doing OLTP test. Running mixed OLTP test Doing read-only test Using Special distribution (12 iterations, 1 pct of values are returned in 75 pct cases) Using "BEGIN" for starting transactions Using auto_inc on the id column Threads started! Time limit exceeded, exiting... (last message repeated 7 times) Done. OLTP test statistics: queries performed: read: 634718 write: 0 other: 90674 total: 725392 transactions: 45337 (755.56 per sec.) deadlocks: 0 (0.00 per sec.) read/write requests: 634718 (10577.78 per sec.) other operations: 90674 (1511.11 per sec.) Test execution summary: total time: 60.0048s total number of events: 45337 total time taken by event execution: 479.4912 per-request statistics: min: 2.04ms avg: 10.58ms max: 85.56ms approx. 
95 percentile: 19.70ms Threads fairness: events (avg/stddev): 5667.1250/42.18 execution time (avg/stddev): 59.9364/0.00 [root@server1 ~]# VPS File Info [root@vps ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/simfs simfs 20971520 16187440 4784080 78% / none tmpfs 6224432 4 6224428 1% /dev none tmpfs 6224432 0 6224432 0% /dev/shm [root@vps ~]# Laptop File Info [root@server1 ~]# df -T Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/mapper/vg_server1-lv_root ext4 72383800 4243964 64462860 7% / tmpfs tmpfs 956352 0 956352 0% /dev/shm /dev/sdb1 ext4 495844 60948 409296 13% /boot [root@server1 ~]# VPS CPU Info Removed to stay under the 30000 character limit required by ServerFault Laptop CPU Info [root@server1 ~]# cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: processor : 1 vendor_id : GenuineIntel cpu family : 6 model : 15 model name : Intel(R) Core(TM)2 Duo CPU T7100 @ 1.80GHz stepping : 13 cpu MHz : 800.000 cache size : 2048 KB physical id : 0 siblings : 2 core id : 1 cpu cores : 2 apicid : 1 initial apicid : 1 fpu : yes fpu_exception : yes cpuid level : 10 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm ida dts tpr_shadow vnmi flexpriority bogomips : 3591.39 clflush size : 64 cache_alignment : 64 address sizes : 36 bits physical, 48 bits virtual power management: [root@server1 ~]# EDIT New Info requested by shakalandy [root@localhost ~]# cat /proc/meminfo MemTotal: 2044804 kB MemFree: 761464 kB Buffers: 68868 kB Cached: 369708 kB SwapCached: 0 kB Active: 881080 kB Inactive: 246016 kB Active(anon): 688312 kB Inactive(anon): 4416 kB Active(file): 192768 kB Inactive(file): 241600 kB Unevictable: 0 kB Mlocked: 0 kB SwapTotal: 4095992 kB SwapFree: 4095992 kB Dirty: 0 kB Writeback: 0 kB AnonPages: 688428 kB Mapped: 65156 kB Shmem: 4216 kB Slab: 92428 kB SReclaimable: 31260 kB SUnreclaim: 61168 kB KernelStack: 2392 kB PageTables: 28356 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 5118392 kB Committed_AS: 1530212 kB VmallocTotal: 34359738367 kB VmallocUsed: 343604 kB VmallocChunk: 34359372920 kB HardwareCorrupted: 0 kB AnonHugePages: 520192 kB HugePages_Total: 0 HugePages_Free: 0 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 8556 kB DirectMap2M: 2078720 kB [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501360 ? 
Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3036 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14449 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# ps aux | grep mysql root 2227 0.0 0.0 108332 1504 ? S 07:36 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/localhost.badobe.com.pid mysql 2319 0.1 24.5 1470068 501356 ? Sl 07:36 0:57 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/localhost.badobe.com.err --pid-file=/var/lib/mysql/localhost.badobe.com.pid root 3579 0.0 0.1 201840 3028 pts/0 S+ 07:40 0:00 mysql -u root -p root 13887 0.0 0.1 201840 3048 pts/3 S+ 18:08 0:00 mysql -uroot -px xxxxxxxxxx root 14470 0.0 0.0 103248 840 pts/2 S+ 18:16 0:00 grep mysql [root@localhost ~]# vmstat 1 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 0 0 742172 76376 371064 0 0 6 6 78 202 2 1 97 1 0 0 0 0 742164 76380 371060 0 0 0 16 191 467 2 1 93 5 0 0 0 0 742164 76380 371064 0 0 0 0 148 388 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 418 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 145 380 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 166 429 2 1 97 0 0 1 0 0 742164 76380 371064 0 0 0 0 148 373 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 149 382 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 168 408 2 0 97 0 0 0 0 0 742164 76380 371064 0 0 0 0 165 394 2 1 98 0 0 0 0 0 742164 76380 371064 0 0 0 0 159 354 2 1 98 0 0 0 0 0 742164 76388 371060 0 0 0 16 180 447 2 0 91 6 0 0 0 0 742164 76388 371064 0 0 0 0 143 344 2 1 98 0 0 0 1 0 742784 76416 370044 0 0 28 580 360 678 3 1 74 23 0 1 0 0 744768 76496 367772 0 0 40 1036 437 865 3 1 53 43 0 0 1 0 747248 76596 365412 0 0 48 1224 561 923 3 2 53 43 0 0 1 0 749232 76696 363092 0 0 32 1132 512 883 3 2 52 44 0 0 1 0 751340 76772 361020 0 0 32 1008 472 872 2 1 52 45 0 0 1 0 753448 76840 358540 0 0 36 1088 512 860 2 1 51 46 0 0 1 0 755060 76936 357636 0 0 28 1012 481 922 2 2 52 45 0 0 1 0 755060 77064 357988 0 0 12 896 444 902 2 1 53 45 0 0 1 0 754688 77148 358448 0 0 16 1096 506 1007 1 1 56 42 0 0 2 0 754192 77268 358932 0 0 12 1060 481 957 1 2 53 44 0 0 1 0 753696 77380 359392 0 0 12 1052 512 1025 2 1 55 42 0 0 1 0 751028 77480 359828 0 0 8 984 423 909 2 2 52 45 0 0 1 0 750524 77620 360200 0 0 8 788 367 869 1 2 54 44 0 0 1 0 749904 77700 360664 0 0 8 928 439 924 2 2 55 43 0 0 1 0 749408 77796 361084 0 0 12 976 468 967 1 1 56 43 0 0 1 0 748788 77896 361464 0 0 12 992 453 944 1 2 54 43 0 1 1 0 748416 77992 361996 0 0 12 784 392 868 2 1 52 46 0 0 1 0 747920 78092 362336 0 0 4 896 382 874 1 1 52 46 0 0 1 0 745252 78172 362780 0 0 12 1040 444 923 1 1 56 42 0 0 1 0 744764 78288 363220 0 0 8 1024 448 934 2 1 55 43 0 0 1 0 744144 78408 363668 0 0 8 1000 461 982 2 1 53 44 0 0 1 0 743648 78488 364148 0 0 8 872 443 888 2 1 54 43 0 0 1 0 743152 78548 364468 0 0 16 1020 511 995 2 1 55 43 0 0 1 0 742656 78632 365024 0 0 12 928 431 913 1 2 53 44 0 0 1 0 742160 78728 365468 0 0 12 996 470 955 2 2 54 44 0 1 1 0 739492 78840 365896 0 0 8 988 447 939 1 2 52 46 0 0 1 0 738872 78996 366352 0 0 12 972 442 928 1 1 55 44 0 1 1 0 738244 79148 366812 0 0 8 948 549 1126 2 2 54 43 0 0 1 
0 737624 79312 367188 0 0 12 996 456 953 2 2 54 43 0 0 1 0 736880 79456 367660 0 0 12 960 444 918 1 1 53 46 0 0 1 0 736260 79584 368124 0 0 8 884 414 921 1 1 54 44 0 0 1 0 735648 79716 368488 0 0 12 976 450 955 2 1 56 41 0 0 1 0 733104 79840 368988 0 0 12 932 453 918 1 2 55 43 0 0 1 0 732608 79996 369356 0 0 16 916 444 889 1 2 54 43 0 1 1 0 731476 80128 369800 0 0 16 852 514 978 2 2 54 43 0 0 1 0 731244 80252 370200 0 0 8 904 398 870 2 1 55 43 0 1 1 0 730624 80384 370612 0 0 12 1032 447 977 1 2 57 41 0 0 1 0 730004 80524 371096 0 0 12 984 469 941 2 2 52 45 0 0 1 0 729508 80636 371544 0 0 12 928 438 922 2 1 52 46 0 0 1 0 728888 80756 371948 0 0 16 972 439 943 2 1 55 43 0 0 1 0 726468 80900 372272 0 0 8 960 545 1024 2 1 54 43 0 1 1 0 726344 81024 372272 0 0 8 464 490 1057 1 2 53 44 0 0 1 0 726096 81148 372276 0 0 4 328 441 1063 2 1 53 45 0 1 1 0 726096 81256 372292 0 0 0 296 387 975 1 1 53 45 0 0 1 0 725848 81380 372284 0 0 4 332 425 1034 2 1 54 44 0 1 1 0 725848 81496 372300 0 0 4 308 386 992 2 1 54 43 0 0 1 0 725600 81616 372296 0 0 4 328 404 1060 1 1 54 44 0 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st 0 1 0 725600 81732 372296 0 0 4 328 439 1011 1 1 53 44 0 0 1 0 725476 81848 372308 0 0 0 316 441 1023 2 2 52 46 0 1 1 0 725352 81972 372300 0 0 4 344 451 1021 1 1 55 43 0 2 1 0 725228 82088 372320 0 0 0 328 427 1058 1 1 54 44 0 1 1 0 724980 82220 372300 0 0 4 336 419 999 2 1 54 44 0 1 1 0 724980 82328 372320 0 0 4 320 430 1019 1 1 54 44 0 1 1 0 724732 82436 372328 0 0 0 388 363 942 2 1 54 44 0 1 1 0 724608 82560 372312 0 0 4 308 419 993 1 2 54 44 0 1 0 0 724360 82684 372320 0 0 0 304 421 1028 2 1 55 42 0 1 0 0 724360 82684 372388 0 0 0 0 158 416 2 1 98 0 0 1 1 0 724236 82720 372360 0 0 0 6464 243 855 3 2 84 12 0 1 0 0 724112 82748 372360 0 0 0 5356 266 895 3 1 84 12 0 2 1 0 724112 82764 372380 0 0 0 3052 221 511 2 2 93 4 0 1 0 0 724112 82796 372372 0 0 0 4548 325 1067 2 2 81 16 0 1 0 0 724112 82816 372368 0 0 0 3240 259 829 3 1 90 6 0 1 0 0 724112 82836 372380 0 0 0 3260 309 822 3 2 88 8 0 1 1 0 724112 82876 372364 0 0 0 4680 326 978 3 1 77 19 0 1 0 0 724112 82884 372380 0 0 0 512 207 508 2 1 95 2 0 1 0 0 724112 82884 372388 0 0 0 0 138 361 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 158 397 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 146 395 2 1 98 0 0 2 0 0 724112 82884 372388 0 0 0 0 160 395 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 163 382 1 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 176 422 2 1 98 0 0 1 0 0 724112 82884 372388 0 0 0 0 134 351 2 1 98 0 0 0 0 0 724112 82884 372388 0 0 0 0 190 429 2 1 97 0 0 0 0 0 724104 82884 372392 0 0 0 0 139 358 2 1 98 0 0 0 0 0 724848 82884 372392 0 0 0 4 211 432 2 1 97 0 0 1 0 0 724980 82884 372392 0 0 0 0 166 370 2 1 98 0 0 0 0 0 724980 82884 372392 0 0 0 0 164 397 2 1 98 0 0 ^C [root@localhost ~]# Database size mysql> SELECT table_schema "Data Base Name", sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB", sum( data_free )/ 1024 / 1024 "Free Space in MB" FROM information_schema.TABLES GROUP BY table_schema; +--------------------+----------------------+------------------+ | Data Base Name | Data Base Size in MB | Free Space in MB | +--------------------+----------------------+------------------+ | bidjunction | 4.68750000 | 0.00000000 | | information_schema | 0.00976563 | 0.00000000 | | mysql | 0.63899899 | 0.00105286 | +--------------------+----------------------+------------------+ 3 rows in set (0.01 sec) mysql> Before Query 
mysql> SHOW SESSION STATUS like '%Tmp%'; +-------------------------+-------+ | Variable_name | Value | +-------------------------+-------+ | Created_tmp_disk_tables | 0 | | Created_tmp_files | 6 | | Created_tmp_tables | 0 | +-------------------------+-------+ 3 rows in set (0.00 sec) mysql> After Query mysql> SHOW SESSION STATUS like '%Tmp%'; +-------------------------+-------+ | Variable_name | Value | +-------------------------+-------+ | Created_tmp_disk_tables | 0 | | Created_tmp_files | 6 | | Created_tmp_tables | 2 | +-------------------------+-------+ 3 rows in set (0.00 sec) mysql>
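    A quick way to make the before/after comparison above repeatable is to capture the Created_tmp_* counters programmatically around the query under test. The following is a minimal JDBC sketch of that idea and is not part of the original question: the class name, connection URL, credentials and test query are placeholders, and it assumes MySQL Connector/J is on the classpath.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.util.HashMap;
        import java.util.Map;

        public class TmpTableCheck {

            // Reads the session-level Created_tmp_* counters on the given connection.
            static Map<String, Long> tmpCounters(Connection conn) throws Exception {
                Map<String, Long> counters = new HashMap<>();
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SHOW SESSION STATUS LIKE 'Created_tmp%'")) {
                    while (rs.next()) {
                        counters.put(rs.getString(1), rs.getLong(2));
                    }
                }
                return counters;
            }

            public static void main(String[] args) throws Exception {
                // Placeholder URL, credentials and query: adjust to the server under test.
                String url = "jdbc:mysql://localhost:3306/bidjunction";
                try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                    Map<String, Long> before = tmpCounters(conn);

                    // The query whose temp-table behaviour is being measured.
                    try (Statement st = conn.createStatement();
                         ResultSet rs = st.executeQuery("SELECT * FROM some_table ORDER BY some_column")) {
                        while (rs.next()) { /* consume the rows */ }
                    }

                    Map<String, Long> after = tmpCounters(conn);
                    for (String name : after.keySet()) {
                        System.out.printf("%s: +%d%n", name, after.get(name) - before.getOrDefault(name, 0L));
                    }
                }
            }
        }

    Because SHOW SESSION STATUS is scoped to the current session, both counter reads and the query itself must run on the same Connection for the difference to be meaningful.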

    Read the article

  • Delphi Editbox causing unexplainable errors...

    - by NeoNMD
    On a form I have 8 edit boxes that I do the exact same thing with. They are arranged in 2 sets of 4, one is Upper and the other is Lower. I kept getting errors clearing all the edit boxes so I went through clearing them 1 by 1 and found that 1 of the edit boxes just didnt work and when I tried to run the program and change that exit box it caused the debugger to jump to a point in the code with the database (even though the edit boxes had nothing to do with the database and they arent in a procedure or stack with a database in it) and say the program has access violation there. So I then removed all mention of that edit box and the code worked perfectly again, so I deleted that edit box, copied and pasted another edit box and left all values the same, then went through my code and copied the code from the other sections and simply renamed it for the new Edit box and it STILL causes an error even though it is entirely new. I cannot figure it out so I ask you all, what the hell? The editbox in question is "Edit1" unit DefinitionCoreV2; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, SQLiteTable3, StdCtrls; type TDefinitionFrm = class(TForm) GrpCompetition: TGroupBox; CmbCompSele: TComboBox; BtnCompetitionAdd: TButton; BtnCompetitionUpdate: TButton; BtnCompetitionRevert: TButton; GrpCompetitionDetails: TGroupBox; LblCompetitionIDTitle: TLabel; EdtCompID: TEdit; LblCompetitionDescriptionTitle: TLabel; EdtCompDesc: TEdit; LblCompetitionNotesTitle: TLabel; EdtCompNote: TEdit; LblCompetitionLocationTitle: TLabel; EdtCompLoca: TEdit; BtnCompetitionDelete: TButton; GrpSection: TGroupBox; LblSectionID: TLabel; LblGender: TLabel; LblAge: TLabel; LblLevel: TLabel; LblWeight: TLabel; LblType: TLabel; LblHeight: TLabel; LblCompetitionID: TLabel; BtnSectionAdd: TButton; EdtSectionID: TEdit; CmbGender: TComboBox; BtnSectionUpdate: TButton; BtnSectionRevert: TButton; CmbAgeRange: TComboBox; CmbLevelRange: TComboBox; CmbType: TComboBox; CmbWeightRange: TComboBox; CmbHeightRange: TComboBox; EdtSectCompetitionID: TEdit; BtnSectionDelete: TButton; GrpSectionDetails: TGroupBox; EdtLowerAge: TEdit; EdtLowerWeight: TEdit; EdtLowerHeight: TEdit; EdtUpperAge: TEdit; EdtUpperLevel: TEdit; EdtUpperWeight: TEdit; EdtUpperHeight: TEdit; LblAgeRule: TLabel; LblLevelRule: TLabel; LblWeightRule: TLabel; LblHeightRule: TLabel; LblCompetitionSelect: TLabel; LblSectionSelect: TLabel; CmbSectSele: TComboBox; Edit1: TEdit; procedure FormCreate(Sender: TObject); procedure BtnCompetitionAddClick(Sender: TObject); procedure CmbCompSeleChange(Sender: TObject); procedure BtnCompetitionUpdateClick(Sender: TObject); procedure BtnCompetitionRevertClick(Sender: TObject); procedure BtnCompetitionDeleteClick(Sender: TObject); procedure CmbSectSeleChange(Sender: TObject); procedure BtnSectionAddClick(Sender: TObject); procedure BtnSectionUpdateClick(Sender: TObject); procedure BtnSectionRevertClick(Sender: TObject); procedure BtnSectionDeleteClick(Sender: TObject); procedure CmbAgeRangeChange(Sender: TObject); procedure CmbLevelRangeChange(Sender: TObject); procedure CmbWeightRangeChange(Sender: TObject); procedure CmbHeightRangeChange(Sender: TObject); private procedure UpdateCmbCompSele; procedure AddComp; procedure RevertComp; procedure AddSect; procedure RevertSect; procedure UpdateCmbSectSele; procedure ClearSect; { Private declarations } public { Public declarations } end; var DefinitionFrm: TDefinitionFrm; implementation {$R *.dfm} procedure TDefinitionFrm.UpdateCmbCompSele; 
var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; sCompTitle : string; bNext : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM CompetitionTable'); try CmbCompSele.Items.Clear; Repeat begin sCompTitle:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID'])+':'+sltb.FieldAsString(sltb.FieldIndex['Description']); CmbCompSele.Items.Add(sCompTitle); bNext := sltb.Next; end; Until sltb.EOF; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.UpdateCmbSectSele; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; sSQL : string; sSectTitle : string; bNext : boolean; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID = '+EdtCompID.text); If sltb.RowCount =0 then begin sltb := slDb.GetTable('SELECT * FROM SectionTable'); bLast:= sltb.MoveLast; sSQL := 'INSERT INTO SectionTable(SectionID,CompetitionID,Gender,Type) VALUES ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['SectionID'])+1)+','+EdtCompID.text+',1,1)'; sldb.ExecSQL(sSQL); sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID = '+EdtCompID.text); end; try CmbSectSele.Items.Clear; Repeat begin sSectTitle:=sltb.FieldAsString(sltb.FieldIndex['SectionID'])+':'+sltb.FieldAsString(sltb.FieldIndex['Type'])+':'+sltb.FieldAsString(sltb.FieldIndex['Gender'])+':'+sltb.FieldAsString(sltb.FieldIndex['Age'])+':'+sltb.FieldAsString(sltb.FieldIndex['Level'])+':'+sltb.FieldAsString(sltb.FieldIndex['Weight'])+':'+sltb.FieldAsString(sltb.FieldIndex['Height']); CmbSectSele.Items.Add(sSectTitle); //CmbType.Items.Strings[sltb.FieldAsInteger(sltb.FieldIndex['Type'])] Works but has logic errors bNext := sltb.Next; end; Until sltb.EOF; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.AddComp; var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM CompetitionTable'); try bLast:= sltb.MoveLast; sSQL := 'INSERT INTO CompetitionTable(CompetitionID,Description) VALUES ('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['CompetitionID'])+1)+',"New Competition")'; sldb.ExecSQL(sSQL); finally sltb.Free; end; finally sldb.Free; end; UpdateCmbCompSele; end; procedure TDefinitionFrm.AddSect; var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; bLast : boolean; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sltb := slDb.GetTable('SELECT * FROM SectionTable'); try bLast:= sltb.MoveLast; sSQL := 'INSERT INTO SectionTable(SectionID,CompetitionID,Gender,Type) VALUES 
('+IntToStr(sltb.FieldAsInteger(sltb.FieldIndex['SectionID'])+1)+','+EdtCompID.text+',1,1)'; sldb.ExecSQL(sSQL); finally sltb.Free; end; finally sldb.Free; end; UpdateCmbSectSele; end; procedure TDefinitionFrm.RevertComp; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; iID : integer; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try If CmbCompSele.Text <> '' then begin iID := StrToInt(Copy(CmbCompSele.Text,0,Pos(':',CmbCompSele.Text)-1)); sltb := slDb.GetTable('SELECT * FROM CompetitionTable WHERE CompetitionID='+IntToStr(iID))//ItemIndex starts at 0, CompID at 1 end else sltb := slDb.GetTable('SELECT * FROM CompetitionTable WHERE CompetitionID=1'); try EdtCompID.Text:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID']); EdtCompLoca.Text:=sltb.FieldAsString(sltb.FieldIndex['Location']); EdtCompDesc.Text:=sltb.FieldAsString(sltb.FieldIndex['Description']); EdtCompNote.Text:=sltb.FieldAsString(sltb.FieldIndex['Notes']); finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.RevertSect; var slDBpath: string; sldb : TSQLiteDatabase; sltb : TSQLiteTable; iID : integer; sTemp : string; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try If CmbCompSele.Text <> '' then begin iID := StrToInt(Copy(CmbSectSele.Text,0,Pos(':',CmbSectSele.Text)-1)); sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE SectionID='+IntToStr(iID));//ItemIndex starts at 0, CompID at 1 end else sltb := slDb.GetTable('SELECT * FROM SectionTable WHERE CompetitionID='+EdtCompID.Text); try EdtSectionID.Text:=sltb.FieldAsString(sltb.FieldIndex['SectionID']); EdtSectCompetitionID.Text:=sltb.FieldAsString(sltb.FieldIndex['CompetitionID']); Case sltb.FieldAsInteger(sltb.FieldIndex['Type']) of 1 : CmbType.ItemIndex:=0; 2 : CmbType.ItemIndex:=0; 3 : CmbType.ItemIndex:=1; 4 : CmbType.ItemIndex:=1; end; Case sltb.FieldAsInteger(sltb.FieldIndex['Gender']) of 1 : CmbGender.ItemIndex:=0; 2 : CmbGender.ItemIndex:=1; 3 : CmbGender.ItemIndex:=2; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Age']); if sTemp <> '' then begin //Decode end else begin LblAgeRule.Hide; EdtLowerAge.Text :=''; EdtLowerAge.Hide; EdtUpperAge.Text :=''; EdtUpperAge.Hide; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Level']); if sTemp <> '' then begin //Decode end else begin LblLevelRule.Hide; Edit1.Text :=''; Edit1.Hide; EdtUpperLevel.Text :=''; EdtUpperLevel.Hide; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Weight']); if sTemp <> '' then begin //Decode end else begin LblWeightRule.Hide; EdtLowerWeight.Text :=''; EdtLowerWeight.Hide; EdtUpperWeight.Text :=''; EdtUpperWeight.Hide; end; sTemp := sltb.FieldAsString(sltb.FieldIndex['Height']); if sTemp <> '' then begin //Decode end else begin LblHeightRule.Hide; EdtLowerHeight.Text :=''; EdtLowerHeight.Hide; EdtUpperHeight.Text :=''; EdtUpperHeight.Hide; end; finally sltb.Free; end; finally sldb.Free; end; end; procedure TDefinitionFrm.BtnCompetitionAddClick(Sender: TObject); begin AddComp end; procedure TDefinitionFrm.ClearSect; begin CmbSectSele.Clear; EdtSectionID.Text:=''; EdtSectCompetitionID.Text:=''; CmbType.Clear; CmbGender.Clear; CmbAgeRange.Clear; EdtLowerAge.Text:=''; 
EdtUpperAge.Text:=''; CmbLevelRange.Clear; Edit1.Text:=''; EdtUpperLevel.Text:=''; CmbWeightRange.Clear; EdtLowerWeight.Text:=''; EdtUpperWeight.Text:=''; CmbHeightRange.Clear; EdtLowerHeight.Text:=''; EdtUpperHeight.Text:=''; end; procedure TDefinitionFrm.CmbCompSeleChange(Sender: TObject); begin If CmbCompSele.ItemIndex <> -1 then begin RevertComp; GrpSection.Enabled:=True; CmbSectSele.Clear; ClearSect; UpdateCmbSectSele; end; end; procedure TDefinitionFrm.BtnCompetitionUpdateClick(Sender: TObject); var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sSQL:= 'UPDATE CompetitionTable SET Description="'+EdtCompDesc.Text+'",Location="'+EdtCompLoca.Text+'",Notes="'+EdtCompNote.Text+'" WHERE CompetitionID ="'+EdtCompID.Text+'";'; sldb.ExecSQL(sSQL); finally sldb.Free; end; end; procedure TDefinitionFrm.BtnCompetitionRevertClick(Sender: TObject); begin RevertComp; end; procedure TDefinitionFrm.BtnCompetitionDeleteClick(Sender: TObject); var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; iID : integer; begin If CmbCompSele.Text <> '' then begin If (CmbCompSele.Text[1] ='1') and (CmbCompSele.Text[2] =':') then begin MessageDlg('Deleting the last record is a very bad idea :/',mtInformation,[mbOK],0); end else begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try iID := StrToInt(Copy(CmbCompSele.Text,0,Pos(':',CmbCompSele.Text)-1)); sSQL:= 'DELETE FROM SectionTable WHERE CompetitionID='+IntToStr(iID)+';'; sldb.ExecSQL(sSQL); sSQL:= 'DELETE FROM CompetitionTable WHERE CompetitionID='+IntToStr(iID)+';'; sldb.ExecSQL(sSQL); finally sldb.Free; end; CmbCompSele.ItemIndex:=0; UpdateCmbCompSele; RevertComp; CmbCompSele.Text:='Select Competition'; end; end; end; procedure TDefinitionFrm.FormCreate(Sender: TObject); begin UpdateCmbCompSele; end; procedure TDefinitionFrm.CmbSectSeleChange(Sender: TObject); begin RevertSect; end; procedure TDefinitionFrm.BtnSectionAddClick(Sender: TObject); begin AddSect; end; procedure TDefinitionFrm.BtnSectionUpdateClick(Sender: TObject); //change fields values var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; iTypeCode : integer; iGenderCode : integer; sAgeStr, sLevelStr, sWeightStr, sHeightStr : string; begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try If CmbType.Text='Fighting' then iTypeCode := 1 else iTypeCode := 3; If CmbGender.Text='Male' then iGenderCode := 1 else if CmbGender.Text='Female' then iGenderCode := 2 else iGenderCode := 3; Case CmbAgeRange.ItemIndex of 0:sAgeStr := 'o-'+EdtLowerAge.Text; 1:sAgeStr := 'u-'+EdtLowerAge.Text; 2:sAgeStr := EdtLowerAge.Text+'-'+EdtUpperAge.Text; end; Case CmbLevelRange.ItemIndex of 0:sLevelStr := 'o-'+Edit1.Text; 1:sLevelStr := 'u-'+Edit1.Text; 2:sLevelStr := Edit1.Text+'-'+EdtUpperLevel.Text; end; Case CmbWeightRange.ItemIndex of 0:sWeightStr := 'o-'+EdtLowerWeight.Text; 1:sWeightStr := 'u-'+EdtLowerWeight.Text; 2:sWeightStr := EdtLowerWeight.Text+'-'+EdtUpperWeight.Text; end; Case 
CmbHeightRange.ItemIndex of 0:sHeightStr := 'o-'+EdtLowerHeight.Text; 1:sHeightStr := 'u-'+EdtLowerHeight.Text; 2:sHeightStr := EdtLowerHeight.Text+'-'+EdtUpperHeight.Text; end; sSQL:= 'UPDATE SectionTable SET Type="'+IntToStr(iTypeCode)+'",Gender="'+IntToStr(iGenderCode)+'" WHERE SectionID ="'+EdtSectionID.Text+'";'; sldb.ExecSQL(sSQL); finally sldb.Free; end; end; procedure TDefinitionFrm.BtnSectionRevertClick(Sender: TObject); begin RevertSect; end; procedure TDefinitionFrm.BtnSectionDeleteClick(Sender: TObject); var slDBpath: string; sSQL : string; sldb : TSQLiteDatabase; begin If CmbSectSele.Text[1] ='1' then begin MessageDlg('Deleting the last record is a very bad idea :/',mtInformation,[mbOK],0); end else begin slDBPath := ExtractFilepath(application.exename)+ 'Competitions.db'; if not FileExists(slDBPath) then begin MessageDlg('Competitions.db does not exist.',mtInformation,[mbOK],0); exit; end; sldb := TSQLiteDatabase.Create(slDBPath); try sSQL:= 'DELETE FROM SectionTable WHERE SectionID='+CmbSectSele.Text[1]+';'; sldb.ExecSQL(sSQL); finally sldb.Free; end; CmbSectSele.ItemIndex:=0; UpdateCmbSectSele; RevertSect; CmbSectSele.Text:='Select Competition'; end; end; procedure TDefinitionFrm.CmbAgeRangeChange(Sender: TObject); begin Case CmbAgeRange.ItemIndex of 0: begin EdtLowerAge.Show; LblAgeRule.Caption:='Over and including'; LblAgeRule.Show; EdtUpperAge.Hide; end; 1: begin EdtLowerAge.Show; LblAgeRule.Caption:='Under and including'; LblAgeRule.Show; EdtUpperAge.Hide; end; 2: begin EdtLowerAge.Show; LblAgeRule.Caption:='LblAgeRule'; LblAgeRule.Hide; EdtUpperAge.Show; end; end; end; procedure TDefinitionFrm.CmbLevelRangeChange(Sender: TObject); begin Case CmbLevelRange.ItemIndex of 0: begin Edit1.Show; LblLevelRule.Caption:='Over and including'; LblLevelRule.Show; EdtUpperLevel.Hide; end; 1: begin Edit1.Show; LblLevelRule.Caption:='Under and including'; LblLevelRule.Show; EdtUpperLevel.Hide; end; 2: begin Edit1.Show; LblLevelRule.Caption:='LblLevelRule'; LblLevelRule.Hide; EdtUpperLevel.Show; end; end; end; procedure TDefinitionFrm.CmbWeightRangeChange(Sender: TObject); begin Case CmbWeightRange.ItemIndex of 0: begin EdtLowerWeight.Show; LblWeightRule.Caption:='Over and including'; LblWeightRule.Show; EdtUpperWeight.Hide; end; 1: begin EdtLowerWeight.Show; LblWeightRule.Caption:='Under and including'; LblWeightRule.Show; EdtUpperWeight.Hide; end; 2: begin EdtLowerWeight.Show; LblWeightRule.Caption:='LblWeightRule'; LblWeightRule.Hide; EdtUpperWeight.Show; end; end; end; procedure TDefinitionFrm.CmbHeightRangeChange(Sender: TObject); begin Case CmbHeightRange.ItemIndex of 0: begin EdtLowerHeight.Show; LblHeightRule.Caption:='Over and including'; LblHeightRule.Show; EdtUpperHeight.Hide; end; 1: begin EdtLowerHeight.Show; LblHeightRule.Caption:='Under and including'; LblHeightRule.Show; EdtUpperHeight.Hide; end; 2: begin EdtLowerHeight.Show; LblHeightRule.Caption:='LblHeightRule'; LblHeightRule.Hide; EdtUpperHeight.Show; end; end; end; end.

    Read the article

  • Connecting to SQL Server in PHP - Extension Error

    - by John Doe
    <html> <head> <title>Connecting </title> </head> <body> <?php $host = "*.*.*.*"; $username = "xxx"; $password = "xxx"; $db_name = "xxx"; $db = mssql_connect($host, $username,$password) or die("Couldnt Connect"); $selected = mssql_select_db($db_name, $db) or die("Couldnt open database"); ?> </body> </html> My error message is: Fatal error: Call to undefined function mssql_connect() in C:\wamp\www\php\dbase.php on line 12 I am using WampServer 2.0 on Php 5.3.0 When I check the extensions, php_mssql is Checked. I also checked the php.ini file to make sure it is not commented out. I have my file dbase.php saved in C:\wamp\www\php. I have tried stopping the service, closing everything, and running it again. I know the problem is that the extension file is not being included somehow. The below is copied from my php.ini file. Note I made all http = /http to avoid posting Links. ;;;;;;;;;;;;;;;;;;;;;;;;; ; Paths and Directories ; ;;;;;;;;;;;;;;;;;;;;;;;;; ; UNIX: "/path1:/path2" ;include_path = ".:/php/includes" ; Windows: "\path1;\path2" include_path = "C:\wamp\bin\php\php5.3.0\ext" ; ; PHP's default setting for include_path is ".;/path/to/php/pear" ; /http://php.net/include-path ; The root of the PHP pages, used only if nonempty. ; if PHP was not compiled with FORCE_REDIRECT, you SHOULD set doc_root ; if you are running php as a CGI under any web server (other than IIS) ; see documentation for security issues. The alternate is to use the ; cgi.force_redirect configuration below ; /http://php.net/doc-root doc_root = ; The directory under which PHP opens the script using /~username used only ; if nonempty. ; /http://php.net/user-dir user_dir = ; Directory in which the loadable extensions (modules) reside. ; /http://php.net/extension-dir ; extension_dir = "./" ; On windows: ; extension_dir = "ext" extension_dir = "c:/wamp/bin/php/php5.3.0/ext/" ; Whether or not to enable the dl() function. The dl() function does NOT work ; properly in multithreaded servers, such as IIS or Zeus, and is automatically ; disabled on them. ; /http://php.net/enable-dl enable_dl = Off ; cgi.force_redirect is necessary to provide security running PHP as a CGI under ; most web servers. Left undefined, PHP turns this on by default. You can ; turn it off here AT YOUR OWN RISK ; You CAN safely turn this off for IIS, in fact, you MUST. ; /http://php.net/cgi.force-redirect ;cgi.force_redirect = 1 ; if cgi.nph is enabled it will force cgi to always sent Status: 200 with ; every request. PHP's default behavior is to disable this feature. ;cgi.nph = 1 ; if cgi.force_redirect is turned on, and you are not running under Apache or Netscape ; (iPlanet) web servers, you MAY need to set an environment variable name that PHP ; will look for to know it is OK to continue execution. Setting this variable MAY ; cause security issues, KNOW WHAT YOU ARE DOING FIRST. ; /http://php.net/cgi.redirect-status-env ;cgi.redirect_status_env = ; ; cgi.fix_pathinfo provides real PATH_INFO/PATH_TRANSLATED support for CGI. PHP's ; previous behaviour was to set PATH_TRANSLATED to SCRIPT_FILENAME, and to not grok ; what PATH_INFO is. For more information on PATH_INFO, see the cgi specs. Setting ; this to 1 will cause PHP CGI to fix its paths to conform to the spec. A setting ; of zero causes PHP to behave as before. Default is 1. You should fix your scripts ; to use SCRIPT_FILENAME rather than PATH_TRANSLATED. 
; /http://php.net/cgi.fix-pathinfo ;cgi.fix_pathinfo=1 ; FastCGI under IIS (on WINNT based OS) supports the ability to impersonate ; security tokens of the calling client. This allows IIS to define the ; security context that the request runs under. mod_fastcgi under Apache ; does not currently support this feature (03/17/2002) ; Set to 1 if running under IIS. Default is zero. ; /http://php.net/fastcgi.impersonate ;fastcgi.impersonate = 1; ; Disable logging through FastCGI connection. PHP's default behavior is to enable ; this feature. ;fastcgi.logging = 0 ; cgi.rfc2616_headers configuration option tells PHP what type of headers to ; use when sending HTTP response code. If it's set 0 PHP sends Status: header that ; is supported by Apache. When this option is set to 1 PHP will send ; RFC2616 compliant header. ; Default is zero. ; /http://php.net/cgi.rfc2616-headers ;cgi.rfc2616_headers = 0 ;;;;;;;;;;;;;;;; ; File Uploads ; ;;;;;;;;;;;;;;;; ; Whether to allow HTTP file uploads. ; /http://php.net/file-uploads file_uploads = On ; Temporary directory for HTTP uploaded files (will use system default if not ; specified). ; /http://php.net/upload-tmp-dir upload_tmp_dir = "c:/wamp/tmp" ; Maximum allowed size for uploaded files. ; /http://php.net/upload-max-filesize upload_max_filesize = 2M Also, my php.ini file is saved in: C:\wamp\bin\apache\Apache2.2.11\bin

    Read the article

  • C++ Multithreading with pthread is blocking (including sockets)

    - by Sebastian Büttner
    I am trying to implement a multi threaded application with pthread. I did implement a thread class which looks like the following and I call it later twice (or even more), but it seems to block instead of execute the threads parallel. Here is what I got until now: The Thread Class is an abstract class which has the abstract method "exec" which should contain the thread code in a derive class (I did a sample of this, named DerivedThread) Thread.hpp #ifndef THREAD_H_ #define THREAD_H_ #include <pthread.h> class Thread { public: Thread(); void start(); void join(); virtual int exec() = 0; int exit_code(); private: static void* thread_router(void* arg); void exec_thread(); pthread_t pth_; int code_; }; #endif /* THREAD_H_ */ And Thread.cpp #include <iostream> #include "Thread.hpp" /*****************************/ using namespace std; Thread::Thread(): code_(0) { cout << "[Thread] Init" << endl; } void Thread::start() { cout << "[Thread] Created Thread" << endl; pthread_create( &pth_, NULL, Thread::thread_router, reinterpret_cast<void*>(this)); } void Thread::join() { cout << "[Thread] Join Thread" << endl; pthread_join(pth_, NULL); } int Thread::exit_code() { return code_; } void Thread::exec_thread() { cout << "[Thread] Execute" << endl; code_ = exec(); } void* Thread::thread_router(void* arg) { cout << "[Thread] exec_thread function in thread" << endl; reinterpret_cast<Thread*>(arg)->exec_thread(); return NULL; } DerivedThread.hpp #include "Thread.hpp" class DerivedThread : public Thread { public: DerivedThread(); virtual ~DerivedThread(); int exec(); void Close() = 0; DerivedThread.cpp [...] #include "DerivedThread.cpp" [...] int DerivedThread::exec() { //code to be executed do { cout << "Thread executed" << endl; usleep(1000000); } while (true); //dummy, just to let it run for a while } [...] Basically, I am calling this like the here: DerivedThread *thread; cout << "Creating Thread" << endl; thread = new DerivedThread(); cout << "Created thread, starting..." << endl; thread->start(); cout << "Started thread" << endl; cout << "Creating 2nd Thread" << endl; thread = new DerivedThread(); cout << "Created 2nd thread, starting..." << endl; thread->start(); cout << "Started 2nd thread" << endl; What is working great if I am only starting one of these Threads , but if I start multiple which should run together (not synced, only parallel) . But I discovered, that the thread is created, then as it tries to execute it (via start) the problem seems to block until the thread has closed. After that the next Thread is processed. I thought that pthread would do it unblocked for me, so what did I wrong? A sample output might be: Creating Thread [Thread] Thread Init Created thread, starting... [Thread] Created thread [Thread] exec_thread function in thread [Thread] Execute Thread executed Thread executed Thread executed Thread executed Thread executed Thread executed Thread executed .... Until Thread 1 is not terminated, a Thread 2 won't be created not executed. The process above is executed in an other class. Just for the information: I am trying to create a multi threaded server. 
The concept is like this: the MultiThreadedServer class has a main loop, like this one: ::inet::ServerSock *sock; //just a simple self-made wrapper class for sockets DerivedThread *thread; for (;;) { sock = new ::inet::ServerSock(); this->Socket->accept( *sock ); cout << "Creating Thread" << endl; //Threads (according to code sample above) thread = new DerivedThread(sock); //I did not mention the parameter before as it was not necessary; in fact, I pass the handle of the connected socket to the thread cout << "Created thread, starting..." << endl; thread->start(); cout << "Started thread" << endl; } So I thought that this would loop over and over, waiting for new connections to accept, and when a new client arrives I create a new thread and give it the connected socket as a parameter. In DerivedThread::exec I do the handling for the connected client, like: [...] do { [...] if (this->sock_->read( Buffer, sizeof(PacketStruc) ) > 0) { cout << "[Handler_Base] Recv Packet" << endl; //handle the packet } else { Connected = false; } delete Buffer; } while ( Connected ); So I loop in the created thread as long as the client keeps the connection open. I thought the socket might be causing the blocking behaviour. Edit: I figured out that it is not the read() loop in the DerivedThread class, because I replaced it with a simple cout/usleep loop and still only the first thread executed; the second one only ran after the first thread finished. Many thanks and best regards, Sebastian
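    For comparison, the accept-and-dispatch pattern described above (a main loop that accepts a socket and hands it to a freshly started worker thread) can be sketched in plain Java, where the intended non-blocking behaviour of the accept loop is easy to see. This is only an illustration of the pattern under assumed names, not a port of the classes in the question; the class name, port number and echo handling are arbitrary.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ThreadPerClientServer {

            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(9000)) {      // arbitrary port
                    for (;;) {
                        Socket client = server.accept();                  // only the accept loop blocks here
                        new Thread(() -> handle(client)).start();         // hand the socket to a worker thread
                    }
                }
            }

            // Runs on the worker thread; loops for as long as the client keeps the connection open.
            private static void handle(Socket client) {
                try (Socket c = client;
                     BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
                     PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println("echo: " + line);                     // stands in for real packet handling
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }

    The point the sketch illustrates is that the accept loop only ever calls start(), so it never waits for a client handler to finish.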

    Read the article

  • How would you debug this JavaScript problem?

    - by pedalpete
    I've been travelling and developing for the past few weeks. The site I'm developing was running well. Then, the other day, I connected to a network and the page 'looked' fine, but it turned out the JavaScript wasn't running. I checked Firebug, and there were no errors; I suspected that maybe a script didn't load (I'm using the Google API for jQuery and jQuery UI, as well as loading the Google Maps API and fbconnect). I would expect that if the issue was one of these scripts not loading I would get an error, and yet there was nothing. Thinking maybe I hadn't connected properly or something, I reconnected to the network and even restarted my computer, as well as trying to run the local version. I got nothing. The local version not running also hinted to me that it was the loading of an external JavaScript file which caused the problem. I let it pass as something strange with that one network. Unfortunately I'm now hundreds of miles away. Today my brother sent me an e-mail that the network he was on at the airport wouldn't load my page. Same issue. Everything is laid out properly, and part of the layout is set in JavaScript, so clearly JavaScript is running; he too got no errors. Of course, he got on his plane, he is no longer at the airport, and now the site works on his computer (and I haven't changed anything). How on earth would you go about figuring out what happened in this situation? That is two of maybe 12 or so networks, but I have no idea how I would find a network that doesn't work (and living in a small town, it could be difficult to find one). Any ideas? The site is still in dev, so I'd rather not post a link just yet (but could in a few days). What I can see not working are the JavaScript functions which are called on load and on click. So I do think it is a JavaScript issue, but there are no errors. This wouldn't be as huge an issue if I could find and sit on one of these networks, but I can't. So what would you do? EDIT ---------------------------------------------------------- The first functions (they're linked) that don't get called are below. I've cut the code off at the .ajax call, as the call wasn't being made.
function getResultsFromForm(){ jQuery('form#filterList input.button').hide(); var searchAddress=jQuery('form#filterList input#searchTxt').val(); if(searchAddress=='' || searchAddress=='<?php echo $searchLocation; ?>'){ mapShow(20, -40, 0, 'areaMap', 2); jQuery('form#filterList input.button').show(); return; } if (GBrowserIsCompatible()) { var geo = new GClientGeocoder(); geo.setBaseCountryCode(cl.address.country); geo.getLocations(searchAddress, function (result) { if(!result.Placemark && searchAddress!='<?php echo $searchLocation; ?>'){ jQuery('span#addressNotFound').text('<?php echo $addressNotFound; ?>').slideDown('slow'); jQuery('form#filterList input.button').show(); } else { jQuery('span#addressNotFound').slideUp('slow').empty(); jQuery('span#headerLocal').text(searchAddress); var date = new Date(); date.setTime(date.getTime() + (8 * 24 * 60 * 60 * 1000)); jQuery.cookie('address', searchAddress, { expires: date}); var accuracy= result.Placemark[0].AddressDetails.Accuracy; var lat = result.Placemark[0].Point.coordinates[1]; var long = result.Placemark[0].Point.coordinates[0]; lat=parseFloat(lat); long=parseFloat(long); var getTab=jQuery('div#tabs div#active').attr('class'); jQuery('div#tabs').show(); loadForecast(lat, long, getTab, 'true', 0); var zoom=zoomLevel(); mapShow(lat, long, accuracy, 'areaMap', zoom ); } }); } } function zoomLevel(){ var zoomarray= new Array(); zoomarray=jQuery('span.viewDist').attr('id'); zoomarray=zoomarray.split("-"); var zoom=zoomarray[1]; if(zoom==''){ zoom=5; } zoom=parseFloat(zoom); return(zoom); } function loadForecast(lat, long, type, loadForecast, page){ jQuery('div#holdForecast').empty(); var date = new Date(); var d = date.getDate(); var day = (d < 10) ? '0' + d : d; var m = date.getMonth() + 1; var month = (m < 10) ? '0' + m : m; var year='2009'; toDate=year+'-'+month+'-'+day; var genre=jQuery('span.genreblock span#updateGenre').html(); var numDays=''; var numResults=''; var range=jQuery('span.viewDist').attr('id'); var dateRange = jQuery('.updateDate').attr('id'); jQuery('div#holdShows ul.showList').html('<li class="show"><div class="showData"><center><img src="../hwImages/loading.gif"/></center></div></li>'); jQuery('div#holdShows ul.'+type+'List').livequery(function(){ jQuery.ajax({ type: "GET", url: "processes/formatShows.php", data: "output=&genre="+genre+"&numResults="+numResults+"&date="+toDate+"&dateRange="+dateRange+"&range="+range+"&lat="+lat+"&long="+long+'&page='+page, success: function(response){ EDIT 2 ----------------------------------------------------------------------------- Please keep in mind that the problem is not that I can't load the site, the site works fine on most connections, but there are times when the site doesn't work, and no errors are thrown, and nothing changes. My brother couldn't run it earlier today while I had no problems, so it was something to do with his location/network. HOWEVER, the page loads, he had a connection, it was his first time visiting the site, so nothing could have been cashed. Same with when I had the issue a few days before. I didn't change anything, and I got to a different network and everything worked fine.

    Read the article

  • Why is the Look and Feel not getting updated properly?

    - by swift
    I’m developing a swing application in which I have an option to change the Look and feel of the application on click of a button. Now my problem is when I click the button to change the theme it’s not properly updating the L&F of my app, say my previous theme is “noire” and I choose “MCWin” after it, but the style of the noire theme is still there Here is sample working code: package whiteboard; import java.awt.GridBagLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.event.ComponentEvent; import java.awt.event.ComponentListener; import javax.swing.JFrame; import javax.swing.JLayeredPane; import javax.swing.JMenu; import javax.swing.JMenuBar; import javax.swing.JMenuItem; import javax.swing.SwingUtilities; import javax.swing.UIManager; import javax.swing.WindowConstants; public class DiscussionBoard extends JFrame implements ComponentListener,ActionListener { // Variables declaration private JMenuItem audioMenuItem; private JMenuItem boardMenuItem; private JMenuItem exitMenuItem; private JMenuItem clientsMenuItem; private JMenuItem acryl; private JMenuItem hifi; private JMenuItem aero; private JMenuItem aluminium; private JMenuItem bernstein; private JMenuItem fast; private JMenuItem graphite; private JMenuItem luna; private JMenuItem mcwin; private JMenuItem noire; private JMenuItem smart; private JMenuBar boardMenuBar; private JMenuItem messengerMenuItem; private JMenu openMenu; private JMenu saveMenu; private JMenu themesMenu; private JMenuItem saveMessengerMenuItem; private JMenuItem saveWhiteboardMenuItem; private JMenu userMenu; JLayeredPane layerpane; /** Creates new form discussionBoard * @param connection */ public DiscussionBoard() { initComponents(); setLocationRelativeTo(null); addComponentListener(this); } private void initComponents() { boardMenuBar = new JMenuBar(); openMenu = new JMenu(); themesMenu = new JMenu(); messengerMenuItem = new JMenuItem(); boardMenuItem = new JMenuItem(); audioMenuItem = new JMenuItem(); saveMenu = new JMenu(); saveMessengerMenuItem = new JMenuItem(); saveWhiteboardMenuItem = new JMenuItem(); userMenu = new JMenu(); clientsMenuItem = new JMenuItem(); exitMenuItem = new JMenuItem(); setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE); setLayout(new GridBagLayout()); setResizable(false); setTitle("Discussion Board"); openMenu.setText("Open"); saveMenu.setText("Save"); themesMenu.setText("Themes"); acryl = new JMenuItem("Acryl"); hifi = new JMenuItem("HiFi"); aero = new JMenuItem("Aero"); aluminium = new JMenuItem("Aluminium"); bernstein = new JMenuItem("Bernstein"); fast = new JMenuItem("Fast"); graphite = new JMenuItem("Graphite"); luna = new JMenuItem("Luna"); mcwin = new JMenuItem("MCwin"); noire = new JMenuItem("Noire"); smart = new JMenuItem("Smart"); hifi.addActionListener(this); acryl.addActionListener(this); aero.addActionListener(this); aluminium.addActionListener(this); bernstein.addActionListener(this); fast.addActionListener(this); graphite.addActionListener(this); luna.addActionListener(this); mcwin.addActionListener(this); noire.addActionListener(this); smart.addActionListener(this); messengerMenuItem.setText("Messenger"); openMenu.add(messengerMenuItem); openMenu.add(boardMenuItem); audioMenuItem.setText("Audio Messenger"); openMenu.add(audioMenuItem); exitMenuItem.setText("Exit"); exitMenuItem.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent evt) { exitMenuItemActionPerformed(evt); } }); openMenu.add(exitMenuItem); boardMenuBar.add(openMenu); 
saveMessengerMenuItem.setText("Messenger"); saveMenu.add(saveMessengerMenuItem); saveWhiteboardMenuItem.setText("Whiteboard"); saveMenu.add(saveWhiteboardMenuItem); boardMenuBar.add(saveMenu); userMenu.setText("Users"); clientsMenuItem.setText("Current Session"); userMenu.add(clientsMenuItem); themesMenu.add(acryl); themesMenu.add(hifi); themesMenu.add(aero); themesMenu.add(aluminium); themesMenu.add(bernstein); themesMenu.add(fast); themesMenu.add(graphite); themesMenu.add(luna); themesMenu.add(mcwin); themesMenu.add(noire); themesMenu.add(smart); boardMenuBar.add(userMenu); boardMenuBar.add(themesMenu); saveMessengerMenuItem.setEnabled(false); saveWhiteboardMenuItem.setEnabled(false); setJMenuBar(boardMenuBar); setSize(1024, 740); setVisible(true); } protected void exitMenuItemActionPerformed(ActionEvent evt) { System.exit(0); } @Override public void componentHidden(ComponentEvent arg0) { } @Override public void componentMoved(ComponentEvent e) { } @Override public void componentResized(ComponentEvent arg0) { } @Override public void componentShown(ComponentEvent arg0) { } @Override public void actionPerformed(ActionEvent e) { try { if(e.getSource()==hifi) { UIManager.setLookAndFeel("javax.swing.plaf.metal.MetalLookAndFeel"); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.hifi.HiFiLookAndFeel"); enableTheme(); hifi.setEnabled(false); } else if(e.getSource()==acryl) { UIManager.setLookAndFeel("javax.swing.plaf.metal.MetalLookAndFeel"); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.acryl.AcrylLookAndFeel"); enableTheme(); acryl.setEnabled(false); } else if(e.getSource()==aero) { UIManager.setLookAndFeel("javax.swing.plaf.metal.MetalLookAndFeel"); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.aero.AeroLookAndFeel"); enableTheme(); aero.setEnabled(false); } else if(e.getSource()==aluminium) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.aluminium.AluminiumLookAndFeel"); enableTheme(); aluminium.setEnabled(false); } else if(e.getSource()==bernstein) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.bernstein.BernsteinLookAndFeel"); enableTheme(); bernstein.setEnabled(false); } else if(e.getSource()==fast) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.fast.FastLookAndFeel"); enableTheme(); fast.setEnabled(false); } else if(e.getSource()==graphite) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.graphite.GraphiteLookAndFeel"); enableTheme(); graphite.setEnabled(false); } else if(e.getSource()==luna) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.luna.LunaLookAndFeel"); enableTheme(); luna.setEnabled(false); } else if(e.getSource()==mcwin) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.mcwin.McWinLookAndFeel"); enableTheme(); 
mcwin.setEnabled(false); } else if(e.getSource()==noire) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.noire.NoireLookAndFeel"); enableTheme(); noire.setEnabled(false); } else if(e.getSource()==smart) { UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); SwingUtilities.updateComponentTreeUI(getRootPane()); UIManager.setLookAndFeel("com.jtattoo.plaf.smart.SmartLookAndFeel"); enableTheme(); smart.setEnabled(false); } SwingUtilities.updateComponentTreeUI(getRootPane()); } catch (Exception ex) { ex.printStackTrace(); } } private void enableTheme() { acryl.setEnabled(true); hifi.setEnabled(true); aero.setEnabled(true); aluminium.setEnabled(true); bernstein.setEnabled(true); fast.setEnabled(true); graphite.setEnabled(true); luna.setEnabled(true); mcwin.setEnabled(true); noire.setEnabled(true); smart.setEnabled(true); } public static void main(String []ar) { try { UIManager.setLookAndFeel("com.jtattoo.plaf.acryl.AcrylLookAndFeel"); } catch (Exception e) { e.printStackTrace(); } new DiscussionBoard(); } } What’s the problem here? why its not getting updated? There is a demo application here which is exactly doing what i want but i cant get a clear idea of it.
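    For reference, the usual pattern for switching the look and feel at runtime is to call UIManager.setLookAndFeel exactly once with the target class name and then refresh every open window with SwingUtilities.updateComponentTreeUI. The sketch below shows only that pattern, using the stock cross-platform look and feel as the target; it is an illustrative example with assumed names (LafSwitcher, the demo frame), not a fix verified against the JTattoo themes used in the question.

        import java.awt.Window;
        import javax.swing.JFrame;
        import javax.swing.SwingUtilities;
        import javax.swing.UIManager;

        public class LafSwitcher {

            // Sets the new look and feel once, then refreshes every open window.
            public static void switchLaf(String lafClassName) {
                try {
                    UIManager.setLookAndFeel(lafClassName);
                    for (Window w : Window.getWindows()) {
                        SwingUtilities.updateComponentTreeUI(w);
                    }
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(() -> {
                    JFrame frame = new JFrame("LaF demo");
                    frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    frame.setSize(300, 200);
                    frame.setVisible(true);
                    // Example switch: the stock cross-platform (Metal) look and feel.
                    switchLaf(UIManager.getCrossPlatformLookAndFeelClassName());
                });
            }
        }

    The main differences from the code in the question are that the look and feel is set only once per switch and that the whole window is updated rather than just the root pane.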

    Read the article

  • How to connect to a BluetoothBee device using J2ME?

    - by user1500412
    I developed a simple bluetooth connection application in j2me. I try it on emulator, both server and client can found each other, but when I deploy the application to blackberry mobile phone and connect to a bluetoothbee device it says service search no records. What could it be possibly wrong? is it j2me can not find a service in bluetoothbee? The j2me itself succeed to found the bluetoothbee device, but why it can not find the service? My code is below. What I don't understand is the UUID? how to set UUID for unknown source? since I didn't know the UUID for the bluetoothbee device. class SearchingDevice extends Canvas implements Runnable,CommandListener,DiscoveryListener{ //...... public SearchingDevice(MenuUtama midlet, Display display){ this.display = display; this.midlet = midlet; t = new Thread(this); t.start(); timer = new Timer(); task = new TestTimerTask(); /*--------------------Device List------------------------------*/ select = new Command("Pilih",Command.OK,0); back = new Command("Kembali",Command.BACK,0); btDevice = new List("Pilih Device",Choice.IMPLICIT); btDevice.addCommand(select); btDevice.addCommand(back); btDevice.setCommandListener(this); /*------------------Input Form---------------------------------*/ formInput = new Form("Form Input"); nama = new TextField("Nama","",50,TextField.ANY); umur = new TextField("Umur","",50,TextField.ANY); measure = new Command("Ukur",Command.SCREEN,0); gender = new ChoiceGroup("Jenis Kelamin",Choice.EXCLUSIVE); formInput.addCommand(back); formInput.addCommand(measure); gender.append("Pria", null); gender.append("Wanita", null); formInput.append(nama); formInput.append(umur); formInput.append(gender); formInput.setCommandListener(this); /*---------------------------------------------------------------*/ findDevice(); } /*----------------Gambar screen searching device---------------------------------*/ protected void paint(Graphics g) { g.setColor(0,0,0); g.fillRect(0, 0, getWidth(), getHeight()); g.setColor(255,255,255); g.drawString("Mencari Device", 20, 20, Graphics.TOP|Graphics.LEFT); if(this.counter == 1){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); } if(this.counter == 2){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); g.setColor(100,255,255); g.fillRect(60, 80, 20, 40); } if(this.counter == 3){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); g.setColor(100,255,255); g.fillRect(60, 80, 20, 40); g.setColor(255,115,200); g.fillRect(100, 60, 20, 60); } if(this.counter == 4){ g.setColor(255,115,200); g.fillRect(20, 100, 20, 20); g.setColor(100,255,255); g.fillRect(60, 80, 20, 40); g.setColor(255,115,200); g.fillRect(100, 60, 20, 60); g.setColor(100,255,255); g.fillRect(140, 40, 20, 80); //display.callSerially(this); } } /*--------- Running Searching Screen ----------------------------------------------*/ public void run() { while(run){ this.counter++; if(counter > 4){ this.counter = 1; } try { Thread.sleep(1000); } catch (InterruptedException ex) { System.out.println("interrupt"+ex.getMessage()); } repaint(); } } /*-----------------------------cari device bluetooth yang -------------------*/ public void findDevice(){ try { devices = new java.util.Vector(); local = LocalDevice.getLocalDevice(); agent = local.getDiscoveryAgent(); local.setDiscoverable(DiscoveryAgent.GIAC); agent.startInquiry(DiscoveryAgent.GIAC, this); } catch (BluetoothStateException ex) { System.out.println("find device"+ex.getMessage()); } } /*-----------------------------jika device ditemukan--------------------------*/ public void 
deviceDiscovered(RemoteDevice rd, DeviceClass dc) { devices.addElement(rd); } /*--------------Selesai tes koneksi ke bluetooth server--------------------------*/ public void inquiryCompleted(int param) { switch(param){ case DiscoveryListener.INQUIRY_COMPLETED: //inquiry completed normally if(devices.size()>0){ //at least one device has been found services = new java.util.Vector(); this.findServices((RemoteDevice)devices.elementAt(0)); this.run = false; do_alert("Inquiry completed",4000); }else{ do_alert("No device found in range",4000); } break; case DiscoveryListener.INQUIRY_ERROR: do_alert("Inquiry error",4000); break; case DiscoveryListener.INQUIRY_TERMINATED: do_alert("Inquiry canceled",4000); break; } } /*-------------------------------Cari service bluetooth server----------------------------*/ public void findServices(RemoteDevice device){ try { // int[] attributes = {0x100,0x101,0x102}; UUID[] uuids = new UUID[1]; //alamat server uuids[0] = new UUID("F0E0D0C0B0A000908070605040302010",false); //uuids[0] = new UUID("8841",true); //menyiapkan device lokal local = LocalDevice.getLocalDevice(); agent = local.getDiscoveryAgent(); //mencari service dari server agent.searchServices(null, uuids, device, this); //server = (StreamConnectionNotifies)Connector.open(url.toString()); } catch (BluetoothStateException ex) { // ex.printStackTrace(); System.out.println("Errorx"+ex.getMessage()); } } /*---------------------------Pencarian service selesai------------------------*/ public void serviceSearchCompleted(int transID, int respCode) { switch(respCode){ case DiscoveryListener.SERVICE_SEARCH_COMPLETED: if(currentDevice == devices.size() - 1){ if(services.size() > 0){ this.run = false; display.setCurrent(btDevice); do_alert("Service found",4000); }else{ do_alert("The service was not found",4000); } }else{ currentDevice++; this.findServices((RemoteDevice)devices.elementAt(currentDevice)); } break; case DiscoveryListener.SERVICE_SEARCH_DEVICE_NOT_REACHABLE: do_alert("Device not Reachable",4000); break; case DiscoveryListener.SERVICE_SEARCH_ERROR: do_alert("Service search error",4000); break; case DiscoveryListener.SERVICE_SEARCH_NO_RECORDS: do_alert("No records return",4000); break; case DiscoveryListener.SERVICE_SEARCH_TERMINATED: do_alert("Inquiry canceled",4000); break; } } public void servicesDiscovered(int i, ServiceRecord[] srs) { for(int x=0; x<srs.length;x++) services.addElement(srs[x]); try { btDevice.append(((RemoteDevice)devices.elementAt(currentDevice)).getFriendlyName(false),null); } catch (IOException ex) { System.out.println("service discover"+ex.getMessage()); } } public void do_alert(String msg, int time_out){ if(display.getCurrent() instanceof Alert){ ((Alert)display.getCurrent()).setString(msg); ((Alert)display.getCurrent()).setTimeout(time_out); }else{ Alert alert = new Alert("Bluetooth"); alert.setString(msg); alert.setTimeout(time_out); display.setCurrent(alert); } } private String getData(){ System.out.println("getData"); String cmd=""; try { ServiceRecord service = (ServiceRecord)services.elementAt(btDevice.getSelectedIndex()); String url = service.getConnectionURL(ServiceRecord.NOAUTHENTICATE_NOENCRYPT, false); conn = (StreamConnection)Connector.open(url); DataInputStream in = conn.openDataInputStream(); int i=0; timer.schedule(task, 15000); char c1; while(time){ //while(((c1 = in.readChar())>0) && (c1 != '\n')){ //while(((c1 = in.readChar())>0) ){ c1 = in.readChar(); cmd = cmd + c1; //System.out.println(c1); // } } System.out.print("cmd"+cmd); if(time == false){ in.close(); 
conn.close(); } } catch (IOException ex) { System.err.println("Cant read data"+ex); } return cmd; } //timer task fungsinya ketika telah mencapai waktu yg dijadwalkan putus koneksi private static class TestTimerTask extends TimerTask{ public TestTimerTask() { } public void run() { time = false; } } }


  • Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument]

    - by Sean Ochoa
    So, I'm using the XSLT plugin for JQuery, and here's my code: function AddPlotcardEventHandlers(){ // some code } function reportError(exception){ alert(exception.constructor.name + " Exception: " + ((exception.name) ? exception.name : "[unknown name]") + " - " + exception.message); } function GetPlotcards(){ $("#content").xslt("../xml/plotcards.xml","../xslt/plotcards.xsl", AddPlotcardEventHandlers,reportError); } Here's the modified jquery plugin. I say that its modified because I've added callbacks for success and error handling. /* * jquery.xslt.js * * Copyright (c) 2005-2008 Johann Burkard (<mailto:[email protected]>) * <http://eaio.com> * * Permission is hereby granted, free of charge, to any person obtaining a * copy of this software and associated documentation files (the "Software"), * to deal in the Software without restriction, including without limitation * the rights to use, copy, modify, merge, publish, distribute, sublicense, * and/or sell copies of the Software, and to permit persons to whom the * Software is furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included * in all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN * NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE * USE OR OTHER DEALINGS IN THE SOFTWARE. * */ /** * jQuery client-side XSLT plugins. * * @author <a href="mailto:[email protected]">Johann Burkard</a> * @version $Id: jquery.xslt.js,v 1.10 2008/08/29 21:34:24 Johann Exp $ */ (function($) { $.fn.xslt = function() { return this; } var str = /^\s*</; if (document.recalc) { // IE 5+ $.fn.xslt = function(xml, xslt, onSuccess, onError) { try{ var target = $(this); var change = function() { try{ var c = 'complete'; if (xm.readyState == c && xs.readyState == c) { window.setTimeout(function() { target.html(xm.transformNode(xs.XMLDocument)); if (onSuccess) onSuccess(); }, 50); } }catch(exception){ if (onError) onError(exception); } }; var xm = document.createElement('xml'); xm.onreadystatechange = change; xm[str.test(xml) ? "innerHTML" : "src"] = xml; var xs = document.createElement('xml'); xs.onreadystatechange = change; xs[str.test(xslt) ? 
"innerHTML" : "src"] = xslt; $('body').append(xm).append(xs); return this; }catch(exception){ if (onError) onError(exception); } }; } else if (window.DOMParser != undefined && window.XMLHttpRequest != undefined && window.XSLTProcessor != undefined) { // Mozilla 0.9.4+, Opera 9+ var processor = new XSLTProcessor(); var support = false; if ($.isFunction(processor.transformDocument)) { support = window.XMLSerializer != undefined; } else { support = true; } if (support) { $.fn.xslt = function(xml, xslt, onSuccess, onError) { try{ var target = $(this); var transformed = false; var xm = { readyState: 4 }; var xs = { readyState: 4 }; var change = function() { try{ if (xm.readyState == 4 && xs.readyState == 4 && !transformed) { var processor = new XSLTProcessor(); if ($.isFunction(processor.transformDocument)) { // obsolete Mozilla interface resultDoc = document.implementation.createDocument("", "", null); processor.transformDocument(xm.responseXML, xs.responseXML, resultDoc, null); target.html(new XMLSerializer().serializeToString(resultDoc)); } else { processor.importStylesheet(xs.responseXML); resultDoc = processor.transformToFragment(xm.responseXML, document); target.empty().append(resultDoc); } transformed = true; if (onSuccess) onSuccess(); } }catch(exception){ if (onError) onError(exception); } }; if (str.test(xml)) { xm.responseXML = new DOMParser().parseFromString(xml, "text/xml"); } else { xm = $.ajax({ dataType: "xml", url: xml}); xm.onreadystatechange = change; } if (str.test(xslt)) { xs.responseXML = new DOMParser().parseFromString(xslt, "text/xml"); change(); } else { xs = $.ajax({ dataType: "xml", url: xslt}); xs.onreadystatechange = change; } }catch(exception){ if (onError) onError(exception); }finally{ return this; } }; } } })(jQuery); And, here's my error msg: Object Exception: [unknown name] - Component returned failure code: 0x80600011 [nsIXSLTProcessorObsolete.transformDocument] Here's the info on the browser that I'm using for testing (with firebug v1.5.4 add-on installed): Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Here's my XML: <?xml version="1.0" encoding="ISO-8859-1"?> <plotcardCollection sortby="order"> <plotcard order="2" id="1378"> <name><![CDATA[[placeholder for name of plotcard 1378]]]></name> <content><![CDATA[[placeholder for content of plotcard 1378]]]></content> <tagCollection> <tag id="3"><![CDATA[[placeholder for tag with id=3]]]></tag> <tag id="7"><![CDATA[[placeholder for tag with id=7]]]></tag> </tagCollection> </plotcard> <plotcard order="1" id="2156"> <name><![CDATA[[placeholder for name of plotcard 2156]]]></name> <content><![CDATA[[placeholder for content of plotcard 2156]]]></content> <tagCollection> <tag id="2"><![CDATA[[placeholder for tag with id=2]]]></tag> <tag id="9"><![CDATA[[placeholder for tag with id=9]]]></tag> </tagCollection> </plotcard> </plotcardCollection> Here's my XSLT: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="/plotcardCollection"> <xsl:variable name="sortby" select="@sortby" /> <xsl:for-each select="plotcard"> <xsl:sort select="$sortby" data-type="number" order="ascending"/> <div> <!-- Start Plotcard --> <xsl:attribute name="class">Plotcard</xsl:attribute> <xsl:for-each select="@"> <xsl:value-of select="name()"/> <xsl:text>='</xsl:text> <xsl:if test="name() = 'id'"> <xsl:text>Plotcard-</xsl:text> </xsl:if> <xsl:value-of select="." 
/> <xsl:text>'</xsl:text> </xsl:for-each> <!-- Start Plotcard Name Section --> <div> <xsl:attribute name="class"> <xsl:text disable-output-escaping="yes">PlotcardName</xsl:text> </xsl:attribute> <xsl:value-of select="name/text()"/> </div> <!-- Start Plotcard Content Section --> <div> <xsl:attribute name="class"> <xsl:text disable-output-escaping="yes">PlotcardContent</xsl:text> </xsl:attribute> <xsl:value-of select="content/text()"/> </div> </div> </xsl:for-each> </xsl:template> </xsl:stylesheet> I'm really not sure what to do about this.... any thoughts?


  • Setting up a PC Bluetooth server for Android

    - by Del
    Alright, I've been reading a lot of topics the past two or three days and nothing seems to have asked this. I am writing a PC side server for my andriod device, this is for exchanging some information and general debugging. Eventually I will be connecting to a SPP device to control a microcontroller. I have managed, using the following (Android to pc) to connect to rfcomm channel 11 and exchange data between my android device and my pc. Method m = device.getClass().getMethod("createRfcommSocket", new Class[] { int.class }); tmp = (BluetoothSocket) m.invoke(device, Integer.valueOf(11)); I have attempted the createRfcommSocketToServiceRecord(UUID) method, with absolutely no luck. For the PC side, I have been using the C Bluez stack for linux. I have the following code which registers the service and opens a server socket: int main(int argc, char **argv) { struct sockaddr_rc loc_addr = { 0 }, rem_addr = { 0 }; char buf[1024] = { 0 }; char str[1024] = { 0 }; int s, client, bytes_read; sdp_session_t *session; socklen_t opt = sizeof(rem_addr); session = register_service(); s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM); loc_addr.rc_family = AF_BLUETOOTH; loc_addr.rc_bdaddr = *BDADDR_ANY; loc_addr.rc_channel = (uint8_t) 11; bind(s, (struct sockaddr *)&loc_addr, sizeof(loc_addr)); listen(s, 1); client = accept(s, (struct sockaddr *)&rem_addr, &opt); ba2str( &rem_addr.rc_bdaddr, buf ); fprintf(stderr, "accepted connection from %s\n", buf); memset(buf, 0, sizeof(buf)); bytes_read = read(client, buf, sizeof(buf)); if( bytes_read 0 ) { printf("received [%s]\n", buf); } sprintf(str,"to Android."); printf("sent [%s]\n",str); write(client, str, sizeof(str)); close(client); close(s); sdp_close( session ); return 0; } sdp_session_t *register_service() { uint32_t svc_uuid_int[] = { 0x00000000,0x00000000,0x00000000,0x00000000 }; uint8_t rfcomm_channel = 11; const char *service_name = "Remote Host"; const char *service_dsc = "What the remote should be connecting to."; const char *service_prov = "Your mother"; uuid_t root_uuid, l2cap_uuid, rfcomm_uuid, svc_uuid; sdp_list_t *l2cap_list = 0, *rfcomm_list = 0, *root_list = 0, *proto_list = 0, *access_proto_list = 0; sdp_data_t *channel = 0, *psm = 0; sdp_record_t *record = sdp_record_alloc(); // set the general service ID sdp_uuid128_create( &svc_uuid, &svc_uuid_int ); sdp_set_service_id( record, svc_uuid ); // make the service record publicly browsable sdp_uuid16_create(&root_uuid, PUBLIC_BROWSE_GROUP); root_list = sdp_list_append(0, &root_uuid); sdp_set_browse_groups( record, root_list ); // set l2cap information sdp_uuid16_create(&l2cap_uuid, L2CAP_UUID); l2cap_list = sdp_list_append( 0, &l2cap_uuid ); proto_list = sdp_list_append( 0, l2cap_list ); // set rfcomm information sdp_uuid16_create(&rfcomm_uuid, RFCOMM_UUID); channel = sdp_data_alloc(SDP_UINT8, &rfcomm_channel); rfcomm_list = sdp_list_append( 0, &rfcomm_uuid ); sdp_list_append( rfcomm_list, channel ); sdp_list_append( proto_list, rfcomm_list ); // attach protocol information to service record access_proto_list = sdp_list_append( 0, proto_list ); sdp_set_access_protos( record, access_proto_list ); // set the name, provider, and description sdp_set_info_attr(record, service_name, service_prov, service_dsc); int err = 0; sdp_session_t *session = 0; // connect to the local SDP server, register the service record, and // disconnect session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY ); err = sdp_record_register(session, record, 0); // cleanup //sdp_data_free( channel ); 
sdp_list_free( l2cap_list, 0 ); sdp_list_free( rfcomm_list, 0 ); sdp_list_free( root_list, 0 ); sdp_list_free( access_proto_list, 0 ); return session; } And another piece of code, in addition to 'sdptool browse local' which can verifty that the service record is running on the pc: int main(int argc, char **argv) { uuid_t svc_uuid; uint32_t svc_uuid_int[] = { 0x00000000,0x00000000,0x00000000,0x00000000 }; int err; bdaddr_t target; sdp_list_t *response_list = NULL, *search_list, *attrid_list; sdp_session_t *session = 0; str2ba( "01:23:45:67:89:AB", &target ); // connect to the SDP server running on the remote machine session = sdp_connect( BDADDR_ANY, BDADDR_LOCAL, SDP_RETRY_IF_BUSY ); // specify the UUID of the application we're searching for sdp_uuid128_create( &svc_uuid, &svc_uuid_int ); search_list = sdp_list_append( NULL, &svc_uuid ); // specify that we want a list of all the matching applications' attributes uint32_t range = 0x0000ffff; attrid_list = sdp_list_append( NULL, &range ); // get a list of service records that have UUID 0xabcd err = sdp_service_search_attr_req( session, search_list, \ SDP_ATTR_REQ_RANGE, attrid_list, &response_list); sdp_list_t *r = response_list; // go through each of the service records for (; r; r = r-next ) { sdp_record_t *rec = (sdp_record_t*) r-data; sdp_list_t *proto_list; // get a list of the protocol sequences if( sdp_get_access_protos( rec, &proto_list ) == 0 ) { sdp_list_t *p = proto_list; // go through each protocol sequence for( ; p ; p = p-next ) { sdp_list_t *pds = (sdp_list_t*)p-data; // go through each protocol list of the protocol sequence for( ; pds ; pds = pds-next ) { // check the protocol attributes sdp_data_t *d = (sdp_data_t*)pds-data; int proto = 0; for( ; d; d = d-next ) { switch( d-dtd ) { case SDP_UUID16: case SDP_UUID32: case SDP_UUID128: proto = sdp_uuid_to_proto( &d-val.uuid ); break; case SDP_UINT8: if( proto == RFCOMM_UUID ) { printf("rfcomm channel: %d\n",d-val.int8); } break; } } } sdp_list_free( (sdp_list_t*)p-data, 0 ); } sdp_list_free( proto_list, 0 ); } printf("found service record 0x%x\n", rec-handle); sdp_record_free( rec ); } sdp_close(session); } Output: $ ./search rfcomm channel: 11 found service record 0x10008 sdptool: Service Name: Remote Host Service Description: What the remote should be connecting to. 
Service Provider: Your mother Service RecHandle: 0x10008 Protocol Descriptor List: "L2CAP" (0x0100) "RFCOMM" (0x0003) Channel: 11 And for logcat I'm getting this: 07-22 15:57:06.087: ERROR/BTLD(215): ****************search UUID = 0000*********** 07-22 15:57:06.087: INFO//system/bin/btld(209): btapp_dm_GetRemoteServiceChannel() 07-22 15:57:06.087: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:06.097: INFO/ActivityManager(88): Displayed activity com.example.socktest/.socktest: 79 ms (total 79 ms) 07-22 15:57:06.697: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:07.517: WARN/BTLD(215): ccb timer ticks: 2147483648 07-22 15:57:07.517: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:07.547: WARN/BTLD(215): info:x10 07-22 15:57:07.547: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 10 pbytes (hdl 14) 07-22 15:57:07.547: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up() 07-22 15:57:07.547: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_up: dummy_handle = 342 07-22 15:57:07.547: DEBUG/ADAPTER(253): adapter_get_device(00:02:72:AB:7C:EE) 07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one 07-22 15:57:07.547: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE 07-22 15:57:07.777: WARN/BTLD(215): process_service_search_attr_rsp 07-22 15:57:07.787: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 13 pbytes (hdl 14) 07-22 15:57:07.787: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_rmt_service_channel: success=0, service=00000000 07-22 15:57:07.787: ERROR/DTUN_HCID_BZ4(253): discovery unsuccessful! 
07-22 15:57:08.497: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:09.507: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Wake, 0x8003 #### 07-22 15:57:09.597: INFO/BTL-IFS(215): send_ctrl_msg: [BTL_IFS CTRL] send BTLIF_DTUN_SIGNAL_EVT (CTRL) 11 pbytes (hdl 14) 07-22 15:57:09.597: DEBUG/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down() 07-22 15:57:09.597: INFO/DTUN_HCID_BZ4(253): dtun_dm_sig_link_down device = 0xf7a0 handle = 342 reason = 22 07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): pollData[0] is revented, check next one 07-22 15:57:09.597: ERROR/BluetoothEventLoop.cpp(88): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/253/hci0/dev_00_02_72_AB_7C_EE 07-22 15:57:09.597: DEBUG/BluetoothA2dpService(88): Received intent Intent { act=android.bluetooth.device.action.ACL_DISCONNECTED (has extras) } 07-22 15:57:10.107: INFO//system/bin/btld(209): ##### USerial_Ioctl: BT_Sleep, 0x8004 #### 07-22 15:57:12.107: DEBUG/BluetoothService(88): Cleaning up failed UUID channel lookup: 00:02:72:AB:7C:EE 00000000-0000-0000-0000-000000000000 07-22 15:57:12.107: ERROR/Socket Test(5234): connect() failed 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33] 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: fd (-1:31), bta -1, rc 0, wflags 0x0 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: fd 31 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): __close_prot_rfcomm: bind not completed on this socket 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: fd (-1:31), bta -1, rc 0, wflags 0x0 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btlif_signal_event: event BTLIF_BTS_EVT_ABORT matched 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wrp_close_s_only [31] (31:-1) [] 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: data socket closed 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wsactive_del: delete wsock 31 from active list [ad3e1494] 07-22 15:57:12.107: DEBUG/BTL_IFC_WRP(5234): wrp_close_s_only: wsock fully closed, return to pool 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): btsk_free: success 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_destroy 07-22 15:57:12.107: DEBUG/ASOCKWRP(5234): asocket_abort [31,32,33] 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: s 31, how 2 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_shutdown: btsk not found, normal close (31) 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_write: wrote 1 bytes out of 1 on fd 33 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 33 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (33) 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 32 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (32) 07-22 15:57:12.107: INFO/BLZ20_WRAPPER(5234): blz20_wrp_close: s 31 07-22 15:57:12.107: DEBUG/BLZ20_WRAPPER(5234): blz20_wrp_close: btsk not found, normal close (31) 07-22 15:57:12.157: DEBUG/Sensors(88): close_akm, fd=151 07-22 15:57:12.167: ERROR/CachedBluetoothDevice(477): onUuidChanged: Time since last connect14970690 07-22 15:57:12.237: DEBUG/Socket Test(5234): -On Stop- Sorry for bombarding you guys with what seems like a difficult question and a lot to read, but 
I've been working on this problem for a while and have tried a lot of different things to get it working. Let me reiterate: I can get it to work, but not using the service discovery protocol. I've tried several different UUIDs and two different computers, although I only have my HTC Incredible to test with. I've also heard rumors that the BT stack wasn't working on the HTC Droid, but that isn't the case, at least for PC interaction.


  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
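    For illustration, here is a minimal C# sketch of the kind of auto-generated, read-only in-source table the post describes, searched by binary sort. The key and value names are placeholders invented for the example, not anything from the post.

        // Minimal sketch of a generated, read-only lookup table kept in source code.
        // A code generator would emit the arrays, already sorted by key.
        using System;

        public static class StaticLookup
        {
            private static readonly string[] Keys = { "A100", "B205", "C310" };
            private static readonly decimal[] Values = { 1.25m, 2.50m, 3.75m };

            public static bool TryGetValue(string key, out decimal value)
            {
                // Binary search instead of a database round trip: O(log n), no I/O, no ACID needed.
                int index = Array.BinarySearch(Keys, key, StringComparer.Ordinal);
                if (index >= 0)
                {
                    value = Values[index];
                    return true;
                }
                value = 0m;
                return false;
            }
        }

    Regenerating this file whenever the data changes keeps the table versioned and shared exactly like the rest of the source code, which is the property the post is arguing for.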


  • How to display the user name in the LoginName control

    - by user569285
    I have a master page that holds the loginview content that appears on all subsequent pages based on the master page. i have a username control also nested in the loginview to display the name of the user when they are logged in. the code for the loginview from the master page is displayed as follows: <div class="loginView"> <asp:LoginView ID="MasterLoginView" runat="server"> <LoggedInTemplate> Welcome <span class="bold"><asp:LoginName ID="HeadLoginName" runat="server" /> <asp:Label ID="userNameLabel" runat="server" Text="Label"></asp:Label></span>! [ <asp:LoginStatus ID="HeadLoginStatus" runat="server" LogoutAction="Redirect" LogoutText="Log Out" LogoutPageUrl="~/Logout.aspx"/> ] <%--Welcome: <span class="bold"><asp:LoginName ID="MasterLoginName" runat="server" /> </span>!--%> </LoggedInTemplate> <AnonymousTemplate> Welcome: Guest [ <a href="~/Account/Login.aspx" ID="HeadLoginStatus" runat="server">Log In</a> ] </AnonymousTemplate> </asp:LoginView> <%--&nbsp;&nbsp; [&nbsp;<asp:LoginStatus ID="MasterLoginStatus" runat="server" LogoutAction="Redirect" LogoutPageUrl="~/Logout.aspx" />&nbsp;]&nbsp;&nbsp;--%> </div> Since VS2010 launches with a default login page in the accounts folder, i didnt think it necessary to create a separate log in page, so i just used the same log in page. please find the code for the login control below: <asp:Login ID="LoginUser" runat="server" EnableViewState="false" RenderOuterTable="false"> <LayoutTemplate> <span class="failureNotification"> <asp:Literal ID="FailureText" runat="server"></asp:Literal> </span> <asp:ValidationSummary ID="LoginUserValidationSummary" runat="server" CssClass="failureNotification" ValidationGroup="LoginUserValidationGroup"/> <div class="accountInfo"> <fieldset class="login"> <legend style="text-align:left; font-size:1.2em; color:White;">Account Information</legend> <p style="text-align:left; font-size:1.2em; color:White;"> <asp:Label ID="UserNameLabel" runat="server" AssociatedControlID="UserName">User ID:</asp:Label> <asp:TextBox ID="UserName" runat="server" CssClass="textEntry"></asp:TextBox> <asp:RequiredFieldValidator ID="UserNameRequired" runat="server" ControlToValidate="UserName" CssClass="failureNotification" ErrorMessage="User ID is required." ToolTip="User ID field is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p style="text-align:left; font-size:1.2em; color:White;"> <asp:Label ID="PasswordLabel" runat="server" AssociatedControlID="Password">Password:</asp:Label> <asp:TextBox ID="Password" runat="server" CssClass="passwordEntry" TextMode="Password"></asp:TextBox> <asp:RequiredFieldValidator ID="PasswordRequired" runat="server" ControlToValidate="Password" CssClass="failureNotification" ErrorMessage="Password is required." ToolTip="Password is required." ValidationGroup="LoginUserValidationGroup">*</asp:RequiredFieldValidator> </p> <p style="text-align:left; font-size:1.2em; color:White;"> <asp:CheckBox ID="RememberMe" runat="server"/> <asp:Label ID="RememberMeLabel" runat="server" AssociatedControlID="RememberMe" CssClass="inline">Keep me logged in</asp:Label> </p> </fieldset> <p class="submitButton"> <asp:Button ID="LoginButton" runat="server" CommandName="Login" Text="Log In" ValidationGroup="LoginUserValidationGroup" onclick="LoginButton_Click"/> </p> </div> </LayoutTemplate> </asp:Login> I then wrote my own code for authentication since i had my own database. 
the following displays the code in the login buttons click event.: public partial class Login : System.Web.UI.Page { //create string objects string userIDStr, pwrdStr; protected void LoginButton_Click(object sender, EventArgs e) { //assign textbox items to string objects userIDStr = LoginUser.UserName.ToString(); pwrdStr = LoginUser.Password.ToString(); //SQL connection string string strConn; strConn = WebConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString"].ConnectionString; SqlConnection Conn = new SqlConnection(strConn); //SqlDataSource CSMDataSource = new SqlDataSource(); // CSMDataSource.ConnectionString = ConfigurationManager.ConnectionStrings["CMSSQL3ConnectionString"].ToString(); //SQL select statement for comparison string sqlUserData; sqlUserData = "SELECT StaffID, StaffPassword, StaffFName, StaffLName, StaffType FROM Staffs"; sqlUserData += " WHERE (StaffID ='" + userIDStr + "')"; sqlUserData += " AND (StaffPassword ='" + pwrdStr + "')"; SqlCommand com = new SqlCommand(sqlUserData, Conn); SqlDataReader rdr; string usrdesc; string lname; string fname; string staffname; try { //string CurrentData; //CurrentData = (string)com.ExecuteScalar(); Conn.Open(); rdr = com.ExecuteReader(); rdr.Read(); usrdesc = (string)rdr["StaffType"]; fname = (string)rdr["StaffFName"]; lname = (string)rdr["StaffLName"]; staffname = lname.ToString() + " " + fname.ToString(); LoginUser.UserName = staffname.ToString(); rdr.Close(); if (usrdesc.ToLower() == "administrator") { Response.Redirect("~/CaseAdmin.aspx", false); } else if (usrdesc.ToLower() == "manager") { Response.Redirect("~/CaseManager.aspx", false); } else if (usrdesc.ToLower() == "investigator") { Response.Redirect("~/Investigator.aspx", false); } else { Response.Redirect("~/Default.aspx", false); } } catch(Exception ex) { string script = "<script>alert('" + ex.Message + "');</script>"; } finally { Conn.Close(); } } My authentication works perfectly and the page gets redirected to the designated destination. However, the login view does not display the users name. i actually cant figure out how to pass the users name that i had picked from the database to the login name control to be displayed. taking a close look i also noticed the logout text that should be displayed after successful log in does not show. that leaves me wondering if the loggedin template control on the masterpage even fires at all or its still the anonymous template control that keeps displaying.? How do i get this to work as expected? Please help....
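    One thing the posted click handler never does is tell ASP.NET that the user is authenticated, and that is what LoginView and LoginName key off. Below is a hedged sketch, assuming forms authentication is enabled in web.config; OnCredentialsVerified is a hypothetical helper called after the Staffs lookup succeeds, not code from the question.

        // Sketch only: assumes <authentication mode="Forms"> in web.config.
        // LoginName renders User.Identity.Name, so the name passed to SetAuthCookie
        // is what the LoggedInTemplate will display.
        using System;
        using System.Web.Security;

        public partial class Login : System.Web.UI.Page
        {
            // Hypothetical helper, invoked once the database check has succeeded.
            private void OnCredentialsVerified(string staffName, string staffType)
            {
                FormsAuthentication.SetAuthCookie(staffName, false); // establishes the logged-in state

                // Role-based redirects, mirroring the original handler.
                if (string.Equals(staffType, "administrator", StringComparison.OrdinalIgnoreCase))
                    Response.Redirect("~/CaseAdmin.aspx", false);
                else if (string.Equals(staffType, "manager", StringComparison.OrdinalIgnoreCase))
                    Response.Redirect("~/CaseManager.aspx", false);
                else
                    Response.Redirect("~/Default.aspx", false);
            }
        }

    Without some call that issues the authentication ticket (SetAuthCookie or RedirectFromLoginPage), the AnonymousTemplate keeps rendering even after the redirect, which matches the behaviour described.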


  • Azure Diagnostics wrt Custom Logs and honoring scheduledTransferPeriod

    - by kjsteuer
    I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx . One thing I noticed is that that logs show up immediately in My Azure Table Storage. I wonder if this is expected with Custom Trace Listeners or because I am in a development environment. My diagnosics.wadcfg <?xml version="1.0" encoding="utf-8"?> <DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M""overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"> <DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" /> <Directories scheduledTransferPeriod="PT1M"> <IISLogs container="wad-iis-logfiles" /> <CrashDumps container="wad-crash-dumps" /> </Directories> <Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" /> </DiagnosticMonitorConfiguration> I have changed my approach a bit. Now I am defining in the web config of my webrole. I notice when I set autoflush to true in the webconfig, every thing works but scheduledTransferPeriod is not honored because the flush method pushes to the table storage. I would like to have scheduleTransferPeriod trigger the flush or trigger flush after a certain number of log entries like the buffer is full. Then I can also flush on server shutdown. Is there any method or event on the CustomTraceListener where I can listen to the scheduleTransferPeriod? <system.diagnostics> <!--http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx By default autoflush is false. By default useGlobalLock is true. While we try to be threadsafe, we keep this default for now. Later if we would like to increase performance we can remove this. see http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx --> <trace> <listeners> <add name="TableTraceListener" type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation" /> <remove name="Default" /> </listeners> </trace> </system.diagnostics> I have modified the custom trace listener to the following: namespace Pos.Services.Implementation { class TableTraceListener : TraceListener { #region Fields //connection string for azure storage readonly string _connectionString; //Custom sql storage table for logs. 
//TODO put in config readonly string _diagnosticsTable; [ThreadStatic] static StringBuilder _messageBuffer; readonly object _initializationSection = new object(); bool _isInitialized; CloudTableClient _tableStorage; readonly object _traceLogAccess = new object(); readonly List<LogEntry> _traceLog = new List<LogEntry>(); #endregion #region Constructors public TableTraceListener() : base("TableTraceListener") { _connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection"); _diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName"); } #endregion #region Methods /// <summary> /// Flushes the entries to the storage table /// </summary> public override void Flush() { if (!_isInitialized) { lock (_initializationSection) { if (!_isInitialized) { Initialize(); } } } var context = _tableStorage.GetTableServiceContext(); context.MergeOption = MergeOption.AppendOnly; lock (_traceLogAccess) { _traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry)); _traceLog.Clear(); } if (context.Entities.Count > 0) { context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null); } } /// <summary> /// Creates the storage table object. This class does not need to be locked because the caller is locked. /// </summary> private void Initialize() { var account = CloudStorageAccount.Parse(_connectionString); _tableStorage = account.CreateCloudTableClient(); _tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists(); _isInitialized = true; } public override bool IsThreadSafe { get { return true; } } #region Trace and Write Methods /// <summary> /// Writes the message to a string buffer /// </summary> /// <param name="message">the Message</param> public override void Write(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.Append(message); } /// <summary> /// Writes the message with a line breaker to a string buffer /// </summary> /// <param name="message"></param> public override void WriteLine(string message) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); _messageBuffer.AppendLine(message); } /// <summary> /// Appends the trace information and message /// </summary> /// <param name="eventCache">the Event Cache</param> /// <param name="source">the Source</param> /// <param name="eventType">the Event Type</param> /// <param name="id">the Id</param> /// <param name="message">the Message</param> public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message) { base.TraceEvent(eventCache, source, eventType, id, message); AppendEntry(id, eventType, eventCache); } /// <summary> /// Adds the trace information to a collection of LogEntry objects /// </summary> /// <param name="id">the Id</param> /// <param name="eventType">the Event Type</param> /// <param name="eventCache">the EventCache</param> private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache) { if (_messageBuffer == null) _messageBuffer = new StringBuilder(); var message = _messageBuffer.ToString(); _messageBuffer.Length = 0; if (message.EndsWith(Environment.NewLine)) message = message.Substring(0, message.Length - Environment.NewLine.Length); if (message.Length == 0) return; var entry = new LogEntry() { PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30), RowKey = string.Format("{0:D19}", eventCache.Timestamp), EventTickCount = eventCache.Timestamp, Level = (int)eventType, 
EventId = id, Pid = eventCache.ProcessId, Tid = eventCache.ThreadId, Message = message }; lock (_traceLogAccess) _traceLog.Add(entry); } #endregion #endregion } }
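    As far as I can tell, scheduledTransferPeriod only controls the transfer of the built-in diagnostics buffers, so a custom listener that writes straight to table storage never sees it; that would explain why the entries show up immediately. No event is exposed for the transfer period, so one workaround is to let the listener schedule its own flushes. A minimal sketch, with the interval treated as an assumption rather than something read from the wadcfg:

        // Sketch only: a timer that periodically invokes the listener's Flush().
        // The interval could instead come from a cscfg setting.
        using System;
        using System.Threading;

        public sealed class PeriodicFlusher : IDisposable
        {
            private readonly Timer _timer;

            public PeriodicFlusher(Action flush, TimeSpan interval)
            {
                // Timer callbacks run on a thread-pool thread, so the flush delegate
                // must stay thread-safe (the listener above already locks its buffer).
                _timer = new Timer(_ => flush(), null, interval, interval);
            }

            public void Dispose()
            {
                _timer.Dispose();
            }
        }

    The listener's constructor could then create new PeriodicFlusher(Flush, TimeSpan.FromMinutes(30)) and dispose it in Close(), flushing one last time on shutdown.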


  • How do I prove I should put a table of values in source code instead of a database table?

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr> I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database, e.g, none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and fits in memory, and can be keyed into via binary search (a tad faster than db, but speed is not an issue). The problem is that this code runs on our enterprise server, but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform’s version of their databases (containing their own copy of the static data). What happens is that every implementation, every little change, something different goes wrong. There are many other issues as well. I can’t even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course, its own literal storage, e.g., in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts. So why don’t we just change it? I don’t have administrative ability to force a change. But I’m affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers and d*mmit that makes me mad! The reason neither management, nor the designers of the system, can see the problem is that they propose a solution that won’t work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with Human Beings in the loop, and it can’t be automated. In addition, the management and developers who could change this, though intelligent and capable, don’t understand the rigidity of this ‘how humans are’ issue, and are not convincible on the matter. 
The reason putting the static data in sourcecode will solve the problem is, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that’s the background, for the curious. I won’t be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd party component that you can’t change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and posts on this site giving reasons for one over the other; I’m not looking for lists of reasons like these (although you can add a comment if I've miss a doozy): WHY USE A DATABASE? instead of flatfile/other DB vs. file: if you need... Random Read / Transparent search optimization Advanced / varied / customizable Searching and sorting capabilities Transaction/rollback Locks, semaphores Concurrency control / Shared users Security 1-many/m-m is easier Easy modification Scalability Load Balancing Random updates / inserts / deletes Advanced query Administrative control of design, etc. SQL / learning curve Debugging / Logging Centralized / Live Backup capabilities Cached queries / dvlp & cache execution plans Interleaved update/read Referential integrity, avoid redundant/missing/corrupt/out-of-sync data Reporting (from on olap or oltp db) / turnkey generation tools [Disadvantages:] Important to get right the first time - professional design - but only b/c it's meant to last s/w & h/w cost Usu. over a network, speed issue (best vs. best design vs. local=even then a separate process req's marshalling/netwk layers/inter-p comm) indicies and query processing can stand in the way of simple processing (vs. flatfile) WHY USE FLATFILE: If you only need... Sequential Row processing only Limited usage append only (no reading, no master key/update) Only Update the record you're reading (fixed length recs only) Too big to fit into memory If Local disk / read-ahead network connection Portability / small system Email / cut & Paste / store as document by novice - simple format Low design learning curve but high cost later WHY USE IN-MEMORY/TABLE (tables, arrays, etc.): if you need... Processing a single db/ff record that was imported Known size of data Static data if hardcoding the table Narrow, unchanging use (e.g., one program or proc) -includes a class that will be shared, but encapsulates its data manipulation Extreme speed needed / high transaction frequency Random access - but search is dependent on implementation Following are some other posts about the topic: http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good http://stackoverflow.com/questions/2356851/database-vs-flat-files http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530 What I’d like to know is if anybody could recommend a hard, authoritative source containing these reasons. I’m looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. 
This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!


  • .Net 3.5 Asynchronous Socket Server Performance Problem

    - by iBrAaAa
    I'm developing an Asynchronous Game Server using .Net Socket Asynchronous Model( BeginAccept/EndAccept...etc.) The problem I'm facing is described like that: When I have only one client connected, the server response time is very fast but once a second client connects, the server response time increases too much. I've measured the time from a client sends a message to the server until it gets the reply in both cases. I found that the average time in case of one client is about 17ms and in case of 2 clients about 280ms!!! What I really see is that: When 2 clients are connected and only one of them is moving(i.e. requesting service from the server) it is equivalently equal to the case when only one client is connected(i.e. fast response). However, when the 2 clients move at the same time(i.e. requests service from the server at the same time) their motion becomes very slow (as if the server replies each one of them in order i.e. not simultaneously). Basically, what I am doing is that: When a client requests a permission for motion from the server and the server grants him the request, the server then broadcasts the new position of the client to all the players. So if two clients are moving in the same time, the server is eventually trying to broadcast to both clients the new position of each of them at the same time. EX: Client1 asks to go to position (2,2) Client2 asks to go to position (5,5) Server sends to each of Client1 & Client2 the same two messages: message1: "Client1 at (2,2)" message2: "Client2 at (5,5)" I believe that the problem comes from the fact that Socket class is thread safe according MSDN documentation http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx. (NOT SURE THAT IT IS THE PROBLEM) Below is the code for the server: /// /// This class is responsible for handling packet receiving and sending /// public class NetworkManager { /// /// An integer to hold the server port number to be used for the connections. Its default value is 5000. /// private readonly int port = 5000; /// /// hashtable contain all the clients connected to the server. /// key: player Id /// value: socket /// private readonly Hashtable connectedClients = new Hashtable(); /// /// An event to hold the thread to wait for a new client /// private readonly ManualResetEvent resetEvent = new ManualResetEvent(false); /// /// keeps track of the number of the connected clients /// private int clientCount; /// /// The socket of the server at which the clients connect /// private readonly Socket mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); /// /// The socket exception that informs that a client is disconnected /// private const int ClientDisconnectedErrorCode = 10054; /// /// The only instance of this class. /// private static readonly NetworkManager networkManagerInstance = new NetworkManager(); /// /// A delegate for the new client connected event. /// /// the sender object /// the event args public delegate void NewClientConnected(Object sender, SystemEventArgs e); /// /// A delegate for the position update message reception. 
/// /// the sender object /// the event args public delegate void PositionUpdateMessageRecieved(Object sender, PositionUpdateEventArgs e); /// /// The event which fires when a client sends a position message /// public PositionUpdateMessageRecieved PositionUpdateMessageEvent { get; set; } /// /// keeps track of the number of the connected clients /// public int ClientCount { get { return clientCount; } } /// /// A getter for this class instance. /// /// only instance. public static NetworkManager NetworkManagerInstance { get { return networkManagerInstance; } } private NetworkManager() {} /// Starts the game server and holds this thread alive /// public void StartServer() { //Bind the mainSocket to the server IP address and port mainSocket.Bind(new IPEndPoint(IPAddress.Any, port)); //The server starts to listen on the binded socket with max connection queue //1024 mainSocket.Listen(1024); //Start accepting clients asynchronously mainSocket.BeginAccept(OnClientConnected, null); //Wait until there is a client wants to connect resetEvent.WaitOne(); } /// /// Receives connections of new clients and fire the NewClientConnected event /// private void OnClientConnected(IAsyncResult asyncResult) { Interlocked.Increment(ref clientCount); ClientInfo newClient = new ClientInfo { WorkerSocket = mainSocket.EndAccept(asyncResult), PlayerId = clientCount }; //Add the new client to the hashtable and increment the number of clients connectedClients.Add(newClient.PlayerId, newClient); //fire the new client event informing that a new client is connected to the server if (NewClientEvent != null) { NewClientEvent(this, System.EventArgs.Empty); } newClient.WorkerSocket.BeginReceive(newClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), newClient); //Start accepting clients asynchronously again mainSocket.BeginAccept(OnClientConnected, null); } /// Waits for the upcoming messages from different clients and fires the proper event according to the packet type. /// /// private void WaitForData(IAsyncResult asyncResult) { ClientInfo sendingClient = null; try { //Take the client information from the asynchronous result resulting from the BeginReceive sendingClient = asyncResult.AsyncState as ClientInfo; // If client is disconnected, then throw a socket exception // with the correct error code. if (!IsConnected(sendingClient.WorkerSocket)) { throw new SocketException(ClientDisconnectedErrorCode); } //End the pending receive request sendingClient.WorkerSocket.EndReceive(asyncResult); //Fire the appropriate event FireMessageTypeEvent(sendingClient.ConvertBytesToPacket() as BasePacket); // Begin receiving data from this client sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), sendingClient); } catch (SocketException e) { if (e.ErrorCode == ClientDisconnectedErrorCode) { // Close the socket. if (sendingClient.WorkerSocket != null) { sendingClient.WorkerSocket.Close(); sendingClient.WorkerSocket = null; } // Remove it from the hash table. 
connectedClients.Remove(sendingClient.PlayerId); if (ClientDisconnectedEvent != null) { ClientDisconnectedEvent(this, new ClientDisconnectedEventArgs(sendingClient.PlayerId)); } } } catch (Exception e) { // Begin receiving data from this client sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), sendingClient); } } /// /// Broadcasts the input message to all the connected clients /// /// public void BroadcastMessage(BasePacket message) { byte[] bytes = message.ConvertToBytes(); foreach (ClientInfo client in connectedClients.Values) { client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client); } } /// /// Sends the input message to the client specified by his ID. /// /// /// The message to be sent. /// The id of the client to receive the message. public void SendToClient(BasePacket message, int id) { byte[] bytes = message.ConvertToBytes(); (connectedClients[id] as ClientInfo).WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, connectedClients[id]); } private void SendAsync(IAsyncResult asyncResult) { ClientInfo currentClient = (ClientInfo)asyncResult.AsyncState; currentClient.WorkerSocket.EndSend(asyncResult); } /// Fires the event depending on the type of received packet /// /// The received packet. void FireMessageTypeEvent(BasePacket packet) { switch (packet.MessageType) { case MessageType.PositionUpdateMessage: if (PositionUpdateMessageEvent != null) { PositionUpdateMessageEvent(this, new PositionUpdateEventArgs(packet as PositionUpdatePacket)); } break; } } } The events fired are handled in a different class, here are the event handling code for the PositionUpdateMessage (Other handlers are irrelevant): private readonly Hashtable onlinePlayers = new Hashtable(); /// /// Constructor that creates a new instance of the GameController class. /// private GameController() { //Start the server server = new Thread(networkManager.StartServer); server.Start(); //Create an event handler for the NewClientEvent of networkManager networkManager.PositionUpdateMessageEvent += OnPositionUpdateMessageReceived; } /// /// this event handler is called when a client asks for movement. /// private void OnPositionUpdateMessageReceived(object sender, PositionUpdateEventArgs e) { Point currentLocation = ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position; Point locationRequested = e.PositionUpdatePacket.Position; ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position = locationRequested; // Broadcast the new position networkManager.BroadcastMessage(new PositionUpdatePacket { Position = locationRequested, PlayerId = e.PositionUpdatePacket.PlayerId }); }
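    Not a definitive diagnosis, but a jump from ~17 ms to ~280 ms that only appears once small request/reply packets start interleaving is the classic signature of Nagle's algorithm combining with delayed ACKs, rather than of Socket's thread safety. A minimal sketch of per-socket tuning worth trying (whether it is the actual cause here is an assumption):

        // Sketch only: tuning for small, latency-sensitive game messages.
        using System.Net.Sockets;

        public static class SocketTuning
        {
            public static void ConfigureGameSocket(Socket socket)
            {
                socket.NoDelay = true;               // disable Nagle so small packets go out immediately
                socket.SendBufferSize = 8 * 1024;    // illustrative values, not measured
                socket.ReceiveBufferSize = 8 * 1024;
            }
        }

    It could be called in OnClientConnected right after EndAccept, on newClient.WorkerSocket, and on the client side as well if the clients use Socket directly.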

