Search Results

Search found 7081 results on 284 pages for 'idle processing'.

  • Finding it Hard to Deliver Right Customer Experience: Think BPM!

    - by Ajay Khanna
    Our relationship with our customers is not just a single interaction, and we should not treat it like one. A customer's relationship with a vendor is like a journey: it starts well before the customer makes a purchase and lasts long after it. The journey may start with the customer researching a product, lead to the eventual purchase, and continue with support or service needs for the product. A typical customer journey spans all of these stages. As you may notice, customers tend to use multiple channels to interact with a company throughout their journey. They also expect a consistent experience, no matter which interaction channel they choose. Customers do not like to repeat information they have already provided, and they expect companies to remember their preferences and offer them relevant products and services. If the company fails to meet this expectation, customers will not only abandon the purchase and go to a competitor but may also influence others' purchase decisions. Gone are the days when word of mouth was the only medium and a customer could influence "six" others. This is the age of social media, and a customer's good or bad experience, especially a bad one, gets highly amplified and may influence hundreds of others. Challenges that B2C companies face today include:
    Delivering a consistent experience: Delivering a consistent experience is challenging because of fragmented data, disjointed systems, and siloed multichannel interactions. Customers tend to get different service quality depending on whether they use the web, the phone, or the store. They get different responses from different service agents, or inconsistent answers depending on whether they call the sales or the service group in the company. Such inconsistent experiences result in lower customer satisfaction or NPS (net promoter score) numbers.
    Increasing Revenue: To stay competitive, companies frequently introduce new products and services. Delay in launching such offerings has a significant impact on revenue realization. In addition to new product revenue, there are multiple opportunities to up-sell and cross-sell that affect the bottom line. If companies cannot identify such opportunities, bring a product to market quickly, or offer the right product to the right customer at the right time, significant loss of revenue may occur.
    Ensuring Compliance: Companies must comply with ever-changing regulations, whether these concern Know Your Customer (KYC), export/import rules, or taxation policies. In addition to government agencies, companies also need to comply with the SLAs they have committed to their customers. A lapse in meeting any of these requirements may lead to serious fines, penalties, and loss of business. Companies have to make sure that they are in compliance with all such regulations and SLA commitments at any given time.
    With the advent of social networks and mobile technology, companies not only need to focus on process efficiency but also on customer engagement. Improving engagement means delivering the experience the customer expects and interacting with the customer at the right time over the right channel. Customers expect to be able to contact you via any channel of their choice (web, email, chat, mobile, social media) and to purchase via any viable channel (web, phone, store, mobile). Customers expect companies to understand their particular needs and remember their preferences on repeated visits.
    To deliver such an integrated, consistent, and contextual experience, the power of BPM is a must. Your company may be organized into departments like Marketing, Sales, and Service. You may hold prospect data in SFA, order information in ERP, and customer issues in CRM. However, the experience delivered to the customer must not be constrained by your legacy systems. BPM helps in designing the right experience for the right customer and integrates all the underlying channels, systems, and applications to make sure the right information is delivered to the right knowledge worker, or to the customer, every single time. Orchestrating information across all systems (MDM, CRM, ERP), departments (commerce, merchandising, marketing, service), and channels (email, phone, web, social) is the key, and that is what BPM delivers. In addition to orchestrating systems and channels for consistency, BPM also provides capabilities for analysis and decision management. By using data from historical transactions, social media, and other systems, users can determine customer preferences, customer value, and churn propensity. This information, in context, is then used while making a decision at a process step. A real-time decision management system can also suggest the right up-sell or cross-sell offers, discounts, or next-best-action steps for a particular customer. Timely action on customer issues or requests is also a key tenet of a good customer experience. BPM's complex event processing capabilities help companies take proactive action before issues escalate. A BPM system can be designed to listen for certain event patterns, deduce customer situations from them (credit card stolen, baggage lost, change of address), and perform triage before the situation goes out of control. If such a situation arises, you can send alerts to the right people or immediately invoke corrective actions. Last but not least, one of BPM's key values is driving continuous improvement. Learning about customers' past experiences, interactions, and social conversations provides valuable insight. Such insight can be used to improve products, customer-facing processes, and the customer experience. You may take these insights as input to design better, more efficient, and more customer-friendly sales, contact center, or self-service processes. If customer experience is important for your business, make sure you have incorporated BPM as part of your strategy to design, orchestrate, and improve your customer-facing processes.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
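    To make the partition-size point concrete, here is a back-of-the-envelope sketch (all figures are illustrative assumptions, not numbers from this post): shrinking the partition count multiplies the data a single-threaded partition recovery has to move by roughly the same factor.
        // Rough sizing sketch in Java; every value below is an assumed example.
        public class PartitionSizingEstimate {
            public static void main(String[] args) {
                long primaryDataBytes = 512L * 1024 * 1024 * 1024; // assume 512 GB of primary data
                int typicalPartitionCount = 8191;                  // an assumed "normal" partition count
                int reducedPartitionCount = 1021;                  // an assumed reduced count for a huge cluster

                long perPartitionMb = (primaryDataBytes / typicalPartitionCount) >> 20;
                long perPartitionReducedMb = (primaryDataBytes / reducedPartitionCount) >> 20;

                // Recovery of one partition runs on one service thread, so if recovery
                // throughput is roughly constant, its duration scales with partition size.
                System.out.printf("Per-partition size: %d MB vs %d MB (~%dx longer recovery)%n",
                        perPartitionMb, perPartitionReducedMb,
                        perPartitionReducedMb / Math.max(perPartitionMb, 1));
            }
        }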

    Read the article

  • Appropriate programming design questions.

    - by Edward
    I have a few questions on good programming design. I'll first describe the project I'm building so you are better equipped to help me out. I am coding a Remote Assistance Tool similar to TeamViewer, Microsoft Remote Desktop, and CrossLoop. It will incorporate concepts like UDP networking (using the Lidgren networking library), NAT traversal (since many computers are invisible behind routers nowadays), and mirror drivers (using DFMirage's mirror driver, http://www.demoforge.com/dfmirage.htm, for real-time screen grabbing on the remote computer). The program has a client-server architecture, but I made only one program with both client and server functionality. That way, when the user runs my program, they can switch between giving assistance and receiving assistance without having to download a separate client or server module. I have a Windows Form that allows the user to choose between giving assistance and receiving assistance. I have another Windows Form for a file explorer module, another for a chat module, another for a registry editor module, and another for the live control module. So I've got a Form for each module, which raises the first question:
    1. Should I process module-specific commands inside the code of the respective Windows Form? Meaning, let's say I get a command with some data that enumerates the remote user's files for a specific directory. Obviously, I would have to update this on the File Explorer Windows Form and add the entries to the ListView. Should I be processing this code inside the Windows Form, though? Or should I be handling this in another class (although I have to eventually pass the data to the Form to draw, of course)? Or is it a hybrid, in which I process most of the data in another class and pass the final result to the Form to draw? So I've got 5-6 forms, one for each module. The user starts up my program, enters the remote machine's ID (not IP, ID, because we are registering with an intermediary server to enable NAT traversal) and their password, and connects. Now let's suppose the connection is successful. Then the user is presented with a form with all the different modules. He can open up a File Explorer, or he can mess with the Registry Editor, or he can choose to Chat with his buddy. So now the program is sort of idle, just waiting for the user to do something. If the user opens up Live Control, then the program will be spending most of its time receiving packets from the remote machine and drawing them to the form to provide a 'live' view.
    2. A spin-off of question #1: how would I pass module-specific commands to their respective Windows Forms? What I mean is, I have a class like "NetworkHandler.cs" that checks for messages from the remote machine. NetworkHandler.cs is a static class, globally accessible. So let's say I get a command that enumerates the remote user's files for a specific directory. How would I "give" that command to the File Explorer Form? I was thinking of making an OnCommandReceivedEvent inside NetworkHandler and having each form register to that event. When the NetworkHandler received a command, it would raise the event, all forms would check whether it was relevant, and the appropriate form would take action. Is this an appropriate/the best solution available?
    3. The networking library I'm using, Lidgren, provides two options for checking networking messages. One can either poll ReadMessage() to return null or a message, or one can use an AutoResetEvent OnMessageReceived (I'm guessing this works like an event). Which one is more appropriate?
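    For question 2, the publish/subscribe idea described above can be sketched roughly as follows (a minimal illustration in Java; the class and command names are made up and not from the actual project): the network handler raises one "command received" event, every module registers a listener, and each listener acts only on the command types it owns.
        import java.util.List;
        import java.util.concurrent.CopyOnWriteArrayList;

        interface CommandListener {
            void onCommand(String commandType, byte[] payload);
        }

        final class NetworkHandler {
            private static final List<CommandListener> LISTENERS = new CopyOnWriteArrayList<>();

            static void addListener(CommandListener listener) { LISTENERS.add(listener); }

            // Called by the receive loop whenever a complete command has been read.
            static void dispatch(String commandType, byte[] payload) {
                for (CommandListener listener : LISTENERS) {
                    listener.onCommand(commandType, payload); // each module ignores irrelevant types
                }
            }
        }

        final class FileExplorerModule implements CommandListener {
            FileExplorerModule() { NetworkHandler.addListener(this); }

            @Override
            public void onCommand(String commandType, byte[] payload) {
                if (!"FILE_LIST".equals(commandType)) return; // only react to its own commands
                // Parse the payload off the UI thread, then hand the finished entries to the form to draw.
            }
        }
    This also lines up with the "hybrid" option in question 1: the parsing can live in a non-UI class, and only the final, display-ready result is passed to the form.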

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message.  The manual file drop is to allow the SQL  Job to call a "File Copy" SSIS step to copy the trigger file for the next process and allows SQL  Job to be linked back into BizTalk processing. The Process Trigger XML looks like the following.  It is basically the configuration hub of the business process <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">   <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>   <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>   <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>   <ns0:EmailFrom>[email protected]</ns0:EmailFrom>   <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>   <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>   <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>   <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>   <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>   <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>   <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>   <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>   <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath> </ns0:MsgSchedulerTriggerSQLJobReceive>   Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration.  The first decision made is on the "IsPermissionRequired" field.     If permission is required (IsPermissionRequired=="YES"), BizTalk will use the configuration info in the XML trigger to format the email message.  Here is the snippet of how the email message is constructed. SQLJobEmailMessage.EmailBody     = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(         MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +         "<br><br>" +         "By moving the file, you are either giving permission to the process, or disapprove of the process." +         "<br>" +         "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +         "\"<br>" +         "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +         "<br><br>" +         "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +         "<br><br>" +         "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +         "<br><br>" +         "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +         "<br>" +         "Thank you!"     
    );
        SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
        SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
        SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
        SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
        SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
        SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
        SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
        SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;
    After the permission request email is sent, the next step is to generate the actual Permission Trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field.
        <?xml version="1.0" encoding="utf-8"?>
        <ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething">
          <SQLJobName>Data Push</SQLJobName>
          <CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid>
        </ns0:SQLJobAuthorizationTrigger>
    The end user (the human intervention piece) will either grant permission for this process or deny it by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape waits for either response. The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, the orchestration simply sends out a notification. If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used. The synchronous part is not really synchronous, but a loop timer that checks the status within the calling stored procedure (for more information, check out my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, then the stored procedure starts the job and BizTalk sends out an email. And of course, there is some error notification. Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape, with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data cleanup process so that the SQL Job can be re-tried. There are many possibilities here.
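    As an aside, the approval-by-file-move idea is easy to prototype outside BizTalk as well. The sketch below (plain Java, with made-up local paths standing in for the UNC shares above; it is not part of the orchestration) simply waits for the correlation trigger file to appear in either the Approved or the NotApproved folder and reports the decision.
        import java.nio.file.*;

        public class ApprovalWatcher {
            private static final Path APPROVED = Paths.get("/data/approved");        // assumed path
            private static final Path NOT_APPROVED = Paths.get("/data/notapproved"); // assumed path

            // Blocks until the named trigger file is moved into one of the two folders;
            // returns true for Approved, false for NotApproved.
            public static boolean awaitDecision(String triggerFileName) throws Exception {
                try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
                    APPROVED.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
                    NOT_APPROVED.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
                    while (true) {
                        WatchKey key = watcher.take(); // wait for a file to show up
                        for (WatchEvent<?> event : key.pollEvents()) {
                            Path created = (Path) event.context();
                            if (created.getFileName().toString().equals(triggerFileName)) {
                                return APPROVED.equals(key.watchable());
                            }
                        }
                        key.reset();
                    }
                }
            }
        }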

    Read the article

  • Trouble with AABB collision response and physics

    - by WCM
    I have been racking my brain trying to figure out a problem I am having with physics and basic AABB collision response. I am fairly close, as the physics are mostly right. Gravity feels good and movement is solid. The issue I am running into is that when I land on the test block in my project, I can jump off of it most of the time. If I repeatedly jump in place, I will eventually get stuck one or two pixels below the surface of the test block. If I try to jump, I can become free of the other block, but it will happen again a few jumps later. I feel like I am missing something really obvious with this. I have two helper functions: one detects a collision, and one returns a vector for the overlap of the two rectangle bounding boxes. I have a single update method that processes the physics and collision for the entity. I feel like I am missing something very simple, like the ordering of the physics vs. collision response handling. Any thoughts or help are appreciated. I apologize for the format of the code; 'tis prototype code mostly.
    The collision detection function:
        public static bool Collides(Rectangle source, Rectangle target)
        {
            if (source.Right < target.Left || source.Bottom < target.Top ||
                source.Left > target.Right || source.Top > target.Bottom)
            {
                return false;
            }
            return true;
        }
    The overlap function:
        public static Vector2 GetMinimumTranslation(Rectangle source, Rectangle target)
        {
            Vector2 mtd = new Vector2();
            Vector2 amin = source.Min();
            Vector2 amax = source.Max();
            Vector2 bmin = target.Min();
            Vector2 bmax = target.Max();

            float left = (bmin.X - amax.X);
            float right = (bmax.X - amin.X);
            float top = (bmin.Y - amax.Y);
            float bottom = (bmax.Y - amin.Y);

            if (left > 0 || right < 0) return Vector2.Zero;
            if (top > 0 || bottom < 0) return Vector2.Zero;

            if (Math.Abs(left) < right) mtd.X = left; else mtd.X = right;
            if (Math.Abs(top) < bottom) mtd.Y = top; else mtd.Y = bottom;

            // 0 the axis with the largest mtd value.
            if (Math.Abs(mtd.X) < Math.Abs(mtd.Y)) mtd.Y = 0; else mtd.X = 0;

            return mtd;
        }
    The update routine (gravity = 0.001f, jumpHeight = 0.35f, moveAmount = 0.15f):
        public void Update(GameTime gameTime)
        {
            Acceleration.Y = gravity;
            Position += new Vector2(
                (float)(movement * moveAmount * gameTime.ElapsedGameTime.TotalMilliseconds),
                (float)(Velocity.Y * gameTime.ElapsedGameTime.TotalMilliseconds));
            Velocity.Y += Acceleration.Y;
            Vector2 previousPosition = new Vector2((int)Position.X, (int)Position.Y);

            KeyboardState keyboard = Keyboard.GetState();
            movement = 0;
            if (keyboard.IsKeyDown(Keys.Left)) { movement -= 1; }
            if (keyboard.IsKeyDown(Keys.Right)) { movement += 1; }

            if (Position.Y + 16 > GameClass.Instance.GraphicsDevice.Viewport.Height)
            {
                Velocity.Y = 0;
                Position = new Vector2(Position.X, GameClass.Instance.GraphicsDevice.Viewport.Height - 16);
                IsOnSurface = true;
            }

            if (Collision.Collides(BoundingBox, GameClass.Instance.block.BoundingBox))
            {
                Vector2 mtd = Collision.GetMinimumTranslation(BoundingBox, GameClass.Instance.block.BoundingBox);
                Position += mtd;
                Velocity.Y = 0;
                IsOnSurface = true;
            }

            if (keyboard.IsKeyDown(Keys.Space) && !previousKeyboard.IsKeyDown(Keys.Space))
            {
                if (IsOnSurface)
                {
                    Velocity.Y = -jumpHeight;
                    IsOnSurface = false;
                }
            }

            previousKeyboard = keyboard;
        }
    Here is also a full download of the project: https://www.box.com/s/3rkdtbso3xgfgc2asawy
    P.S. I know that I could do this with the XNA Platformer Starter Kit algorithm, but it has some deep flaws that I am going to try to live without. I'd rather go the route of collision response via an overlap function.
Thanks for any and all insight!

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The list below highlights two things: firstly, where you can add your own custom logic via groovy code (noted at the end of each entry), and secondly, when you might use each particular feature. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.
    Field Triggers (groovy: yes) - React to run-time data changes. Only fired when the field is changed and upon submit.
    Object Triggers (groovy: yes) - To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.
      UI trigger points:
        After Create - fires when a new object record is created. Commonly used to set default values for fields.
        Before Modify - fires when the end-user tries to modify a field value. Could be used for generic warnings or extra security logic.
        Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
        Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
      Database trigger points:
        Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
        After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
        Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
        After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
        Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
        After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
        After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
        After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.
    Field Validation (groovy: yes) - Displays a user-entered error message based on groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.
    Object Validation (groovy: yes) - Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.
    Object Workflows (groovy: yes) - All Object Workflows are fired upon either record creation or update, along with the option of adding a custom groovy firing condition.
        Field Updates (groovy: yes) - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs), plus the value field permits groovy logic entry.
        E-Mail Notification (groovy: no) - sends an email notification to specified users/roles. Templates support using run-time value tokens and rich text.
        Task Creation (groovy: no) - for adding standard tasks for use in the worklist functionality.
        Outbound Message (groovy: no) - will create and send an XML payload of the related object SDO to a specified endpoint.
        Business Process Flow (groovy: no) - intended for approval using the seeded process, however it can also trigger custom BPMN flows.
    Global Functions (groovy: yes) - Utility functions that can be called from any groovy code in Application Composer (across applications).
    Object Functions (groovy: yes) - Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).
    Add Custom Fields (groovy: yes) - When adding custom fields there are a few places you can include groovy logic.
        Default Value (groovy: yes) - to add logic within setting the default value when new records are entered.
        Conditionally Updateable (groovy: yes) - to add logic to set the field to read-only or not.
        Conditionally Required (groovy: yes) - to add logic to set the field to required or not.
        Formula Field (groovy: yes) - used to provide a new aggregate field that is entirely based on groovy logic and other field values.
    Simplified UI Layouts - Advanced Expressions (groovy: yes) - Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.
    Related References: This Blog: Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide

    Read the article

  • reloading page while an ajax request in progress gives empty response and status as zero

    - by Jayapal Chandran
    Hi, the browser is Firefox 3.0.10. I am requesting a page using Ajax. While the response is still in progress (readyState less than 4), I try to reload the page. What happens is that the request ends, giving an empty response. I used alert() to see what string had been given as the response text, and I assume that by this time readyState 4 has been reached. Why is it an empty string? When I alert xmlhttpobject.status, it displays 0. When I alert xmlhttpobject.statusText, an exception occurs stating NOT AVAILABLE. When I read the document at http://www.devx.com/webdev/Article/33024/0/page/2, it said that for readyState 3 and 4 both status and statusText are available, but when I tested, only status was available and not statusText. Here is a sample: consider that I have requested a page and my callback function is as follows.
        function cb(rt) {
            if (rt.readyState == 4) {
                alert(rt.status);
                alert(rt.statusText); // which throws an exception
            }
        }
    And my server-side script is as follows:
        sleep(30);
        // ...then a little drop-down code is flushed
    Besides this, I noticed the following. Assume again that I am requesting the above script using Ajax. There will be an idle time until the 30 seconds are over. Before those 30 seconds are up, I press refresh. I get xmlhttpobject.status as 0, but the browser still does not reload the page until the 30 seconds have passed. Why? So this is what happens when I refresh a page before an Ajax request is complete: the status value is set to zero and the readyState is set to 4, but the page still waits for the response from the server to end. What is happening?
    The reason I am facing something like this is as follows. Whenever I do an Ajax request, if the process succeeds (like inserting or deleting something), I pop up a div stating that it updated successfully, and I reload the page. But if there is any error, I do not reload the page; instead I just alert that it was unable to process the request. If the user reloads the page before any such request is complete, I get an empty response, which by my calculation means a server error. So I was debugging the Ajax response to filter out the case where the connection was interrupted because the user pressed reload; in that case I don't want to display "unable to process this request" when the user reloads the page before the request has completed. Oh, a long story. It is a long description so that I can make the experts understand my doubt. So that is what I want from the above; any type of answer would clear my mind, or I would like to say, all types of answers.
    EDIT: 19 Dec. If I do not get any correct answer, I will delete this question and rewrite it with examples; otherwise I will accept after experimenting.

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
    I have been working with the JCP in various roles since EJB 3/Java EE 5 (much of it on my own time), eventually culminating in my decision to accept my current role at Oracle (despite it's inevitable set of unique challenges, a role I find by and large positive and fulfilling). During these years, it has always been clear to me that pretty much everyone in the JCP genuinely cares about openness, feedback and developer participation. Perhaps the most visible sign to date of this high regard for grassroots level input is a survey on Java EE 7 gathered a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification including what APIs to include in the standard. When we started the survey, I don't think anyone was certain what the level of participation from developers would really be. I also think everyone was pleasantly surprised that a large number of developers (around 1100) took the time out to vote on these very important issues that could impact their own professional life. And it wasn't just a matter of the quantity of responses. I was particularly impressed with the quality of the comments made through the survey (some of which I'll try to do justice to below). With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to thank everyone that took the survey once again for their thoughts and let you know what the impact of your voice actually was. As an aside, you may be happy to know that we are working hard behind the scenes to try to put together a similar survey to help kick off the agenda for Java EE 8 (although this is by no means certain). I'll break things down by the questions asked in the survey, the responses and the resulting change in the specification. APIs to Add to Java EE 7 Full/Web Profile The first question in the survey asked which of four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web profile respectively. Developers by and large wanted all the new APIs added to the full platform. The comments expressed particularly strong support for WebSocket and JCache. Others expressed dissatisfaction over the lack of a JSON binding (as opposed to JSON processing) API. WebSocket, JSON-P and JBatch are now part of Java EE 7. In addition, the long-awaited Java EE Concurrency Utilities API was also included in the Full Profile. Unfortunately, JCache was not finalized in time for Java EE 7 and the decision was made not to hold up the Java EE release any longer. JCache continues to move forward strongly and will very likely be included in Java EE 8 (it will be available much sooner than Java EE 8 to boot). An emergent standard for JSON-B is also a strong possibility for Java EE 8. When it came to the Web Profile, developers were supportive of adding WebSocket and JSON-P, but not JBatch and JCache. Both WebSocket and JSON-P are now part of the Web Profile, now also including the already popular JAX-RS API. Enabling CDI by Default The second question asked whether CDI should be enabled in Java EE by default. The overwhelming majority of developers supported the default enablement of CDI. In addition, developers expressed a desire for better CDI/Java EE alignment (with regards to EJB and JSF in particular). Some developers expressed legitimate concerns over the performance implications of enabling CDI globally as well as the potential conflict with other JSR 330 implementations like Spring and Guice. 
CDI is enabled by default in Java EE 7. Respecting the legitimate concerns, CDI 1.1 was very careful to add additional controls around component scanning. While a lot of work was done in Java EE 6 and Java EE 7 around CDI alignment, further alignment is under serious consideration for Java EE 8. Consistent Usage of @Inject The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations (e.g. @BatchContext). A majority of developers wanted consistent usage of @Inject. The comments again reflected a strong desire for CDI/Java EE alignment. A lot of emphasis in Java EE 7 was put into using @Inject consistently. For example, the JBatch specification is focused on using @Inject wherever possible. JAX-RS remains an exception with it's existing custom injection annotations. However, the JAX-RS specification leads understand the importance of eventual convergence, hopefully in Java EE 8. Expanding the Use of @Stereotype The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE beyond just CDI. A solid majority of developers supported the idea of making @Stereotype more universal in Java EE. The comments maintained the general theme of strong support for CDI/Java EE alignment Unfortunately, there was not enough time and resources in Java EE 7 to implement this fairly pervasive feature. However, it remains a serious consideration for Java EE 8. Expanding Interceptor Use The final set of questions was about expanding interceptors further across Java EE. Developers strongly supported the concept. Along with injection, interceptors are now supported across all Java EE 7 components including Servlets, Filters, Listeners, JAX-WS endpoints, JAX-RS resources, WebSocket endpoints and so on. I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Participating in these sorts of surveys is of course just one way of contributing to Java EE. Another great way to stay involved is the Adopt-A-JSR Program. A large number of developers are already participating through their local JUGs. You could of course become a Java EE JSR expert group member or observer. You should stay tuned to The Aquarium for the progress of Java EE 8 JSRs if that's something you want to look into...
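    As a small illustration of the consistent-injection style discussed above (a minimal sketch; the bean and resource names are invented, and the two classes would normally live in separate files), a plain CDI bean can be injected into a JAX-RS resource using the same javax.inject.Inject annotation used across the platform, and since CDI is enabled by default in Java EE 7 no extra descriptor is needed for this to work.
        import javax.enterprise.context.ApplicationScoped;
        import javax.inject.Inject;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;

        @ApplicationScoped
        public class GreetingService {            // an ordinary CDI bean, no EJB required
            public String greet(String name) {
                return "Hello, " + name;
            }
        }

        @Path("greetings")
        class GreetingResource {                  // JAX-RS resource; the container injects the bean
            @Inject
            private GreetingService service;

            @GET
            @Produces("text/plain")
            public String get() {
                return service.greet("Java EE 7");
            }
        }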

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different from on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion, then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However, it's not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale for the purpose of this discussion].
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow overtime, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.

    Read the article

  • Java Port Socket Programming Error

    - by atrus-darkstone
    Hi- I have been working on a java client-server program using port sockets. The goal of this program is for the client to take a screenshot of the machine it is running on, break the RGB info of this image down into integers and arrays, then send this info over to the server, where it is reconstructed into a new image file. However, when I run the program I am experiencing the following two bugs: The first number recieved by the server, no matter what its value is according to the client, is always 49. The client only sends(or the server only receives?) the first value, then the program hangs forever. Any ideas as to why this is happening, and what I can do to fix it? The code for both client and server is below. Thanks! CLIENT: import java.awt.*; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.awt.image.BufferedImage; import java.io.*; import java.net.Socket; import java.text.SimpleDateFormat; import java.util.*; import javax.swing.*; import javax.swing.Timer; public class ViewerClient implements ActionListener{ private Socket vSocket; private BufferedReader in; private PrintWriter out; private Robot robot; // static BufferedReader orders = null; public ViewerClient() throws Exception{ vSocket = null; in = null; out = null; robot = null; } public void setVSocket(Socket vs) { vSocket = vs; } public void setInput(BufferedReader i) { in = i; } public void setOutput(PrintWriter o) { out = o; } public void setRobot(Robot r) { robot = r; } /*************************************************/ public Socket getVSocket() { return vSocket; } public BufferedReader getInput() { return in; } public PrintWriter getOutput() { return out; } public Robot getRobot() { return robot; } public void run() throws Exception{ int speed = 2500; int pause = 5000; Timer timer = new Timer(speed, this); timer.setInitialDelay(pause); // System.out.println("CLIENT: Set up timer."); try { setVSocket(new Socket("Alex-PC", 4444)); setInput(new BufferedReader(new InputStreamReader(getVSocket().getInputStream()))); setOutput(new PrintWriter(getVSocket().getOutputStream(), true)); setRobot(new Robot()); // System.out.println("CLIENT: Established connection and IO ports."); // timer.start(); captureScreen(nameImage()); }catch(Exception e) { System.err.println(e); } } public void captureScreen(String fileName) throws Exception{ Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize(); Rectangle screenRectangle = new Rectangle(screenSize); BufferedImage image = getRobot().createScreenCapture(screenRectangle); int width = image.getWidth(); int height = image.getHeight(); int[] pixelData = new int[(width * height)]; image.getRGB(0,0, width, height, pixelData, width, height); byte[] imageData = new byte[(width * height)]; String fromServer = null; if((fromServer = getInput().readLine()).equals("READY")) { sendWidth(width); sendHeight(height); sendArrayLength((width * height)); sendImageInfo(fileName); sendImageData(imageData); } /* System.out.println(imageData.length); String fromServer = null; for(int i = 0; i < pixelData.length; i++) { imageData[i] = ((byte)pixelData[i]); } System.out.println("CLIENT: Pixel data successfully converted to byte data."); System.out.println("CLIENT: Waiting for ready message..."); if((fromServer = getInput().readLine()).equals("READY")) { System.out.println("CLIENT: Ready message recieved."); getOutput().println("SENDING ARRAY LENGTH..."); System.out.println("CLIENT: Sending array length..."); System.out.println("CLIENT: " + imageData.length); 
getOutput().println(imageData.length); System.out.println("CLIENT: Array length sent."); getOutput().println("SENDING IMAGE..."); System.out.println("CLIENT: Sending image data..."); for(int i = 0; i < imageData.length; i++) { getOutput().println(imageData[i]); } System.out.println("CLIENT: Image data sent."); getOutput().println("SENDING IMAGE WIDTH..."); System.out.println("CLIENT: Sending image width..."); getOutput().println(width); System.out.println("CLIENT: Image width sent."); getOutput().println("SENDING IMAGE HEIGHT..."); System.out.println("CLIENT: Sending image height..."); getOutput().println(height); System.out.println("CLIENT: Image height sent..."); getOutput().println("SENDING IMAGE INFO..."); System.out.println("CLIENT: Sending image info..."); getOutput().println(fileName); System.out.println("CLIENT: Image info sent."); getOutput().println("FINISHED."); System.out.println("Image data sent successfully."); } if((fromServer = getInput().readLine()).equals("CLOSE DOWN")) { getOutput().close(); getInput().close(); getVSocket().close(); } */ } public String nameImage() throws Exception { String dateFormat = "yyyy-MM-dd HH-mm-ss"; Calendar cal = Calendar.getInstance(); SimpleDateFormat sdf = new SimpleDateFormat(dateFormat); String fileName = sdf.format(cal.getTime()); return fileName; } public void sendArrayLength(int length) throws Exception { getOutput().println("SENDING ARRAY LENGTH..."); getOutput().println(length); } public void sendWidth(int width) throws Exception { getOutput().println("SENDING IMAGE WIDTH..."); getOutput().println(width); } public void sendHeight(int height) throws Exception { getOutput().println("SENDING IMAGE HEIGHT..."); getOutput().println(height); } public void sendImageData(byte[] imageData) throws Exception { getOutput().println("SENDING IMAGE..."); for(int i = 0; i < imageData.length; i++) { getOutput().println(imageData[i]); } } public void sendImageInfo(String info) throws Exception { getOutput().println("SENDING IMAGE INFO..."); getOutput().println(info); } public void actionPerformed(ActionEvent a){ String message = null; try { if((message = getInput().readLine()).equals("PROCESSING...")) { if((message = getInput().readLine()).equals("IMAGE RECIEVED SUCCESSFULLY.")) { captureScreen(nameImage()); } } }catch(Exception e) { JOptionPane.showMessageDialog(null, "Problem: " + e); } } } SERVER: import java.awt.image.BufferedImage; import java.io.*; import java.net.*; import javax.imageio.ImageIO; /*IMPORTANT TODO: * 1. CLOSE ALL STREAMS AND SOCKETS WITHIN CLIENT AND SERVER! * 2. 
PLACE MAIN EXEC CODE IN A TIMED WHILE LOOP TO SEND FILE EVERY X SECONDS * */ public class ViewerServer { private ServerSocket vServer; private Socket vClient; private PrintWriter out; private BufferedReader in; private byte[] imageData; private int width; private int height; private String imageInfo; private int[] rgbData; private boolean active; public ViewerServer() throws Exception{ vServer = null; vClient = null; out = null; in = null; imageData = null; width = 0; height = 0; imageInfo = null; rgbData = null; active = true; } public void setVServer(ServerSocket vs) { vServer = vs; } public void setVClient(Socket vc) { vClient = vc; } public void setOutput(PrintWriter o) { out = o; } public void setInput(BufferedReader i) { in = i; } public void setImageData(byte[] imDat) { imageData = imDat; } public void setWidth(int w) { width = w; } public void setHeight(int h) { height = h; } public void setImageInfo(String im) { imageInfo = im; } public void setRGBData(int[] rd) { rgbData = rd; } public void setActive(boolean a) { active = a; } /***********************************************/ public ServerSocket getVServer() { return vServer; } public Socket getVClient() { return vClient; } public PrintWriter getOutput() { return out; } public BufferedReader getInput() { return in; } public byte[] getImageData() { return imageData; } public int getWidth() { return width; } public int getHeight() { return height; } public String getImageInfo() { return imageInfo; } public int[] getRGBData() { return rgbData; } public boolean getActive() { return active; } public void run() throws Exception{ connect(); setActive(true); while(getActive()) { recieve(); } close(); } public void recieve() throws Exception{ String clientStatus = null; int clientData = 0; // System.out.println("SERVER: Sending ready message..."); getOutput().println("READY"); // System.out.println("SERVER: Ready message sent."); if((clientStatus = getInput().readLine()).equals("SENDING IMAGE WIDTH...")) { setWidth(getInput().read()); System.out.println("Width: " + getWidth()); } if((clientStatus = getInput().readLine()).equals("SENDING IMAGE HEIGHT...")) { setHeight(getInput().read()); System.out.println("Height: " + getHeight()); } if((clientStatus = getInput().readLine()).equals("SENDING ARRAY LENGTH...")) { clientData = getInput().read(); setImageData(new byte[clientData]); System.out.println("Array length: " + clientData); } if((clientStatus = getInput().readLine()).equals("SENDING IMAGE INFO...")) { setImageInfo(getInput().readLine()); System.out.println("Image Info: " + getImageInfo()); } if((clientStatus = getInput().readLine()).equals("SENDING IMAGE...")) { for(int i = 0; i < getImageData().length; i++) { getImageData()[i] = ((byte)getInput().read()); } } if((clientStatus = getInput().readLine()).equals("FINISHED.")) { getOutput().println("PROCESSING..."); setRGBData(new int[getImageData().length]); for(int i = 0; i < getRGBData().length; i++) { getRGBData()[i] = getImageData()[i]; } BufferedImage image = null; image.setRGB(0, 0, getWidth(), getHeight(), getRGBData(), getWidth(), getHeight()); ImageIO.write(image, "png", new File(imageInfo + ".png")); //create an image file out of the screenshot getOutput().println("IMAGE RECIEVED SUCCESSFULLY."); } } public void connect() throws Exception { setVServer(new ServerSocket(4444)); //establish server connection // System.out.println("SERVER: Connection established."); setVClient(getVServer().accept()); //accept client connection request // System.out.println("SERVER: Accepted 
connection request."); setOutput(new PrintWriter(vClient.getOutputStream(), true)); //set up an output channel setInput(new BufferedReader(new InputStreamReader(vClient.getInputStream()))); //set up an input channel // System.out.println("SERVER: Created IO ports."); } public void close() throws Exception { getOutput().close(); getInput().close(); getVClient().close(); getVServer().close(); } }
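One likely failure point in the server code above: in the FINISHED branch, image is declared as null and setRGB() is then invoked on it, which would throw a NullPointerException before the PNG is ever written; the setRGB() call also passes the width and height where the offset and scansize arguments are expected. Below is a minimal sketch of that step with the image constructed first; the helper name is hypothetical, and it assumes the width, height and RGB array have already been populated as in recieve():

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ImageWriteSketch {
        // Hypothetical helper mirroring the server's FINISHED branch.
        static void writePng(int[] rgbData, int width, int height, String baseName) throws IOException {
            // Construct the image before writing pixels into it; calling setRGB() on a
            // null reference (as the original branch does) throws a NullPointerException.
            BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            // Offset 0 and scansize == width, so rgbData is read row by row.
            image.setRGB(0, 0, width, height, rgbData, 0, width);
            ImageIO.write(image, "png", new File(baseName + ".png"));
        }
    }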

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming). Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise. So this leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread may update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach. The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. 
(I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective). If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value. The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system). In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks). The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to define the terms used to express those requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
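To make the optimistic pattern above concrete, here is a minimal sketch in plain Java. It is written against the generic java.util.concurrent.ConcurrentMap interface rather than the Coherence NamedCache API, and the class and method names are illustrative assumptions; the point is only the read / compare-and-replace / retry shape that turns a stale read into a retried operation instead of a wrong result.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class OptimisticUpdateSketch {
        // Illustrative cache; a Coherence NamedCache would play this role in practice.
        private final ConcurrentMap<String, Integer> cache = new ConcurrentHashMap<>();

        public void increment(String key) {
            while (true) {
                Integer old = cache.get(key);              // "fuzzy" read: may already be stale
                if (old == null) {
                    if (cache.putIfAbsent(key, 1) == null) {
                        return;                            // no prior value; we created it
                    }
                } else if (cache.replace(key, old, old + 1)) {
                    return;                                // our assumption about the old value held
                }
                // The read turned out to be stale (replace/putIfAbsent failed), so retry.
            }
        }
    }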

    Read the article

  • Collaborative Whiteboard using WebSocket in GlassFish 4 - Text/JSON and Binary/ArrayBuffer Data Transfer (TOTD #189)

    - by arungupta
    This blog has published a few blogs on using JSR 356 Reference Implementation (Tyrus) as it's integrated in GlassFish 4 promoted builds. TOTD #183: Getting Started with WebSocket in GlassFish TOTD #184: Logging WebSocket Frames using Chrome Developer Tools, Net-internals and Wireshark TOTD #185: Processing Text and Binary (Blob, ArrayBuffer, ArrayBufferView) Payload in WebSocket TOTD #186: Custom Text and Binary Payloads using WebSocket One of the typical use cases for WebSocket is online collaborative games. This Tip Of The Day (TOTD) explains a sample that can be used to build such games easily. The application is a collaborative whiteboard where different shapes can be drawn in multiple colors. The shapes drawn on one browser are automatically drawn on all other peer browsers that are connected to the same endpoint. The shape, color, and coordinates of the image are transferred using a JSON structure. A browser may opt out of sharing the figures. Alternatively, any browser can send a snapshot of its existing whiteboard to all other browsers. Take a look at this video to understand how the application works and the underlying code. The complete sample code can be downloaded here. The code behind the application is also explained below. The web page (index.jsp) has an HTML5 Canvas as shown: <canvas id="myCanvas" width="150" height="150" style="border:1px solid #000000;"></canvas> And some radio buttons to choose the color and shape. By default, the shape, color, and coordinates of any figure drawn on the canvas are put in a JSON structure and sent as a message to the WebSocket endpoint. The JSON structure looks like: { "shape": "square", "color": "#FF0000", "coords": { "x": 31.59999942779541, "y": 49.91999053955078 }} The endpoint definition looks like: @WebSocketEndpoint(value = "websocket",encoders = {FigureDecoderEncoder.class},decoders = {FigureDecoderEncoder.class})public class Whiteboard { As you can see, the endpoint has a decoder and an encoder registered that convert JSON to a Figure (a POJO class) and vice versa, respectively. The decode method looks like: public Figure decode(String string) throws DecodeException { try { JSONObject jsonObject = new JSONObject(string); return new Figure(jsonObject); } catch (JSONException ex) { throw new DecodeException("Error parsing JSON", ex.getMessage(), ex.fillInStackTrace()); }} And the encode method looks like: public String encode(Figure figure) throws EncodeException { return figure.getJson().toString();} FigureDecoderEncoder implements both decoder and encoder functionality, but that's purely for convenience; the recommended design pattern is to keep them in separate classes. In certain cases, you may even need only one of them. On the client side, the Canvas is initialized as: var canvas = document.getElementById("myCanvas");var context = canvas.getContext("2d");canvas.addEventListener("click", defineImage, false); The defineImage method constructs the JSON structure as shown above and sends it to the endpoint using websocket.send(). An instant snapshot of the canvas is sent using binary transfer with WebSocket. The WebSocket is initialized as: var wsUri = "ws://localhost:8080/whiteboard/websocket";var websocket = new WebSocket(wsUri);websocket.binaryType = "arraybuffer"; The important part is to set the binaryType property of WebSocket to arraybuffer. This ensures that any binary transfers using WebSocket are done using ArrayBuffer, as the default type seems to be Blob. 
The actual binary data transfer is done using the following: var image = context.getImageData(0, 0, canvas.width, canvas.height);var buffer = new ArrayBuffer(image.data.length);var bytes = new Uint8Array(buffer);for (var i=0; i<bytes.length; i++) { bytes[i] = image.data[i];}websocket.send(bytes); This comprehensive sample shows the following features of the JSR 356 API: Annotation-driven endpoints Send/receive text and binary payload in WebSocket Encoders/decoders for custom text payload In addition, it also shows how images can be captured and drawn using HTML5 Canvas in a JSP. How could this be turned into an online game? Imagine drawing a Tic-tac-toe board on the canvas with two players playing and others watching. Then you can build access rights and controls within the application itself. Instead of sending a snapshot of the canvas on demand, a new peer joining the game could automatically receive the current state as well. Do you want to build this game? I built a similar game a few years ago. Does somebody want to rewrite the game using WebSocket APIs? :-) Many thanks to Jitu and Akshay for helping through the WebSocket internals! Here are some references for you: JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds) Subsequent blogs will discuss the following topics (not necessarily in that order) ... Error handling Interface-driven WebSocket endpoint Java client API Client and Server configuration Security Subprotocols Extensions Other topics from the API
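For reference, this is roughly what a broadcasting endpoint looks like when written against the final JSR 356 API (javax.websocket), whose annotation names differ from the early-draft @WebSocketEndpoint shown above. The endpoint path and the use of getOpenSessions() are illustrative assumptions, not code taken from the sample:

    import java.io.IOException;
    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    // Minimal sketch: rebroadcast each text frame (the JSON figure) to every open peer.
    @ServerEndpoint("/websocket")
    public class WhiteboardSketch {

        @OnMessage
        public void onFigure(String json, Session session) throws IOException {
            for (Session peer : session.getOpenSessions()) {
                if (peer.isOpen()) {
                    peer.getBasicRemote().sendText(json);
                }
            }
        }
    }

Each figure received from one browser is simply pushed back out to every open session, which is the same fan-out behaviour the whiteboard sample relies on.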

    Read the article

  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet.  I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources.  There are more options than you might think! There have been several enhancements in this area in WLS 10.3.6.  There are a couple of more enhancements planned for release WLS 12.1.2 that I will include here for completeness.  This isn’t intended as a teaser.  If you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.   The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics.  It also seems that the knowledge of how to apply some of these features isn’t written down.  The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.  Introduction to WebLogic Data Source Security Options By default, you define a single database user and password for a data source.  You can store it in the data source descriptor or make use of the Oracle wallet.  This is a very simple and efficient approach to security.  All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out.  That is, it’s a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects like affinity).  Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS.   No additional information is needed when you get a connection because it’s all available from the data source descriptor (or wallet). java.sql.Connection conn =  mydatasource.getConnection(); Note: You can enter the password as a name-value pair in the Properties field (this not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC Driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console.  The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab. The JDBC API can also be used to programmatically specify a database user name and password as in the following.  java.sql.Connection conn = mydatasource.getConnection(“user”, “password”); According to the JDBC specification, it’s supposed to take a database user and associated password but different vendors implement this differently.  WLS, by default, treats this as an application server user and password.  The pair is authenticated to see if it’s a valid user and that user is used for WLS security permission checks.  By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification but database credentials are one-step removed from the application code.  More details and the rationale are described later. 
While the default approach is simple, it does mean that only one database user is doing all of the work. You can’t figure out who actually did the update and you can’t restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

WebLogic Data Source Security Options

This table describes the features available for WebLogic data sources for configuring database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

Feature: User authentication (default)
Description: Default getConnection(user, password) behavior – validate the input and use the user/password in the descriptor.
Can be used with: Set client identifier
Can’t be used with: Proxy Session, Identity pooling, Use database credentials

Feature: Use database credentials
Description: Instead of using the credential mapper, use the supplied user and password directly.
Can be used with: Set client identifier, Proxy session, Identity pooling
Can’t be used with: User authentication, Multi Data Source

Feature: Set Client Identifier
Description: Set a client identifier property associated with the connection (Oracle and DB2 only).
Can be used with: Everything
Can’t be used with: (none)

Feature: Proxy Session
Description: Set a light-weight proxy user associated with the connection (Oracle-only).
Can be used with: Set client identifier, Use database credentials
Can’t be used with: Identity pooling, User authentication

Feature: Identity pooling
Description: Heterogeneous pool of connections owned by specified users.
Can be used with: Set client identifier, Use database credentials
Can’t be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink

Note that all of these features are available with both XA and non-XA drivers. Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases – oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.
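Seen from application code, the two getConnection() flavours described above look like the sketch below. The JNDI name and the credentials are placeholder assumptions, not values from the article; as explained earlier, WLS by default validates the two-argument form as an application-server user and then maps it to a database user via the credential mapper.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceLookupSketch {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // Hypothetical JNDI name; use whatever name the data source is deployed under.
            DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource");

            // Default behaviour: the pool's configured database user owns the connection.
            try (Connection conn = ds.getConnection()) {
                // ... run SQL as the data source's configured user ...
            }

            // Per-user variant: by default WLS treats these as application-server credentials
            // and maps them to a database user through the credential mapper.
            try (Connection conn = ds.getConnection("someAppUser", "someAppPassword")) {
                // ... run SQL on behalf of that user ...
            }
        }
    }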

    Read the article

  • Understanding implementation of glu.PickMatrix()

    - by stoney78us
    I am working on an OpenGL project which requires object selection feature. I use OpenTK framework to do this; however OpenTK doesn't support glu.PickMatrix() method to define the picking region. I ended up googling its implementation and here is what i got: void GluPickMatrix(double x, double y, double deltax, double deltay, int[] viewport) { if (deltax <= 0 || deltay <= 0) { return; } GL.Translate((viewport[2] - 2 * (x - viewport[0])) / deltax, (viewport[3] - 2 * (y - viewport[1])) / deltay, 0); GL.Scale(viewport[2] / deltax, viewport[3] / deltay, 1.0); } I totally fail to understand this piece of code. Moreover, this doesn't work with my following code sample: //selectbuffer private int[] _selectBuffer = new int[512]; private void Init() { float[] triangleVertices = new float[] { 0.0f, 1.0f, 0.0f, -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f }; float[] _triangleColors = new float[] { 0.0f, 1.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f }; GL.GenBuffers(2, _vBO); GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleVertices.Length), _triangleVertices, BufferUsageHint.StaticDraw); GL.VertexPointer(3, VertexPointerType.Float, 0, 0); GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[1]); GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(sizeof(float) * _triangleColors.Length), _triangleColors, BufferUsageHint.StaticDraw); GL.ColorPointer(3, ColorPointerType.Float, 0, 0); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.ColorArray); //Selectbuffer set up GL.SelectBuffer(512, _selectBuffer); } private void glControlWindow_Paint(object sender, PaintEventArgs e) { GL.Clear(ClearBufferMask.ColorBufferBit); GL.Clear(ClearBufferMask.DepthBufferBit); float[] eyes = { 0.0f, 0.0f, -10.0f }; float[] target = { 0.0f, 0.0f, 0.0f }; Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); //45 degree = 0.785398163 rads Matrix4 view = Matrix4.LookAt(eyes[0], eyes[1], eyes[2], target[0], target[1], target[2], 0, 1, 0); Matrix4 model = Matrix4.Identity; Matrix4 MV = view * model; //First Clear Buffers GL.Clear(ClearBufferMask.ColorBufferBit); GL.Clear(ClearBufferMask.DepthBufferBit); GL.MatrixMode(MatrixMode.Projection); GL.LoadIdentity(); GL.LoadMatrix(ref projection); GL.MatrixMode(MatrixMode.Modelview); GL.LoadIdentity(); GL.LoadMatrix(ref MV); GL.Viewport(0, 0, glControlWindow.Width, glControlWindow.Height); GL.Enable(EnableCap.DepthTest); //Enable correct Z Drawings GL.DepthFunc(DepthFunction.Less); //Enable correct Z Drawings GL.MatrixMode(MatrixMode.Modelview); GL.PushMatrix(); GL.Translate(3.0f, 0.0f, 0.0f); DrawTriangle(); GL.PopMatrix(); GL.PushMatrix(); GL.Translate(-3.0f, 0.0f, 0.0f); DrawTriangle(); GL.PopMatrix(); //Finally... GraphicsContext.CurrentContext.VSync = true; //Caps frame rate as to not over run GPU glControlWindow.SwapBuffers(); //Takes from the 'GL' and puts into control } private void DrawTriangle() { GL.BindBuffer(BufferTarget.ArrayBuffer, _vBO[0]); GL.VertexPointer(3, VertexPointerType.Float, 0, 0); GL.EnableClientState(ArrayCap.VertexArray); GL.DrawArrays(BeginMode.Triangles, 0, 3); GL.DisableClientState(ArrayCap.VertexArray); } //mouse click event implementation private void glControlWindow_MouseClick(object sender, System.Windows.Forms.MouseEventArgs e) { //Enter Select mode. Pretend drawing. 
GL.RenderMode(RenderingMode.Select); int[] viewport = new int[4]; GL.GetInteger(GetPName.Viewport, viewport); GL.PushMatrix(); GL.MatrixMode(MatrixMode.Projection); GL.LoadIdentity(); GluPickMatrix(e.X, e.Y, 5, 5, viewport); Matrix4 projection = Matrix4.CreatePerspectiveFieldOfView(0.785398163f, 4.0f / 3.0f, 0.1f, 100f); // this projection matrix is the same as one in glControlWindow_Paint method. GL.LoadMatrix(ref projection); GL.MatrixMode(MatrixMode.Modelview); int i = 0; int hits; GL.PushMatrix(); GL.Translate(3.0f, 0.0f, 0.0f); GL.PushName(i); DrawTriangle(); GL.PopName(); GL.PopMatrix(); i++; GL.PushMatrix(); GL.Translate(-3.0f, 0.0f, 0.0f); GL.PushName(i); DrawTriangle(); GL.PopName(); GL.PopMatrix(); hits = GL.RenderMode(RenderingMode.Render); .....hits processing code goes here... GL.PopMatrix(); glControlWindow.Invalidate(); } I expect to get only one hit every time I click inside a triangle, but I always get 2 no matter where I click. I suspect there is something wrong with the implementation of GluPickMatrix, but I haven't figured it out yet.
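For context, the GluPickMatrix helper is meant to prepend a transform that maps a small window region, centered on the mouse at (x, y) with size Δx × Δy, onto the full clip volume, so that only primitives falling inside that region generate selection hits. Writing the viewport as v = (v0, v1, v2, v3) = (x, y, width, height), the translate-then-scale pair shown above acts on normalized device coordinates as:

    \[
    x' = \frac{v_2}{\Delta x}\,x_{\mathrm{ndc}} + \frac{v_2 - 2(x - v_0)}{\Delta x},
    \qquad
    y' = \frac{v_3}{\Delta y}\,y_{\mathrm{ndc}} + \frac{v_3 - 2(y - v_1)}{\Delta y}
    \]

which sends the center of the pick region to the NDC origin and stretches that region to cover the whole [-1, 1] range. Two things worth checking in the click handler, offered as observations rather than a confirmed diagnosis: GL.LoadMatrix(ref projection) replaces the current projection matrix, so it discards the pick matrix that GluPickMatrix just built (GL.MultMatrix would compose them instead), and the mouse e.Y is usually measured from the top of the control while the OpenGL viewport's y origin is at the bottom, so the y value typically needs to be flipped (viewport[3] - e.Y) before being passed in.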

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There’s no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports. To begin, create a new BizTalk Project names OrchestrationPortDemo: When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration: Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes: Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we’ll be passing a standard Xml document in and out of the orchestration. Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says “Drop a shape from the toolbox here”). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message – this indicates we’ll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified. We will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to false, when you try to build the application you’ll receive the error “You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port.” Now you’ll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information: Name: ReceivePort Select the port type to be used for this port: Create a new Port Type Port Type Name: ReceivePortType Port direction of communication: I’ll always be receiving <…> Port binding: Specify later By choosing “Specify later” you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application. Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows: Name: SendPort Select the port type to be used for this port: Create a new Port Type Port Type Name: SendPortType Port direction of communication: I’ll always be sending <…> Port binding: Specify later Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this: Now you have a couple final steps before building and deploying the application. 
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!

    Read the article

  • SPARC T4 ??????: SPARC T4 ??????????!!

    - by user13138700
    (This post is in Japanese; most of its text was lost to character-encoding corruption, so only the recoverable technical details are kept here.) The post introduces the SPARC T4 processor, announced in September 2011 with availability from October 2011, and its new S3 core, which replaces the S2 core used in the earlier SPARC T1/T2/T3 generations. It also promotes the SPARC & Solaris sessions at Oracle OpenWorld Tokyo 2012 (April 4-6, session code 7264); event details at http://www.oracle.com/openworld/jp-ja/index.html and exhibit information at http://www.oracle.com/openworld/jp-ja/exhibit/index.html. Recoverable SPARC T4 processor highlights: 8x SPARC S3 cores (64 threads per processor); 4MB shared L3 cache (8 banks, 16-way); 8x9 crossbar; 4x DDR3 memory controllers @ 6.4 Gbps; 6x coherence links @ 9.6 Gbps; 2x8 PCIe 2.0 (5 GT/s); 2x 10Gb XAUI Ethernet. S3 core blocks: ALU: Arithmetic Logic Unit; BRU: Branch Logic Unit; FGU: Floating-point Graphics Unit; IRF: Integer Register File; FRF: Floating-point Register File; WRF: Working Register File; MMU: Memory Management Unit; LSU: Load Store Unit; Crypto (SPU): Streaming Processing Unit; TRU: Trap Logic Unit. S3 pipeline details: out-of-order execution; 64-entry ITLB and 128-entry DTLB; 64KB 4-way L1 cache; 128KB 8-way L2 cache. The post closes by comparing the T4 pipeline (out-of-order Pick and Commit stages) with the in-order T3 pipeline, and by positioning SPARC T4 servers for web, application and database tiers.

    Read the article

  • How to Run Low-Cost Minecraft on a Raspberry Pi for Block Building on the Cheap

    - by Jason Fitzpatrick
    We’ve shown you how to run your own blocktastic personal Minecraft server on a Windows/OSX box, but what if you crave something lighter weight, more energy efficient, and always ready for your friends? Read on as we turn a tiny Raspberry Pi machine into a low-cost Minecraft server you can leave on 24/7 for around a penny a day. Why Do I Want to Do This? There’s two aspects to this tutorial, running your own Minecraft server and specifically running that Minecraft server on a Raspberry Pi. Why would you want to run your own Minecraft server? It’s a really great way to extend and build upon the Minecraft play experience. You can leave the server running when you’re not playing so friends and family can join and continue building your world. You can mess around with game variables and introduce mods in a way that isn’t possible when you’re playing the stand-alone game. It also gives you the kind of control over your multiplayer experience that using public servers doesn’t, without incurring the cost of hosting a private server on a remote host. While running a Minecraft server on its own is appealing enough to a dedicated Minecraft fan, running it on the Raspberry Pi is even more appealing. The tiny little Pi uses so little resources that you can leave your Minecraft server running 24/7 for a couple bucks a year. Aside from the initial cost outlay of the Pi, an SD card, and a little bit of time setting it up, you’ll have an always-on Minecraft server at a monthly cost of around one gumball. What Do I Need? For this tutorial you’ll need a mix of hardware and software tools; aside from the actual Raspberry Pi and SD card, everything is free. 1 Raspberry Pi (preferably a 512MB model) 1 4GB+ SD card This tutorial assumes that you have already familiarized yourself with the Raspberry Pi and have installed a copy of the Debian-derivative Raspbian on the device. If you have not got your Pi up and running yet, don’t worry! Check out our guide, The HTG Guide to Getting Started with Raspberry Pi, to get up to speed. Optimizing Raspbian for the Minecraft Server Unlike other builds we’ve shared where you can layer multiple projects over one another (e.g. the Pi is more than powerful enough to serve as a weather/email indicator and a Google Cloud Print server at the same time) running a Minecraft server is a pretty intense operation for the little Pi and we’d strongly recommend dedicating the entire Pi to the process. Minecraft seems like a simple game, with all its blocky-ness and what not, but it’s actually a pretty complex game beneath the simple skin and required a lot of processing power. As such, we’re going to tweak the configuration file and other settings to optimize Rasbian for the job. The first thing you’ll need to do is dig into the Raspi-Config application to make a few minor changes. If you’re installing Raspbian fresh, wait for the last step (which is the Raspi-Config), if you already installed it, head to the terminal and type in “sudo raspi-config” to launch it again. One of the first and most important things we need to attend to is cranking up the overclock setting. We need all the power we can get to make our Minecraft experience enjoyable. In Raspi-Config, select option number 7 “Overclock”. Be prepared for some stern warnings about overclocking, but rest easy knowing that overclocking is directly supported by the Raspberry Pi foundation and has been included in the configuration options since late 2012. Once you’re in the actual selection screen, select “Turbo 1000MhHz”. 
Again, you’ll be warned that the degree of overclocking you’ve selected carries risks (specifically, potential corruption of the SD card, but no risk of actual hardware damage). Click OK and wait for the device to reset. Next, make sure you’re set to boot to the command prompt, not the desktop. Select number 3 “Enable Boot to Desktop/Scratch” and make sure “Console Text console” is selected. Back at the Raspi-Config menu, select number 8 “Advanced Options”. There are two critical changes we need to make in here and one optional change. First, the critical changes. Select A3 “Memory Split”: Change the amount of memory available to the GPU to 16MB (down from the default 64MB). Our Minecraft server is going to run in a GUI-less environment; there’s no reason to allocate any more than the bare minimum to the GPU. After selecting the GPU memory, you’ll be returned to the main menu. Select “Advanced Options” again and then select A4 “SSH”. Within the sub-menu, enable SSH. There is very little reason to keep this Pi connected to a monitor and keyboard; by enabling SSH we can remotely access the machine from anywhere on the network. Finally (and optionally) return again to the “Advanced Options” menu and select A2 “Hostname”. Here you can change your hostname from “raspberrypi” to a more fitting Minecraft name. We opted for the highly creative hostname “minecraft”, but feel free to spice it up a bit with whatever you feel like: creepertown, minecraft4life, or miner-box are all great minecraft server names. That’s it for the Raspbian configuration. Tab down to the bottom of the main screen and select “Finish” to reboot. After rebooting you can now SSH into your terminal, or continue working from the keyboard hooked up to your Pi (we strongly recommend switching over to SSH as it allows you to easily cut and paste the commands). If you’ve never used SSH before, check out how to use PuTTY with your Pi here. Installing Java on the Pi The Minecraft server runs on Java, so the first thing we need to do on our freshly configured Pi is install it. Log into your Pi via SSH and then, at the command prompt, enter the following command to make a directory for the installation: sudo mkdir /java/ Now we need to download the newest version of Java. At the time of this publication the newest release is the OCT 2013 update and the link/filename we use will reflect that. Please check for a more current version of the Linux ARMv6/7 Java release on the Java download page and update the link/filename accordingly when following our instructions. At the command prompt, enter the following command: sudo wget --no-check-certificate http://www.java.net/download/jdk8/archive/b111/binaries/jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz Once the download has finished successfully, enter the following command: sudo tar zxvf jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz -C /opt/ Fun fact: the /opt/ directory name scheme is a remnant of early Unix design wherein the /opt/ directory was for “optional” software installed after the main operating system; it was the /Program Files/ of the Unix world. 
After the file has finished extracting, enter: sudo /opt/jdk1.8.0/bin/java -version This command will return the version number of your new Java installation like so: java version "1.8.0-ea" Java(TM) SE Runtime Environment (build 1.8.0-ea-b111) Java HotSpot(TM) Client VM (build 25.0-b53, mixed mode) If you don’t see the above printout (or a variation thereof if you’re using a newer version of Java), try to extract the archive again. If you do see the readout, enter the following command to tidy up after yourself: sudo rm jdk-8-ea-b111-linux-arm-vfp-hflt-09_oct_2013.tar.gz At this point Java is installed and we’re ready to move onto installing our Minecraft server! Installing and Configuring the Minecraft Server Now that we have a foundation for our Minecraft server, it’s time to install the part that matter. We’ll be using SpigotMC a lightweight and stable Minecraft server build that works wonderfully on the Pi. First, grab a copy of the the code with the following command: sudo wget http://ci.md-5.net/job/Spigot/lastSuccessfulBuild/artifact/Spigot-Server/target/spigot.jar This link should remain stable over time, as it points directly to the most current stable release of Spigot, but if you have any issues you can always reference the SpigotMC download page here. After the download finishes successfully, enter the following command: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Note: if you’re running the command on a 256MB Pi change the 256 and 496 in the above command to 128 and 256, respectively. Your server will launch and a flurry of on-screen activity will follow. Be prepared to wait around 3-6 minutes or so for the process of setting up the server and generating the map to finish. Future startups will take much less time, around 20-30 seconds. Note: If at any point during the configuration or play process things get really weird (e.g. your new Minecraft server freaks out and starts spawning you in the Nether and killing you instantly), use the “stop” command at the command prompt to gracefully shutdown the server and let you restart and troubleshoot it. After the process has finished, head over to the computer you normally play Minecraft on, fire it up, and click on Multiplayer. You should see your server: If your world doesn’t popup immediately during the network scan, hit the Add button and manually enter the address of your Pi. Once you connect to the server, you’ll see the status change in the server status window: According to the server, we’re in game. According to the actual Minecraft app, we’re also in game but it’s the middle of the night in survival mode: Boo! Spawning in the dead of night, weaponless and without shelter is no way to start things. No worries though, we need to do some more configuration; no time to sit around and get shot at by skeletons. Besides, if you try and play it without some configuration tweaks first, you’ll likely find it quite unstable. We’re just here to confirm the server is up, running, and accepting incoming connections. Once we’ve confirmed the server is running and connectable (albeit not very playable yet), it’s time to shut down the server. Via the server console, enter the command “stop” to shut everything down. 
When you’re returned to the command prompt, enter the following command: sudo nano server.properties When the configuration file opens up, make the following changes (or just cut and paste our config file minus the first two lines with the name and date stamp): #Minecraft server properties #Thu Oct 17 22:53:51 UTC 2013 generator-settings= #Default is true, toggle to false allow-nether=false level-name=world enable-query=false allow-flight=false server-port=25565 level-type=DEFAULT enable-rcon=false force-gamemode=false level-seed= server-ip= max-build-height=256 spawn-npcs=true white-list=false spawn-animals=true texture-pack= snooper-enabled=true hardcore=false online-mode=true pvp=true difficulty=1 player-idle-timeout=0 gamemode=0 #Default 20; you only need to lower this if you're running #a public server and worried about loads. max-players=20 spawn-monsters=true #Default is 10, 3-5 ideal for Pi view-distance=5 generate-structures=true spawn-protection=16 motd=A Minecraft Server In the server status window, seen through your SSH connection to the pi, enter the following command to give yourself operator status on your Minecraft server (so that you can use more powerful commands in game, without always returning to the server status window). op [your minecraft nickname] At this point things are looking better but we still have a little tweaking to do before the server is really enjoyable. To that end, let’s install some plugins. The first plugin, and the one you should install above all others, is NoSpawnChunks. To install the plugin, first visit the NoSpawnChunks webpage and grab the download link for the most current version. As of this writing the current release is v0.3. Back at the command prompt (the command prompt of your Pi, not the server console–if your server is still active shut it down) enter the following commands: cd /home/pi/plugins sudo wget http://dev.bukkit.org/media/files/586/974/NoSpawnChunks.jar Next, visit the ClearLag plugin page, and grab the latest link (as of this tutorial, it’s v2.6.0). Enter the following at the command prompt: sudo wget http://dev.bukkit.org/media/files/743/213/Clearlag.jar Because the files aren’t compressed in a .ZIP or similar container, that’s all there is to it: the plugins are parked in the plugin directory. (Remember this for future plugin downloads, the file needs to be whateverplugin.jar, so if it’s compressed you need to uncompress it in the plugin directory.) Resart the server: sudo /opt/jdk1.8.0/bin/java -Xms256M -Xmx496M -jar /home/pi/spigot.jar nogui Be prepared for a slightly longer startup time (closer to the 3-6 minutes and much longer than the 30 seconds you just experienced) as the plugins affect the world map and need a minute to massage everything. After the spawn process finishes, type the following at the server console: plugins This lists all the plugins currently active on the server. You should see something like this: If the plugins aren’t loaded, you may need to stop and restart the server. After confirming your plugins are loaded, go ahead and join the game. You should notice significantly snappier play. In addition, you’ll get occasional messages from the plugins indicating they are active, as seen below: At this point Java is installed, the server is installed, and we’ve tweaked our settings for for the Pi.  It’s time to start building with friends!     

    Read the article

  • CodePlex Daily Summary for Tuesday, June 14, 2011

    CodePlex Daily Summary for Tuesday, June 14, 2011Popular ReleasesSizeOnDisk: 1.0.9.0: Can handle Right-To-Left languages (issue 316) About box (issue 310) New language: Deutsch (thanks to kyoka) Fix: file and folder context menuTerrariViewer: TerrariViewer v2.6: This is a temporary release so that people can edit their characters for the newest version of Terraria. It does not include the newest items or the ability to add social slot items. Those will come in TerrariViewer v3.0, which I am currently working on.DropBox Linker: DropBox Linker 1.1: Added different popup descriptions for actions (copy/append/update/remove) Added popup timeout control (with live preview) Added option to overwrite clipboard with the last link only Notification popup closes on user click Notification popup default timeout increased to 3 sec. Added codeplex link to about .NET Framework 4.0 Client Profile requiredMobile Device Detection and Redirection: 1.0.4.1: Stable Release 51 Degrees.mobi Foundation is the best way to detect and redirect mobile devices and their capabilities on ASP.NET and is being used on thousands of websites worldwide. We’re highly confident in our software and we recommend all users update to this version. Changes to Version 1.0.4.1Changed the BlackberryHandler and BlackberryVersion6Handler to have equal CONFIDENCE values to ensure they both get a chance at detecting BlackBerry version 4&5 and version 6 devices. Prior to thi...Kouak - HTTP File Share Server: Kouak Beta 3 - Clean: Some critical bug solved and dependecy problems There's 3 package : - The first, contains the cli server and the graphical server. - The second, only the cli server - The third, only the graphical client. It's a beta release, so don't hesitate to emmit issue ;pRawr: Rawr 4.1.06: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta6: ??AcDown?????????????,?????????????,????、????。?????Acfun????? ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta6 ?????(imanhua.com)????? ???? ?? ??"????","?????","?????","????"?????? "????"?????"????????"?? ??????????? ?????????????? ?????????????/???? ?? ????Windows 7???????????? ????????? ?? ????????????? ???????/??????????? ???????????? ?? ?? ?????(imanh...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.22: Added lots of optimizations related to expression optimization. Combining adjacent expression statements into a single statement means more if-, for-, and while-statements can get rid of the curly-braces. Then more if-statements can be converted to expressions, and more expressions can be combined with return- and for-statements. Moving functions to the top of their scopes, followed by var-statements, provides more opportunities for expression-combination. Added command-line option to remove ...Pulse: Pulse Beta 2: - Added new wallpapers provider http://wallbase.cc. 
Supports english search, multiple keywords* - Improved font rendering in Options window - Added "Set wallpaper as logon background" option* - Fixed crashes if there is no internet connection - Fixed: Rewalls downloads empty images sometimes - Added filters* Note 1: wallbase provider supports only english search. Rewalls provider supports only russian search but Pulse automatically translates your english keyword into russian using Google Tr...???? (Internet Go Game for Android): goapp.3.0.apk: Refresh UI and bugfixPhalanger - The PHP Language Compiler for the .NET Framework: 2.1 (June 2011) for .NET 4.0: Release of Phalanger 2.1 - the opensource PHP compiler for .NET framework 4.0. Installation package also includes basic version of Phalanger Tools for Visual Studio 2010. This allows you to easily create, build and debug Phalanger web or application inside this ultimate integrated development environment. You can even install the tools into the free Visual Studio 2010 Shell (Integrated). To improve the performance of your application using MySQL, please use Managed MySQL Extension for Phala...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.0.0.7: Version: 2.0.0.7 (Milestone 7): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Remark The sample applications are using Microsoft’s IoC container MEF. However, the WPF Application Framework (WAF) doesn’t force you to use the same IoC container in your application. You can use ...SimplePlanner: v2.0b: For 2011-2012 Sem 1 ???2011-2012 ????Visual Studio 2010 Help Downloader: 1.0.0.3: Domain name support for proxy Cleanup old packages bug Writing to EventLog with UAC enabled bug Small fixes & Refactoring32feet.NET: 3.2: 32feet.NET v3.2 - Personal Area Networking for .NET Build 3.2.0609.0 9th June 2011 This library provides a .NET networking API for devices and desktop computers running the Microsoft or Broadcom/Widcomm Bluetooth stacks, Microsoft Windows supported IrDA devices and associated Object Exchange (OBEX) services for both these mediums. Online documentation is integrated into your Visual Studio help. The object model has been designed to promote consistency between Bluetooth, IrDA and traditional ...Media Companion: MC 3.406b weekly: With this version change a movie rebuild is required when first run -else MC will lock up on exit. Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide Important! If you find MC not displaying movie data properly, please try a 'movie rebuild' to reload the data from the nfo's into MC's cache. Fixes Movies Readded movie preference to rename invalid or scene nfo's to info ext...Windows Azure VM Assistant: AzureVMAssist V1.0.0.5: AzureVMAssist V1.0.0.5 (Debug) - Test Release VersionNetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9: Changes: - fix examples (include issue 16026) - add new examples - 32Bit/64Bit Walkthrough is now available in technical Documentation. Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. 
- Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...Reusable Library: V1.1.3: A collection of reusable abstractions for enterprise application developerVidCoder: 0.9.2: Updated to HandBrake 4024svn. This fixes problems with mpeg2 sources: corrupted previews, incorrect progress indicators and encodes that incorrectly report as failed. Fixed a problem that prevented target sizes above 2048 MB.New ProjectsASP.NET MVC / Windows Workflow Foundation Integration: This project will product libraries, activities and examples that demonstrate how you can use Windows Workflow Foundation with ASP.NET MVCBing Maps Spatial Data Service Loader: Bing Maps Spatial Data Service Loader is a simple tool that helps you to load your set of data (CSV) into the Spatial Data Service included in Bing Maps licensing.Blend filter and Pencil sketch effect in C#: This project contains a filter for blending one image over an other using effects like Color Dodge, Lighten, Darken, Difference, etc. And an example of how to use that to do a Pencil Sketch effect in C# using AForge frameworkBugStoryV2: student project !CAIXA LOTERIAS: Projeto destinado a integrar resultado dos sorteios da Caixa em VB.NET, C#, ASP.NETCSReports: Report authoring tool. It supports dinamyc grouping, scripting formulas in vbscript, images from db and sub-sections. The report definition is saved to file in xml format and can be exported to pdf, doc and xls format.EasyLibrary: EasyLibraryFreetime Development Platform: Freetime Development Platform is a set of reusable design used to develop Web and Desktop based applications. IISProcessScheduler: Schedule processes from within IIS.LitleChef: LitleChefLLBLGen ANGTE (ASP.Net GUI Templates Extended): LLBLGen ANGTE (ASP.Net GUI Templates Extended) makes possible to use LLBLGen template system to generate a reasonable ASP.Net GUI that you can use as the base of your project or just as a prototyping tool. It generates APS.Net code with C# as code behind using .Net 2.0Memory: Memory Game for AndroidNCU - Book store: Educational project for a software engineering bootcamp at Northern Caribbean UniversityOrderToList Extension for IEnumerable: An extension method for IEnumerable<T> that will sort the IEnumerable based on a list of keys. Suppose you have a list of IDs {10, 5, 12} and want to use LINQ to retrieve all People from the DB with those IDs in that order, you want this extension. Sort is binary so it's fastp301: Old project.Pang: A new pong rip off; including some power ups. Will be written in C# using XNA.Ring2Park Online: Ring2Park is a fully working ASP.NET reference application that simulates the online purchase and management of Vehicle Parking sessions. It is developed using the latest ASP.NET MVC 3 patterns and capabilities. It includes a complete set of Application Lifecycle Management (ALM) assets, including requirements, test and deployment.rITIko: Questo progetto è stato creato come esperimento dalla classe 4G dell'ITIS B. Pascal di Cesena. Serve (per ora) solo per testate il funzionamento di CodePlex e del sistema SVN. Forse un giorno conterrà qualcosa di buono, rimanete aggiornati.Rug.Cmd - Command Line Parser and Console Application Framework: Rugland Console Framework is a collection of classes to enable the fast and consistent development of .NET console applications. Parse command line arguments and write your applications usage. 
If you are developing command line or build process tools in .NET then this maybe the lightweight framework for you. It is developed in C#.SamaToursUSA: sama tours usaSamcrypt: .Security System: With this program you will be able to secure your screen from prying eyes. In fact as soon as the block will not be unlocked only with your password. Perfect for when you go to have a coffee in the break. :)Sharp Temperature Conversor: Sharp Temperature Conversor has the objective of providing a easy, fast and a direct way to convert temperatures from one type to another. The project is small, uses C#, Visual Studio 2010. The project is small. However, the growth is possible. Silverlight Star Rating Control: A simple star rating control for editing or displaying ratings in Silverlight. Supports half-filled stars. Includes the star shape as a separate control.Silverware: Silverware is a set of libraries to enhance and make application development in Silverlight easier.Simple & lightweight fluent interface for Design by Contract: This project try to focus on make a good quality code for your project. It try to make sure all instance and variables must be satisfy some conditions in Design by Contract.SimpleAspect: A simple Aspect library for PostSharpTespih: Basit tespih uygulamasi. Kendi evrad u ezkar listenizi olusturup seçtiginiz bir evrat üzerinden tespih çekebilirsiniz.Texticize: Texticize is a fast, extensible, and intuitive object-to-text template engine for .NET. You can use Texticize to quickly create dynamic e-mails, letters, source code, or any other text documents using predefined text templates substituting placeholders with properties of CLR objects in realtime.The Dragon riders: a mmorgh game with free play in creaction. FantasyWPF ObservableCollection: WPF ObservableCollection Idle use.X9.37 Image Cash Letter file viewer: x9.37 Image Cash Letter viewer. Developed using VB.Net. Currently allows viewing and searching of X9 files. The code also has image manipulation code, specifically multi page TIF file handling that you might find useful,

    Read the article

  • Acer aspire 5735z wireless not working after upgrade to 11.10

    - by Jon
    I cant get my wifi card to work at all after upgrading to 11.10 Oneiric. I'm not sure where to start to fix this. Ive tried using the additional drivers tool but this shows that no additional drivers are needed. Before my upgrade I had a drivers working for the Rt2860 chipset. Any help on this would be much appreciated.... thanks Jon jon@ubuntu:~$ ifconfig eth0 Link encap:Ethernet HWaddr 00:1d:72:ec:76:d5 inet addr:192.168.1.134 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::21d:72ff:feec:76d5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:7846 errors:0 dropped:0 overruns:0 frame:0 TX packets:7213 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:8046624 (8.0 MB) TX bytes:1329442 (1.3 MB) Interrupt:16 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:91 errors:0 dropped:0 overruns:0 frame:0 TX packets:91 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:34497 (34.4 KB) TX bytes:34497 (34.4 KB) Ive included by dmesg output below [ 0.428818] NET: Registered protocol family 2 [ 0.429003] IP route cache hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.430562] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) [ 0.436614] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) [ 0.437409] TCP: Hash tables configured (established 524288 bind 65536) [ 0.437412] TCP reno registered [ 0.437431] UDP hash table entries: 2048 (order: 4, 65536 bytes) [ 0.437482] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes) [ 0.437678] NET: Registered protocol family 1 [ 0.437705] pci 0000:00:02.0: Boot video device [ 0.437892] PCI: CLS 64 bytes, default 64 [ 0.437916] Simple Boot Flag at 0x57 set to 0x1 [ 0.438294] audit: initializing netlink socket (disabled) [ 0.438309] type=2000 audit(1319243447.432:1): initialized [ 0.440763] Freeing initrd memory: 13416k freed [ 0.468362] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 0.488192] VFS: Disk quotas dquot_6.5.2 [ 0.488254] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) [ 0.488888] fuse init (API version 7.16) [ 0.488985] msgmni has been set to 5890 [ 0.489381] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 0.489413] io scheduler noop registered [ 0.489415] io scheduler deadline registered [ 0.489460] io scheduler cfq registered (default) [ 0.489583] pcieport 0000:00:1c.0: setting latency timer to 64 [ 0.489633] pcieport 0000:00:1c.0: irq 40 for MSI/MSI-X [ 0.489699] pcieport 0000:00:1c.1: setting latency timer to 64 [ 0.489741] pcieport 0000:00:1c.1: irq 41 for MSI/MSI-X [ 0.489800] pcieport 0000:00:1c.2: setting latency timer to 64 [ 0.489841] pcieport 0000:00:1c.2: irq 42 for MSI/MSI-X [ 0.489904] pcieport 0000:00:1c.3: setting latency timer to 64 [ 0.489944] pcieport 0000:00:1c.3: irq 43 for MSI/MSI-X [ 0.490006] pcieport 0000:00:1c.4: setting latency timer to 64 [ 0.490047] pcieport 0000:00:1c.4: irq 44 for MSI/MSI-X [ 0.490126] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 0.490149] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 0.490196] intel_idle: MWAIT substates: 0x1110 [ 0.490198] intel_idle: does not run on family 6 model 15 [ 0.491240] ACPI: Deprecated procfs I/F for AC is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 0.493473] ACPI: AC Adapter [ADP1] (on-line) [ 0.493590] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0 
[ 0.496771] ACPI: Lid Switch [LID0] [ 0.496818] input: Sleep Button as /devices/LNXSYSTM:00/device:00/PNP0C0E:00/input/input1 [ 0.496823] ACPI: Sleep Button [SLPB] [ 0.496865] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 [ 0.496869] ACPI: Power Button [PWRF] [ 0.496900] ACPI: acpi_idle registered with cpuidle [ 0.498719] Monitor-Mwait will be used to enter C-1 state [ 0.498753] Monitor-Mwait will be used to enter C-2 state [ 0.498761] Marking TSC unstable due to TSC halts in idle [ 0.517627] thermal LNXTHERM:00: registered as thermal_zone0 [ 0.517630] ACPI: Thermal Zone [TZS0] (67 C) [ 0.524796] thermal LNXTHERM:01: registered as thermal_zone1 [ 0.524799] ACPI: Thermal Zone [TZS1] (67 C) [ 0.524823] ACPI: Deprecated procfs I/F for battery is loaded, please retry with CONFIG_ACPI_PROCFS_POWER cleared [ 0.524852] ERST: Table is not found! [ 0.524948] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 0.680991] ACPI: Battery Slot [BAT0] (battery present) [ 0.688567] Linux agpgart interface v0.103 [ 0.688672] agpgart-intel 0000:00:00.0: Intel GM45 Chipset [ 0.688865] agpgart-intel 0000:00:00.0: detected gtt size: 2097152K total, 262144K mappable [ 0.689786] agpgart-intel 0000:00:00.0: detected 65536K stolen memory [ 0.689912] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0xd0000000 [ 0.691006] brd: module loaded [ 0.691510] loop: module loaded [ 0.691967] Fixed MDIO Bus: probed [ 0.691990] PPP generic driver version 2.4.2 [ 0.692065] tun: Universal TUN/TAP device driver, 1.6 [ 0.692067] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 0.692146] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 0.692181] ehci_hcd 0000:00:1a.7: PCI INT C -> GSI 20 (level, low) -> IRQ 20 [ 0.692206] ehci_hcd 0000:00:1a.7: setting latency timer to 64 [ 0.692210] ehci_hcd 0000:00:1a.7: EHCI Host Controller [ 0.692255] ehci_hcd 0000:00:1a.7: new USB bus registered, assigned bus number 1 [ 0.692289] ehci_hcd 0000:00:1a.7: debug port 1 [ 0.696181] ehci_hcd 0000:00:1a.7: cache line size of 64 is not supported [ 0.696202] ehci_hcd 0000:00:1a.7: irq 20, io mem 0xf8904800 [ 0.712014] ehci_hcd 0000:00:1a.7: USB 2.0 started, EHCI 1.00 [ 0.712131] hub 1-0:1.0: USB hub found [ 0.712136] hub 1-0:1.0: 6 ports detected [ 0.712230] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.712243] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 0.712247] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 0.712287] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 2 [ 0.712315] ehci_hcd 0000:00:1d.7: debug port 1 [ 0.716201] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 0.716216] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xf8904c00 [ 0.732014] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 0.732130] hub 2-0:1.0: USB hub found [ 0.732135] hub 2-0:1.0: 6 ports detected [ 0.732209] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 0.732223] uhci_hcd: USB Universal Host Controller Interface driver [ 0.732254] uhci_hcd 0000:00:1a.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.732262] uhci_hcd 0000:00:1a.0: setting latency timer to 64 [ 0.732265] uhci_hcd 0000:00:1a.0: UHCI Host Controller [ 0.732298] uhci_hcd 0000:00:1a.0: new USB bus registered, assigned bus number 3 [ 0.732325] uhci_hcd 0000:00:1a.0: irq 20, io base 0x00001820 [ 0.732441] hub 3-0:1.0: USB hub found [ 0.732445] hub 3-0:1.0: 2 ports detected [ 0.732508] uhci_hcd 0000:00:1a.1: PCI INT B -> GSI 20 (level, low) -> IRQ 20 [ 0.732514] uhci_hcd 0000:00:1a.1: 
setting latency timer to 64 [ 0.732518] uhci_hcd 0000:00:1a.1: UHCI Host Controller [ 0.732553] uhci_hcd 0000:00:1a.1: new USB bus registered, assigned bus number 4 [ 0.732577] uhci_hcd 0000:00:1a.1: irq 20, io base 0x00001840 [ 0.732696] hub 4-0:1.0: USB hub found [ 0.732700] hub 4-0:1.0: 2 ports detected [ 0.732762] uhci_hcd 0000:00:1a.2: PCI INT C -> GSI 20 (level, low) -> IRQ 20 [ 0.732768] uhci_hcd 0000:00:1a.2: setting latency timer to 64 [ 0.732772] uhci_hcd 0000:00:1a.2: UHCI Host Controller [ 0.732805] uhci_hcd 0000:00:1a.2: new USB bus registered, assigned bus number 5 [ 0.732829] uhci_hcd 0000:00:1a.2: irq 20, io base 0x00001860 [ 0.732942] hub 5-0:1.0: USB hub found [ 0.732946] hub 5-0:1.0: 2 ports detected [ 0.733007] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 [ 0.733014] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 0.733017] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 0.733057] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 6 [ 0.733082] uhci_hcd 0000:00:1d.0: irq 23, io base 0x00001880 [ 0.733202] hub 6-0:1.0: USB hub found [ 0.733206] hub 6-0:1.0: 2 ports detected [ 0.733265] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.733273] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 0.733276] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 0.733313] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 7 [ 0.733351] uhci_hcd 0000:00:1d.1: irq 17, io base 0x000018a0 [ 0.733466] hub 7-0:1.0: USB hub found [ 0.733470] hub 7-0:1.0: 2 ports detected [ 0.733532] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 [ 0.733539] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 0.733542] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 0.733578] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 8 [ 0.733610] uhci_hcd 0000:00:1d.2: irq 18, io base 0x000018c0 [ 0.733730] hub 8-0:1.0: USB hub found [ 0.733736] hub 8-0:1.0: 2 ports detected [ 0.733843] i8042: PNP: PS/2 Controller [PNP0303:KBD0,PNP0f13:PS2M] at 0x60,0x64 irq 1,12 [ 0.751594] serio: i8042 KBD port at 0x60,0x64 irq 1 [ 0.751605] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 0.751732] mousedev: PS/2 mouse device common for all mice [ 0.752670] rtc_cmos 00:08: RTC can wake from S4 [ 0.752770] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 0.752796] rtc0: alarms up to one month, y3k, 242 bytes nvram, hpet irqs [ 0.752907] device-mapper: uevent: version 1.0.3 [ 0.752976] device-mapper: ioctl: 4.20.0-ioctl (2011-02-02) initialised: [email protected] [ 0.753028] cpuidle: using governor ladder [ 0.753093] cpuidle: using governor menu [ 0.753096] EFI Variables Facility v0.08 2004-May-17 [ 0.753361] TCP cubic registered [ 0.753482] NET: Registered protocol family 10 [ 0.753966] NET: Registered protocol family 17 [ 0.753992] Registering the dns_resolver key type [ 0.754113] PM: Hibernation image not present or could not be loaded. [ 0.754131] registered taskstats version 1 [ 0.771553] Magic number: 15:152:507 [ 0.771667] rtc_cmos 00:08: setting system clock to 2011-10-22 00:30:48 UTC (1319243448) [ 0.772238] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 0.772240] EDD information not available. 
[ 0.774165] Freeing unused kernel memory: 984k freed [ 0.774504] Write protecting the kernel read-only data: 10240k [ 0.774755] Freeing unused kernel memory: 20k freed [ 0.775093] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input3 [ 0.779727] Freeing unused kernel memory: 1400k freed [ 0.801946] udevd[84]: starting version 173 [ 0.880950] sky2: driver version 1.28 [ 0.881046] sky2 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.881096] sky2 0000:02:00.0: setting latency timer to 64 [ 0.881197] sky2 0000:02:00.0: Yukon-2 Extreme chip revision 2 [ 0.881871] sky2 0000:02:00.0: irq 45 for MSI/MSI-X [ 0.896273] sky2 0000:02:00.0: eth0: addr 00:1d:72:ec:76:d5 [ 0.910630] ahci 0000:00:1f.2: version 3.0 [ 0.910647] ahci 0000:00:1f.2: PCI INT B -> GSI 19 (level, low) -> IRQ 19 [ 0.910710] ahci 0000:00:1f.2: irq 46 for MSI/MSI-X [ 0.910775] ahci: SSS flag set, parallel bus scan disabled [ 0.910812] ahci 0000:00:1f.2: AHCI 0001.0200 32 slots 4 ports 3 Gbps 0x33 impl SATA mode [ 0.910816] ahci 0000:00:1f.2: flags: 64bit ncq sntf stag pm led clo pio slum part ccc ems sxs [ 0.910821] ahci 0000:00:1f.2: setting latency timer to 64 [ 0.941773] scsi0 : ahci [ 0.941954] scsi1 : ahci [ 0.942038] scsi2 : ahci [ 0.942118] scsi3 : ahci [ 0.942196] scsi4 : ahci [ 0.942268] scsi5 : ahci [ 0.942332] ata1: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904100 irq 46 [ 0.942336] ata2: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904180 irq 46 [ 0.942339] ata3: DUMMY [ 0.942340] ata4: DUMMY [ 0.942344] ata5: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904300 irq 46 [ 0.942347] ata6: SATA max UDMA/133 abar m2048@0xf8904000 port 0xf8904380 irq 46 [ 1.028061] usb 1-5: new high speed USB device number 2 using ehci_hcd [ 1.181775] usbcore: registered new interface driver uas [ 1.260062] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) [ 1.261126] ata1.00: ATA-8: Hitachi HTS543225L9A300, FBEOC40C, max UDMA/133 [ 1.261129] ata1.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA [ 1.262360] ata1.00: configured for UDMA/133 [ 1.262518] scsi 0:0:0:0: Direct-Access ATA Hitachi HTS54322 FBEO PQ: 0 ANSI: 5 [ 1.262716] sd 0:0:0:0: Attached scsi generic sg0 type 0 [ 1.262762] sd 0:0:0:0: [sda] 488397168 512-byte logical blocks: (250 GB/232 GiB) [ 1.262824] sd 0:0:0:0: [sda] Write Protect is off [ 1.262827] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 [ 1.262851] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 1.287277] sda: sda1 sda2 sda3 [ 1.287693] sd 0:0:0:0: [sda] Attached SCSI disk [ 1.580059] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) [ 1.581188] ata2.00: ATAPI: HL-DT-STDVDRAM GT10N, 1.00, max UDMA/100 [ 1.582663] ata2.00: configured for UDMA/100 [ 1.584162] scsi 1:0:0:0: CD-ROM HL-DT-ST DVDRAM GT10N 1.00 PQ: 0 ANSI: 5 [ 1.585821] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray [ 1.585824] cdrom: Uniform CD-ROM driver Revision: 3.20 [ 1.585953] sr 1:0:0:0: Attached scsi CD-ROM sr0 [ 1.586038] sr 1:0:0:0: Attached scsi generic sg1 type 5 [ 1.632061] usb 6-1: new low speed USB device number 2 using uhci_hcd [ 1.908056] ata5: SATA link down (SStatus 0 SControl 300) [ 2.228065] ata6: SATA link down (SStatus 0 SControl 300) [ 2.228955] Initializing USB Mass Storage driver... [ 2.229052] usbcore: registered new interface driver usb-storage [ 2.229054] USB Mass Storage support registered. 
[ 2.235827] scsi6 : usb-storage 1-5:1.0 [ 2.235987] usbcore: registered new interface driver ums-realtek [ 2.244451] input: B16_b_02 USB-PS/2 Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb6/6-1/6-1:1.0/input/input4 [ 2.244598] generic-usb 0003:046D:C025.0001: input,hidraw0: USB HID v1.10 Mouse [B16_b_02 USB-PS/2 Optical Mouse] on usb-0000:00:1d.0-1/input0 [ 2.244620] usbcore: registered new interface driver usbhid [ 2.244622] usbhid: USB HID core driver [ 3.091083] EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null) [ 3.238275] scsi 6:0:0:0: Direct-Access Generic- Multi-Card 1.00 PQ: 0 ANSI: 0 CCS [ 3.348261] sd 6:0:0:0: Attached scsi generic sg2 type 0 [ 3.351897] sd 6:0:0:0: [sdb] Attached SCSI removable disk [ 47.138012] udevd[334]: starting version 173 [ 47.177678] lp: driver loaded but no devices found [ 47.197084] wmi: Mapper loaded [ 47.197526] acer_wmi: Acer Laptop ACPI-WMI Extras [ 47.210227] acer_wmi: Brightness must be controlled by generic video driver [ 47.566578] Disabling lock debugging due to kernel taint [ 47.584050] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 47.620666] type=1400 audit(1319239895.347:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=624 comm="apparmor_parser" [ 47.620934] type=1400 audit(1319239895.347:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=624 comm="apparmor_parser" [ 47.621108] type=1400 audit(1319239895.347:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=624 comm="apparmor_parser" [ 47.633056] [drm] Initialized drm 1.1.0 20060810 [ 47.722594] i915 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 47.722602] i915 0000:00:02.0: setting latency timer to 64 [ 47.807152] ndiswrapper (check_nt_hdr:141): kernel is 64-bit, but Windows driver is not 64-bit;bad magic: 010B [ 47.807159] ndiswrapper (load_sys_files:206): couldn't prepare driver 'rt2860' [ 47.807930] ndiswrapper (load_wrap_driver:108): couldn't load driver rt2860; check system log for messages from 'loadndisdriver' [ 47.856250] usbcore: registered new interface driver ndiswrapper [ 47.861772] i915 0000:00:02.0: irq 47 for MSI/MSI-X [ 47.861781] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 47.861783] [drm] Driver supports precise vblank timestamp query. [ 47.861842] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem [ 47.980620] fixme: max PWM is zero. [ 48.286153] fbcon: inteldrmfb (fb0) is primary device [ 48.287033] Console: switching to colour frame buffer device 170x48 [ 48.287062] fb0: inteldrmfb frame buffer device [ 48.287064] drm: registered panic notifier [ 48.333883] acpi device:02: registered as cooling_device2 [ 48.334053] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 48.334128] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 48.334203] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0 [ 48.334644] HDA Intel 0000:00:1b.0: power state changed by ACPI to D0 [ 48.334652] HDA Intel 0000:00:1b.0: power state changed by ACPI to D0 [ 48.334673] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21 [ 48.334737] HDA Intel 0000:00:1b.0: irq 48 for MSI/MSI-X [ 48.334772] HDA Intel 0000:00:1b.0: setting latency timer to 64 [ 48.356107] Adding 261116k swap on /host/ubuntu/disks/swap.disk. 
Priority:-1 extents:1 across:261116k [ 48.380946] hda_codec: ALC268: BIOS auto-probing. [ 48.390242] input: HDA Intel Mic as /devices/pci0000:00/0000:00:1b.0/sound/card0/input6 [ 48.390365] input: HDA Intel Headphone as /devices/pci0000:00/0000:00:1b.0/sound/card0/input7 [ 48.490870] EXT4-fs (loop0): re-mounted. Opts: errors=remount-ro,user_xattr [ 48.917990] ppdev: user-space parallel port driver [ 48.950729] type=1400 audit(1319239896.675:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=941 comm="apparmor_parser" [ 48.951114] type=1400 audit(1319239896.675:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=941 comm="apparmor_parser" [ 48.977706] Synaptics Touchpad, model: 1, fw: 7.2, id: 0x1c0b1, caps: 0xd04733/0xa44000/0xa0000 [ 49.048871] input: SynPS/2 Synaptics TouchPad as /devices/platform/i8042/serio1/input/input8 [ 49.078713] sky2 0000:02:00.0: eth0: enabling interface [ 49.079462] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 50.762266] sky2 0000:02:00.0: eth0: Link is up at 100 Mbps, full duplex, flow control rx [ 50.762702] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 54.751478] type=1400 audit(1319239902.475:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm-guest-session-wrapper" pid=1039 comm="apparmor_parser" [ 54.755907] type=1400 audit(1319239902.479:8): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=1040 comm="apparmor_parser" [ 54.756237] type=1400 audit(1319239902.483:9): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1040 comm="apparmor_parser" [ 54.756417] type=1400 audit(1319239902.483:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=1040 comm="apparmor_parser" [ 54.764825] type=1400 audit(1319239902.491:11): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince" pid=1041 comm="apparmor_parser" [ 54.768365] type=1400 audit(1319239902.495:12): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince-previewer" pid=1041 comm="apparmor_parser" [ 54.770601] type=1400 audit(1319239902.495:13): apparmor="STATUS" operation="profile_load" name="/usr/bin/evince-thumbnailer" pid=1041 comm="apparmor_parser" [ 54.770729] type=1400 audit(1319239902.495:14): apparmor="STATUS" operation="profile_load" name="/usr/share/gdm/guest-session/Xsession" pid=1038 comm="apparmor_parser" [ 54.775181] type=1400 audit(1319239902.499:15): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1043 comm="apparmor_parser" [ 54.775533] type=1400 audit(1319239902.499:16): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1043 comm="apparmor_parser" [ 54.936691] init: failsafe main process (891) killed by TERM signal [ 54.944583] init: apport pre-start process (1096) terminated with status 1 [ 55.000373] init: apport post-stop process (1160) terminated with status 1 [ 55.005291] init: gdm main process (1159) killed by TERM signal [ 59.782579] EXT4-fs (loop0): re-mounted. 
Opts: errors=remount-ro,user_xattr,commit=0 [ 60.992021] eth0: no IPv6 routers present [ 61.936072] device eth0 entered promiscuous mode [ 62.053949] Bluetooth: Core ver 2.16 [ 62.054005] NET: Registered protocol family 31 [ 62.054007] Bluetooth: HCI device and connection manager initialized [ 62.054010] Bluetooth: HCI socket layer initialized [ 62.054012] Bluetooth: L2CAP socket layer initialized [ 62.054993] Bluetooth: SCO socket layer initialized [ 62.058750] Bluetooth: RFCOMM TTY layer initialized [ 62.058758] Bluetooth: RFCOMM socket layer initialized [ 62.058760] Bluetooth: RFCOMM ver 1.11 [ 62.059428] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 62.059432] Bluetooth: BNEP filters: protocol multicast [ 62.460389] init: plymouth-stop pre-start process (1662) terminated with status 1
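
    The dmesg output above shows why ndiswrapper gives up: the Windows rt2860 driver it is asked to load is not 64-bit, while the kernel is. Below is a minimal diagnostic sketch for trying the in-kernel driver instead; it assumes the native rt2800pci module (the Ralink RT28xx driver shipped with Oneiric's 3.0 kernel) matches this chipset, which is an assumption rather than something stated in the post.

        # See which wireless device is present and which driver, if any, has claimed it
        lspci -nnk | grep -A3 -i network

        # Make sure the radio is not soft- or hard-blocked
        rfkill list

        # Unload the wrapper and try the native Ralink driver instead
        sudo modprobe -r ndiswrapper
        sudo modprobe rt2800pci

        # Check the kernel log for probe/firmware errors and look for a new wlan interface
        dmesg | tail -n 20
        iwconfig

    If rt2800pci loads but complains about missing firmware, the rt2860.bin image from the linux-firmware package is usually what it is asking for.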


  • CodePlex Daily Summary for Thursday, August 01, 2013

    CodePlex Daily Summary for Thursday, August 01, 2013Popular ReleasesmyCollections: Version 2.7.11.0: New in this version : Added Copy To functionality (useful if you have NAS or media player). You can now use you web cam to scan UPC Code or take picture for cover. Added Persian Language Improved Ukrainian Translation Improved Pdf export Improved Cover Flow Improved TMDB Information. Improved Zappiti and Dune player compatibility Improved GameDB provider. Fix IMDB provider. Fix issue with rating BugFixing and performance improvement.wsubi: wsubi-1.6: Enhancements included in this release: Implement issue #6 - Add a 'query' Command for .sql scripts You can now run T-SQL scripts with the application. Results can be out for review to the console or saved to a file. For more details on running queries, check the updated Documentation page.SuperSocket, an extensible socket application framework: SuperSocket 1.6 beta 2: The changes included in this release: introduced ServerManager improved the code about process level isolation added new configuration attribute "storeLocation" for the certificate node added sendTimeOut support for async sending fixed a bug that when you close a session, the data is being sent won't be sent suppressed the socket error 10060 fixed the default clear idle session parameters in the config model class added configuration section detecting and give proper exception ...GoAgent GUI: GoAgent GUI 1.4.0: ??????????: Windows XP SP3 Windows Vista SP1 Windows 7 Windows 8 Windows 8.1 ??Windows 8.1???????????? ????: .Net Framework 4.0 http://59.111.20.19/download/17718/33240761/4/exe/69/54/177181/dotNetFx40Fullx86x64.exe Microsoft Visual C++ 2010 Redistributable Package http://59.111.20.23/download/4894298/8555584/1/exe/28/176/1348305610524944/vcredistx86.exe ???????????????。 ?????Windows XP?Windows 7????,???????????,?issue??????。uygw@outlook.comMVC Generator: MVC Generator Visual Studio Addin: This is the latest build, this includes the MVCGenerator.dll, and Visual Studio Addin file. See the home page of this project for installation instructions.nopCommerce. Open source shopping cart (ASP.NET MVC): nopCommerce 3.10: Highlight features & improvements: • Performance optimization. • New more user-friendly product/product-variant logic. Now we'll have only products (simple and grouped). • Bundle products support added. • Allow a store owner to associate product image for product variant attribute values. To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).ExtJS based ASP.NET Controls: FineUI v3.3.1: ??FineUI ?? ExtJS ??? ASP.NET ???。 FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ???????。 ?????? IE 7.0、Firefox 3.6、Chrome 3.0、Opera 10.5、Safari 3.0+ ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license)。 ???? ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI ???? ExtJS ????????,???? ExtJS ?,???????????ExtJS?: 1. ????? FineUI ? ExtJS ?:http://fineui.com/bbs/fo...AutoNLayered - Domain Oriented N-Layered .NET 4.5: AutoNLayered v1.0.5: - Fix Dtos. Abstract collections replaced by concrete (correct serialization WCF). - OrderBy in navigation properties. - Unit Test with Fakes. - Map of entities/dto moved to application services. - Libraries updated. 
Warning using Fakes: http://connect.microsoft.com/VisualStudio/feedback/details/782031/visual-studio-2012-add-fakes-assembly-does-not-add-all-needed-referencesPath Copy Copy: 11.1: Minor release with two new features: Submenu's contextual menu item now has an icon next to it Added reference to JavaScript regular expression format in Settings application Since this release does not have any glaring bug fixes, it is more of an optional update for existing users. It depends on whether you want to be able to spot the Path Copy Copy submenu more easily. I recommend you install it to see if the icon makes sense. As always, please don't hesitate to leave feedback via Discus...Lib.Web.Mvc & Yet another developer blog: Lib.Web.Mvc 6.3.0: Lib.Web.Mvc is a library which contains some helper classes for ASP.NET MVC such as strongly typed jqGrid helper, XSL transformation HtmlHelper/ActionResult, FileResult with range request support, custom attributes and more. Release contains: Lib.Web.Mvc.dll with xml documentation file Standalone documentation in chm file and change log Library source code Sample application for strongly typed jqGrid helper is available here. Sample application for XSL transformation HtmlHelper/ActionRe...Media Companion: Media Companion MC3.574b: Some good bug fixes been going on with the new XBMC-Link function. Thanks to all who were able to do testing and gave feedback. New:* Added some adhoc extra General movie filters, one of which is Plot = Outline (see fixes above). To see the filters, add the following line to your config.xml: <ShowExtraMovieFilters>True</ShowExtraMovieFilters>. The others are: Imdb in folder name, Imdb in not folder name & Imdb not in folder name & year mismatch. * Movie - display <tag> list on browser tab ...OfflineBrowser: Preview Release with Search: I've added search to this release.VG-Ripper & PG-Ripper: VG-Ripper 2.9.46: changes FIXED LoginFIM 2010 GoogleApps MA: GoogleAppsMA1.1.2: Fixed bug during import. - Fixed following bug. - In some condition, 'dn is missing' error occur.Install Verify Tool: Install Verify Tool V 1.0 With Client: Use a windows service to do a remote validation work. QA can use this tool to verify daily build installation.C# Intellisense for Notepad++: 'Namespace resolution' release: Auto-Completion from "empty spot" Add missing "using" statementsOpen Source Job board: Version X3: Full version of job board, didn't have monies to fund it so it's free.DSeX DragonSpeak eXtended Editor: Version 1.0.116.0726: Cleaned up Wizard Interface Added Functionality for RTF UndoRedo IE Inserting Text from Wizard output to the Tabbed Editor Added Sanity Checks to Search/Replace Dialog to prevent crashes Fixed Template and Paste undoredo Fix Undoredo Blank spots Added New_FileTag Const = "(New FIle)" Added Filename to Modified FileClose queries (Thanks Lothus Marque)Math.NET Numerics: Math.NET Numerics v2.6.0: What's New in Math.NET Numerics 2.6 - Announcement, Explanations and Sample Code. New: Linear Curve Fitting Linear least-squares fitting (regression) to lines, polynomials and linear combinations of arbitrary functions. Multi-dimensional fitting. Also works well in F# with the F# extensions. New: Root Finding Brent's method. ~Candy Chiu, Alexander Täschner Bisection method. ~Scott Stephens, Alexander Täschner Broyden's method, for multi-dimensional functions. 
~Alexander Täschner ...mojoPortal: 2.3.9.8: see release notes on mojoportal.com https://www.mojoportal.com/mojoportal-2398-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you to use .NET 4 with either .NET 4 or ideally .NET 4.5 hosting, we will probably drop support for .NET 3.5 in the near future. The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code and are not intended for use in Visual Studio. To downl...New Projects.NET Micro Framework for STM32F4 with GCC support: The project adds GCC compiler support to .NET Micro Framework code for the STM32F4 family of ARM Cortex-based MCUs originally created by Oberon microsystems.AD Group Comparison Tool: This tool scans Active Directory for groups with matching or empty membership lists, identifying redundant groups that can possibly be eliminated.AdvGenWebRSSReader: This project is to build a portal to manage the rss feed.Best Framework: Using This framework most of .net functions that needs a lot of code writing became available almost in 1 line(s) of code. CargoOnLine: ????????cRumble Framework: C# Reporting Framework for support different technologies and a uncoupled reports.DbDataSource: DbDataSource is a ASP.NET WebForms DataSource control for simple use with EntityFramework CodeFirst.Excel Powershell Library: ExcelPSLib is a PowerShell Module that allows easy creation of XLSX file by using the EPPlus 3.1 .Net LibraryGenProj, a tool for automatic maintenance of Visual Studio .csproj files: Creates a .csproj by scanning folders for files and injecting XML into a previously created .csproj template. Uses a configuration file to control behavior.HLSLBuild: fxc integration for MSBuild and Visual Studio.HotelManagerByHuaibao: Here is a hotel management system on developing.ItemMover ..:: I Like SharePoint ::..: The ItemMover offers your users the ability to move list items between any folder from content type “Folder Content Types”. JadeTours: This is the website for JadeToursNaughty Dog Texture Viewer: A simple program that can view textures inside .pak files from games like The Last of Us and Uncharted series.Number Guessing Game: This is my version of the number guessing game. The computer will generate a number between 1-20 and the goal is to guess that number.prakark07312013Git01: *bold* _italics_ +underline+ ! Heading 1 !! Heading 2 * Bullet List ** Bullet List 2 # Number List ## Number List 2 [another wiki page] [url:http://www.example.prakark07312013Hg01: https://ajaxcontroltoolkit.codeplex.com/workitem/list/basicprakark07312013TFS01: *bold* _italics_ +underline+ ! Heading 1 !! 
Heading 2 * Bullet List ** Bullet List 2 # Number List ## Number List 2 [another wiki page] [url:http://www.example.PythonCode: Some python code.S1ToKindle: send s1 post to kindleSAML2: An implementation of the SAML 2 specification for .NET.SAML2.Logging.CommonLogging: A logging provider for SAML2 based on Common.Logging.SAML2.Logging.Log4Net: A logging provider for SAML2 based on Log4Net.SAML2.Profiles.DKSAML20: Extension profile for SAML2 which provides OASIS SAML 2.0 validation for the DK specification.SharePoint 2010 Export User Information to Text file, SQL server: This project will help you to export general information about uses from sharepoint 2010 to text file, which you can export into any other database like SQLtestdd07312013git: ftestdd07312013tfs: hjVarian Developers Forum: This project supports Varian collaborators and customers in their work with Varian Medical Systems public APIs.vb??????: vb??????Vinyl: Vinyl is a way to electronically keep track of your record collection.Visual Basic Web Browser: Web browser in visual basicWorkerHourManager: hour managment system,ASP,.NET,college


  • l2tp / ipsec debian Openswan U2.6.38 does not connect

    - by locojay
    i am trying to get ipsec/l2tp running on a debian server with an iphone as a client but always get: Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [RFC 3947] method set to=115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: ignoring Vendor ID payload [FRAGMENTATION 80000000] Dec 2 21:00:04 vpn pluto[22711]: packet from <clientip>:43598: received Vendor ID payload [Dead Peer Detection] Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: responding to Main Mode from unknown peer <clientip> Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R1: sent MR1, expecting MI2 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): both are NATed Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: STATE_MAIN_R2: sent MR2, expecting MI3 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: Main mode peer ID is ID_IPV4_ADDR: '10.2.210.176' Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[4] <clientip> #20: switched from "L2TP-PSK-noNAT" to "L2TP-PSK-noNAT" Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0} Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: new NAT mapping for #20, was <clientip>:43598, now <clientip>:49826 Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: STATE_MAIN_R3: sent MR3, ISAKMP SA 
established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Dec 2 21:00:04 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: Dead Peer Detection (RFC 3706): enabled Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: the peer proposed: <public ip>/32:17/1701 -> 10.2.210.176/32:17/0 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: responding to Quick Mode proposal {msgid:311d3282} Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: us: 171.138.2.13<171.138.2.13>:17/1701 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: them: <clientip>[10.2.210.176]:17/61719 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: Dead Peer Detection (RFC 3706): enabled Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Dec 2 21:00:05 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #21: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x05e23c9a <0x216077a9 xfrm=AES_256-HMAC_SHA1 NATOA=10.2.210.176 NATD=<clientip>:49826 DPD=enabled} Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA(0x05e23c9a) payload: deleting IPSEC State #21 Dec 2 21:00:26 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received and ignored informational message Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip> #20: received Delete SA payload: deleting ISAKMP State #20 Dec 2 21:00:27 vpn pluto[22711]: "L2TP-PSK-noNAT"[5] <clientip>: deleting connection "L2TP-PSK-noNAT" instance with peer <clientip> {isakmp=#0/ipsec=#0} Dec 2 21:00:27 vpn pluto[22711]: packet from <clientip>:49826: received and ignored informational message Dec 2 21:00:27 vpn pluto[22711]: ERROR: asynchronous network error report on eth0 (sport=4500) for message to <clientip> port 49826, complainant <clientip>: Connection refused [errno 111, origin ICMP type 3 code 3 (not authenticated)] my setup looks like this verizon fios actiontec -- DMZ-- ddwrt router -- debian xen instance actiontec : 192.168.1.1 ddwrt: 171.138.2.1 debian xen server: 171.138.2.13 forwarded udp 500, 4500, 1701 on ddwrt to debian xen instance. vpn passthrough is enabled /etc/ipsec.conf config setup dumpdir=/var/run/pluto/ nat_traversal=yes virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12,%v4:25.0.0.0/8,%v6:fd00::/8,%v6:fe80::/10,%v4:!171.138.2.0/24,%v4:!192.168.1.0/24 protostack=netkey # Add connections here conn L2TP-PSK-NAT rightsubnet=vhost:%priv also=L2TP-PSK-noNAT conn L2TP-PSK-noNAT authby=secret pfs=no auto=add keyingtries=3 # we cannot rekey for %any, let client rekey rekey=no # Apple iOS doesn't send delete notify so we need dead peer detection # to detect vanishing clients dpddelay=30 dpdtimeout=120 dpdaction=clear # Set ikelifetime and keylife to same defaults windows has ikelifetime=8h keylife=1h # l2tp-over-ipsec is transport mode type=transport # left=171.138.2.13 # # For updated Windows 2000/XP clients, # to support old clients as well, use leftprotoport=17/%any leftprotoport=17/1701 # # The remote user. 
# right=%any # Using the magic port of "%any" means "any one single port". This is # a work around required for Apple OSX clients that use a randomly # high port. rightprotoport=17/%any #force all to be nat'ed. because of ios conn passthrough-for-non-l2tp type=passthrough left=171.138.2.13 leftnexthop=171.138.2.1 right=0.0.0.0 rightsubnet=0.0.0.0/0 auto=route /etc/xl2tp/xl2tp.conf [global] ipsec saref = no listen-addr = 171.138.2.13 ;port = 1701 ;debug network = yes ;debug tunnel = yes ;debug network = yes ;debug packet = yes [lns default] ip range = 171.138.2.231-171.138.2.239 local ip = 171.138.2.13 assign ip = yes require chap = no refuse pap = no require authentication = no ;name = OpenswanVPN ppp debug = yes pppoptfile = /etc/ppp/options.xlt2tpd lenght bit = yes /etc/ppp/options.xl2tpd ;require-mschap-v2 pcp-accept-local ipcp-accept-local ipcp-accept-remote ;ms-dns 171.138.2.1 ms-dns 192.168.1.1 ms-dns 8.8.8.8 name l2tpd noccp auth crtscts idle 1800 mtu 1410 mru 1410 lock proxyarp connect-delay 5000 debug dump logfd 2 logfile /var/log/xl2tpd.log ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.0.0-1-amd64 (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Two or more interfaces found, checking IP forwarding [FAILED] Checking NAT and MASQUERADEing [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] The failed can be ignored i guess since cat /proc/sys/net/ipv4/ip_forward returns 1 any help would be much appreciated as i don't have any idea why this is not working
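
    The only check that ipsec verify flags is IP forwarding, and the tunnel also has to traverse two NAT hops before it reaches the Xen instance. A short sketch of how to make forwarding persistent and confirm that the forwarded UDP ports actually reach pluto and xl2tpd is below; it assumes Debian's standard log locations and is a diagnostic aid, not a verified fix for this setup.

        # Enable IPv4 forwarding now and keep it across reboots
        sudo sysctl -w net.ipv4.ip_forward=1
        echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf

        # Confirm pluto and xl2tpd are listening on the ports the ddwrt router forwards
        sudo netstat -ulnp | grep -E ':(500|4500|1701) '

        # Capture the IKE/NAT-T/L2TP traffic while the iPhone connects
        sudo tcpdump -ni eth0 'udp port 500 or udp port 4500 or udp port 1701'

        # Watch the negotiation and PPP setup live
        sudo tail -f /var/log/auth.log /var/log/xl2tpd.log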


  • Websphere federated repository for Active Directory

    - by Drakiula
    Hi, What I am trying to achieve is to have Websphere 6.1 use Active Directory users authentication. Websphere is running on Windows 2008 R2. What I've done already: Succesfully setup a federated repository for Windows Active Directory (LDAP); Create a realm definition for the federated repository previously defined; Set the realm definition as the current real definition. Stop the Websphere service. When I attempt to start the Websphere service again, it crashes with the following stacktrace: ------Start of DE processing------ = [9/3/10 2:36:14:133 PDT] , key = com.ibm.websphere.security.EntryNotFoundException com.ibm.ws.security.registry.UserRegistryImpl.createCredential 824 Exception = com.ibm.websphere.security.EntryNotFoundException Source = com.ibm.ws.security.registry.UserRegistryImpl.createCredential probeid = 824 Stack Dump = com.ibm.websphere.wim.exception.EntityNotFoundException: CWWIM4001E The 'null' entity was not found. at com.ibm.ws.wim.registry.util.UniqueIdBridge.getUniqueUserId(UniqueIdBridge.java:233) at com.ibm.ws.wim.registry.WIMUserRegistry$6.run(WIMUserRegistry.java:351) at com.ibm.ws.wim.security.authz.jacc.JACCSecurityManager.runAsSuperUser(JACCSecurityManager.java:500) at com.ibm.ws.wim.security.authz.ProfileSecurityManager.runAsSuperUser(ProfileSecurityManager.java:964) at com.ibm.ws.wim.registry.WIMUserRegistry.getUniqueUserId(WIMUserRegistry.java:340) at com.ibm.ws.security.registry.UserRegistryImpl.createCredential(UserRegistryImpl.java:750) at com.ibm.ws.security.ltpa.LTPAServerObject.authenticate(LTPAServerObject.java:776) at com.ibm.ws.security.server.lm.ltpaLoginModule.login(ltpaLoginModule.java:453) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at javax.security.auth.login.LoginContext.invoke(LoginContext.java:795) at javax.security.auth.login.LoginContext.access$000(LoginContext.java:209) at javax.security.auth.login.LoginContext$4.run(LoginContext.java:709) at java.security.AccessController.doPrivileged(AccessController.java:246) at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:706) at javax.security.auth.login.LoginContext.login(LoginContext.java:603) at com.ibm.ws.security.auth.JaasLoginHelper.jaas_login(JaasLoginHelper.java:376) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3513) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3306) at com.ibm.ws.security.auth.ContextManagerImpl.login(ContextManagerImpl.java:3086) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:2180) at com.ibm.ws.security.auth.ContextManagerImpl.getServerSubjectInternal(ContextManagerImpl.java:1972) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2530) at com.ibm.ws.security.auth.ContextManagerImpl.initialize(ContextManagerImpl.java:2560) at com.ibm.ws.security.core.SecurityContext.enable(SecurityContext.java:83) at com.ibm.ws.security.core.distSecurityComponentImpl.initialize(distSecurityComponentImpl.java:379) at com.ibm.ws.security.core.distSecurityComponentImpl.startSecurity(distSecurityComponentImpl.java:336) at com.ibm.ws.security.core.SecurityComponentImpl.startSecurity(SecurityComponentImpl.java:105) at 
com.ibm.ws.security.core.ServerSecurityComponentImpl.start(ServerSecurityComponentImpl.java:283) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ApplicationServerImpl.start(ApplicationServerImpl.java:197) at com.ibm.ws.runtime.component.ContainerImpl.startComponents(ContainerImpl.java:977) at com.ibm.ws.runtime.component.ContainerImpl.start(ContainerImpl.java:673) at com.ibm.ws.runtime.component.ServerImpl.start(ServerImpl.java:526) at com.ibm.ws.runtime.WsServerImpl.bootServerContainer(WsServerImpl.java:192) at com.ibm.ws.runtime.WsServerImpl.start(WsServerImpl.java:140) at com.ibm.ws.runtime.WsServerImpl.main(WsServerImpl.java:461) at com.ibm.ws.runtime.WsServer.main(WsServer.java:59) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at com.ibm.wsspi.bootstrap.WSLauncher.launchMain(WSLauncher.java:183) at com.ibm.wsspi.bootstrap.WSLauncher.main(WSLauncher.java:90) at com.ibm.wsspi.bootstrap.WSLauncher.run(WSLauncher.java:72) at org.eclipse.core.internal.runtime.PlatformActivator$1.run(PlatformActivator.java:78) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:92) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:68) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:400) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:177) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:79) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:618) at org.eclipse.core.launcher.Main.invokeFramework(Main.java:336) at org.eclipse.core.launcher.Main.basicRun(Main.java:280) at org.eclipse.core.launcher.Main.run(Main.java:977) at com.ibm.wsspi.bootstrap.WSPreLauncher.launchEclipse(WSPreLauncher.java:329) at com.ibm.wsspi.bootstrap.WSPreLauncher.main(WSPreLauncher.java:92) Dump of callerThis = Object type = com.ibm.ws.security.registry.UserRegistryImpl com.ibm.ws.security.registry.UserRegistryImpl@68a068a0 Anybody maybe has a hint on this? I followed the exact steps described in the IBM Infocenter for setting this up. Thanks in advance for the help.
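
    When the server cannot start because the newly configured realm cannot resolve the administrative user, a commonly used recovery path is to disable administrative security from a local (offline) wsadmin session, start the server, and then correct the federated repository and primary administrative ID in the console before re-enabling security. A hedged sketch is below; the profile path is an assumption, securityoff is the Jacl helper available in local mode on WebSphere 6.1, and on the Windows 2008 R2 host from this post the .bat equivalents of these scripts apply.

        # Local-mode wsadmin does not need a running server; the profile path is assumed
        cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin

        # Turn administrative security off so the server can start again
        ./wsadmin.sh -conntype NONE -c securityoff

        # Start the server, fix the repository / primary admin ID, then re-enable security
        ./startServer.sh server1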


  • Error installing pkgconfig via macports

    - by Greg K
    I installed Macports 1.8.2 from a DMG. That seemed to install fine. I ran sudo port selfupdate to make sure my ports tree was current. I then tried to install bindfs as I want to mount some directories in my OS X file system (like you can do with mount --bind in linux). pkgconfig and macfuse are two dependencies of bindfs. I had trouble installing bindfs due to errors installing pkgconfig, so I tried to just install pkgconfig, here's the debug output from sudo port install pkgconfig: $ sudo port -d install pkgconfig DEBUG: Found port in file:///opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig DEBUG: Changing to port directory: /opt/local/var/macports/sources/rsync.macports.org/release/ports/devel/pkgconfig DEBUG: OS Platform: darwin DEBUG: OS Version: 10.3.0 DEBUG: Mac OS X Version: 10.6 DEBUG: System Arch: i386 DEBUG: setting option os.universal_supported to yes DEBUG: org.macports.load registered provides 'load', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.unload registered provides 'unload', a pre-existing procedure. Target override will not be provided DEBUG: org.macports.distfiles registered provides 'distfiles', a pre-existing procedure. Target override will not be provided DEBUG: adding the default universal variant DEBUG: Reading variant descriptions from /opt/local/var/macports/sources/rsync.macports.org/release/ports/_resources/port1.0/variant_descriptions.conf DEBUG: Requested variant darwin is not provided by port pkgconfig. DEBUG: Requested variant i386 is not provided by port pkgconfig. DEBUG: Requested variant macosx is not provided by port pkgconfig. ---> Computing dependencies for pkgconfig DEBUG: Executing org.macports.main (pkgconfig) DEBUG: Skipping completed org.macports.fetch (pkgconfig) DEBUG: Skipping completed org.macports.checksum (pkgconfig) DEBUG: Skipping completed org.macports.extract (pkgconfig) DEBUG: Skipping completed org.macports.patch (pkgconfig) ---> Configuring pkgconfig DEBUG: Using compiler 'Mac OS X gcc 4.2' DEBUG: Executing org.macports.configure (pkgconfig) DEBUG: Environment: CFLAGS='-O2 -arch x86_64' CPPFLAGS='-I/opt/local/include' CXXFLAGS='-O2 -arch x86_64' MACOSX_DEPLOYMENT_TARGET='10.6' CXX='/usr/bin/g++-4.2' F90FLAGS='-O2 -m64' LDFLAGS='-L/opt/local/lib' OBJC='/usr/bin/gcc-4.2' FCFLAGS='-O2 -m64' INSTALL='/usr/bin/install -c' OBJCFLAGS='-O2 -arch x86_64' FFLAGS='-O2 -m64' CC='/usr/bin/gcc-4.2' DEBUG: Assembled command: 'cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig' checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... no checking whether to enable maintainer-specific portions of Makefiles... no checking build system type... i386-apple-darwin10.3.0 checking host system type... i386-apple-darwin10.3.0 checking for style of include used by make... none checking for gcc... /usr/bin/gcc-4.2 checking for C compiler default output file name... configure: error: C compiler cannot create executables See `config.log' for more details. 
Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 DEBUG: Backtrace: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23" && ./configure --prefix=/opt/local --enable-indirect-deps --with-pc-path=/opt/local/lib/pkgconfig:/opt/local/share/pkgconfig " returned error 77 while executing "$procedure $targetname" Warning: the following items did not execute (for pkgconfig): org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install Error: Status 1 encountered during processing.

    I have only recently installed Xcode 3.2.2 (prior to installing MacPorts). Am I right in thinking this is the issue here: configure: error: C compiler cannot create executables?
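
    "configure: error: C compiler cannot create executables" almost always points at the toolchain rather than the port, so the quickest check is to compile a trivial program with the exact compiler MacPorts selected and then read the config.log the error refers to. A small sketch, assuming the gcc-4.2 path and work directory shown in the debug output above:

        # Compile and run a one-line test program with the compiler MacPorts is using
        printf '#include <stdio.h>\nint main(void){ puts("hello"); return 0; }\n' > /tmp/hello.c
        /usr/bin/gcc-4.2 -arch x86_64 /tmp/hello.c -o /tmp/hello && /tmp/hello

        # The real failure reason is recorded here by the pkgconfig configure script
        less /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_pkgconfig/work/pkg-config-0.23/config.log

    If the test compile fails, reinstalling Xcode 3.2.2 with the optional UNIX development tools selected (or pointing MacPorts at a compiler that does work) is the usual remedy.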


  • Squid refresh_pattern won't cache "Expires: ..."

    - by Marcelo Cantos
    Background I frequent the OpenGL ES documentation site at http://www.khronos.org/opengles/sdk/1.1/docs/man/. Even though the content is completely static, it seems to force a reload on every single page I visit, which is very annoying. I have a squid 3.0 proxy set up (apt-get install squid3 on Ubuntu 10.04), and I added a refresh_pattern to force the pages to cache: refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ … 1440 20% 10080 … override-expire ignore-reload ignore-no-cache ignore-private ignore-no-store This is all on one line, of course. While this appears to work for the XHTML documents (e.g., glBindTexture), it fails to cache the linked content, such as the DTD, some .ent files (?) and some XSL files. The delay in fetching these extra files delays rendering of the main document, so my principal annoyance isn't fixed. The only difference I can glean with these ancillary files is that they come with an Expires: header set to the current time, whereas the XHTML document has none. But I would have expected the override-expire option to fix this. I have confirmed that documents have the same base URL. I have also truncated the pattern to varying degrees, with no effect. My questions Why does the override-expire option not seem to work? Is there a simple way to tell squid to unconditionally cache a document, no matter what it finds in the response headers? (Hopefully) relevant output cache.log Jan 01 10:33:30 1970/06/25 21:18:27| Processing Configuration File: /etc/squid3/squid.conf (depth 0) Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'override-expire' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-reload' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-cache' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-no-store' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| WARNING: use of 'ignore-private' in 'refresh_pattern' violates HTTP Jan 01 10:33:30 1970/06/25 21:18:27| DNS Socket created at 0.0.0.0, port 37082, FD 10 Jan 01 10:33:30 1970/06/25 21:18:27| Adding nameserver 192.168.1.1 from /etc/resolv.conf Jan 01 10:33:30 1970/06/25 21:18:27| Accepting HTTP connections at 0.0.0.0, port 3128, FD 11. Jan 01 10:33:30 1970/06/25 21:18:27| Accepting ICP messages at 0.0.0.0, port 3130, FD 13. Jan 01 10:33:30 1970/06/25 21:18:27| HTCP Disabled. Jan 01 10:33:30 1970/06/25 21:18:27| Loaded Icons. Jan 01 10:33:30 1970/06/25 21:18:27| Ready to serve requests. 
access.log Jun 25 21:19:35 2010.710 0 192.168.1.50 TCP_MEM_HIT/200 2452 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/glBindTexture.xml - NONE/- text/xml Jun 25 21:19:36 2010.263 543 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.276 556 192.168.1.50 TCP_MISS/304 370 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.666 278 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent - DIRECT/74.54.224.215 - Jun 25 21:19:36 2010.958 279 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.251 276 192.168.1.50 TCP_MISS/304 322 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent - DIRECT/74.54.224.215 - Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl - NONE/- text/xml Jun 25 21:19:37 2010.332 0 192.168.1.50 TCP_IMS_HIT/304 316 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl - NONE/- text/xml store.log Jun 25 21:19:36 2010.263 RELEASE -1 FFFFFFFF D3056C09B42659631A65A08F97794E45 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml1-transitional.dtd Jun 25 21:19:36 2010.276 RELEASE -1 FFFFFFFF 9BF7F37442FD84DD0AC0479E38329E3C 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl Jun 25 21:19:36 2010.666 RELEASE -1 FFFFFFFF 7BCFCE88EC91578C8E2589CB6310B3A1 304 1277464776 -1 1277464776 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-lat1.ent Jun 25 21:19:36 2010.958 RELEASE -1 FFFFFFFF ECF1B24E437CFAA08A2785AA31A042A0 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-symbol.ent Jun 25 21:19:37 2010.251 RELEASE -1 FFFFFFFF 36FE3D76C80F0106E6E9F3B7DCE924FA 304 1277464777 -1 1277464777 unknown -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/xhtml-special.ent Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF A33E5A5CCA2BFA059C0FA25163485192 304 1277462871 1221139523 1277462871 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/ctop.xsl Jun 25 21:19:37 2010.332 RELEASE -1 FFFFFFFF E2CF8854443275755915346052ACE14E 304 1277462872 1221139523 1277462872 text/xml -1/0 GET http://www.khronos.org/opengles/sdk/1.1/docs/man/pmathml.xsl
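
    For reference, refresh_pattern must sit on a single line in squid.conf, and whether a given object really gets stored can be checked from the command line by fetching it twice through the proxy. A minimal sketch, assuming the squid3 package layout from Ubuntu 10.04 used in this post:

        # /etc/squid3/squid.conf -- the pattern from the question, kept on one line
        # refresh_pattern ^http://www.khronos.org/opengles/sdk/1\.1/docs/man/ 1440 20% 10080 override-expire ignore-reload ignore-no-cache ignore-private ignore-no-store

        # Reload the configuration, then request the same object twice through the proxy
        sudo squid3 -k reconfigure
        squidclient -h localhost -p 3128 -m HEAD http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl
        squidclient -h localhost -p 3128 -m HEAD http://www.khronos.org/opengles/sdk/1.1/docs/man/mathml.xsl

        # A TCP_HIT / TCP_MEM_HIT on the second request means the object came from cache;
        # a TCP_MISS/304 means the revalidation was still forwarded to khronos.org
        tail -n 4 /var/log/squid3/access.log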

