Search Results

Search found 24458 results on 979 pages for 'message procedure'.


  • Customizing JBar for Notifications

    - by Ryan Ohs
    Lately I've been using JBar, a very neat jQuery plugin for displaying notifications in my web applications. Unfortunately the original version of JBar only supports binding to the click event of a DOM item. In order to get around this limitation I have modified the source code and posted an updated version on my GitHub account here. The modified version allows you to display a JBar notification by calling a method. I typically use it to display success or failure messages when doing Ajax calls. I have also included some additional CSS and JS so that you can display different styles of notifications. showNotification(message) shows a green "success" message. showWarning(message) shows an orange "warning" message. showMessage(message, className) allows you to specify a custom class to apply to the notification for additional styling purposes. A web page with samples is included. Get the code here.

    Read the article

  • BizTalk: Sample: Context routing and Throttling with orchestration

    - by Leonid Ganeline
    The sample demonstrates using an orchestration for throttling, together with context routing. Usually throttling is implemented at the host level (in BizTalk 2010 we can also use host instance level throttling). Here throttling is demonstrated with an orchestration convoy that slows down the message flow from some customers. The sample implements a sort of quality-of-service agreement layer for different kinds of customers. The sample also demonstrates context routing between orchestrations, which has several advantages over content routing. For example, we don't have to create a property schema and promote properties on the schemas, and we don't have to change the message content to change the routing.
    Use case: The BizTalk application has a main processing orchestration that processes all input messages. The application usually works as an OLTP application. Input messages come in random order without peaks, a typical scenario for on-line users. But sometimes big batch payloads arrive. These batches overload the processing orchestrations. All processes activated by on-line users after the payload go to the same queue and are processed only after the payload, so on-line users can see a significant delay in processing. It can be minutes or hours, depending on the batch size.
    Requirements: On-line users' processing should work without delays. Big batches cannot disturb on-line users. There should be a higher priority for the on-line users and a lower priority for the batches.
    Design: The decision is to divide the message flow into two branches, one for on-line users and a second for batches. The batch branch feeds messages to the processing line with low priority, and the on-line users' branch with high priority. All messages arrive through a high-speed receive port. The BTS.ReceivePortName context property is used for routing. The Router orchestration separates the messages sent by on-line users from the batch messages. But the Router does not use the BizTalk-provided value of this property; the Router sets this value itself, using the content of the messages to decide whether a message comes from on-line users or from batches. The BTS.ReceivePortName context property is changed accordingly; its value works as a recipient address, the "To" address for the next orchestrations. Those next orchestrations are the BatchBottleneck and the MainProcess orchestrations. Messages whose context equals "ToBatch" are picked up by the BatchBottleneck orchestration. It is a unified convoy orchestration and it throttles the message flow, delaying message delivery to the MainProcess orchestration. The BatchBottleneck orchestration changes the message context to "ToProcess" and sends the messages one after another with a small delay in between. The delay can be configured in the BizTalk config file as:
    <appSettings> <add key="GLD_Tests_TwoWayRouting_BatchBottleneck_DelayMillisec" value="100"/> </appSettings>
    Of course, messages whose context equals "ToProcess" are picked up by the MainProcess orchestration.
    NOTES: Filters with string values: in orchestrations (the first Receive shape in the orchestration) use string values WITH quotes; in Send Ports use string values WITHOUT quotes. Filters on the Send Ports are dynamic; we can change them at run-time. Filters on the orchestrations are static; we can change them only at design-time.
To check the existence of a promoted property inside an orchestration, use the Expression shape with a construction like this: if (BTS.ReceivePortName exists myMessage) { …; } This is not possible in the Message Assignment shape, because using the "if" statement inside a Message Assignment is prohibited. Several predefined context properties can behave in specific ways. Take MessageTracking.OriginatingMessage or XMLNORM.DocumentSpecName: they require that certain internal rules be applied to the format or usage of these properties. The MessageTracking.* properties require tracking to be in use, and you can get unexpected run-time errors in some cases. My recommendation is to use a very limited set of the predefined context properties. To "attach" a new promoted property to the message, we have to use correlation. The correlation type should include this property. [Here is a good explanation by Saravana] The sample code is here [sorry, temporary troubles with CodePlex].

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #048

    - by Pinal Dave
    Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane.
    2007
    Order of Result Set of SELECT Statement on Clustered Indexed Table When ORDER BY is Not Used The above theory is true in most of the cases. However, SQL Server does not use that logic when returning the resultset. SQL Server always returns the resultset which it can return fastest. In most of the cases the resultset which can be returned fastest is the resultset which is returned using the clustered index.
    Effect of TRANSACTION on Local Variable – After ROLLBACK and After COMMIT One of the junior developers asked me this question (what will be the effect of a TRANSACTION on a local variable, after ROLLBACK and after COMMIT?) while I was rushing to an important meeting. I was getting late, so I asked him to talk with his Application Tech Lead. When I came back from the meeting, both of them were looking for me. They said they were confused. I quickly wrote down the following example for them.
    2008
    SQL SERVER – Guidelines and Coding Standards Complete List Download Coding standards and guidelines are very important for any developer on the path of a successful career. A coding standard is a set of guidelines, rules and regulations on how to write code. Coding standards should be flexible enough, or should take care of situations, so that they do not prevent best practices for coding. They are basically the guidelines that one should follow for better understanding. Download the complete Guidelines and Coding Standards list.
    Get Answer in Float When Dividing of Two Integer Many times we have requirements for calculations among different fields in tables. One of the software developers here was trying to divide fields having integer values, which gave incorrect integer results where accurate results including decimals were expected.
    Puzzle – Computed Columns Datatype Explanation SQL Server automatically does a cast to the data type having the highest precedence. So the result of INT and INT will be INT, but INT and FLOAT will be FLOAT because FLOAT has a higher precedence. If you want a different data type, you need to do an EXPLICIT cast (see the short example at the end of this recap).
    Renaming SP is Not Good Idea – Renaming Stored Procedure Does Not Update sys.procedures I have written many articles about renaming tables, columns and procedures (SQL SERVER – How to Rename a Column Name or Table Name); here I found something interesting about renaming stored procedures and felt like sharing it with you all. The interesting fact is that when we rename a stored procedure using the SP_Rename command, the stored procedure is successfully renamed. But when we test the procedure using sp_helptext, the procedure still has the old name instead of the new one.
    2009
    Insert Values of Stored Procedure in Table – Use Table Valued Function It is clear from the result set that the version where I have converted the stored procedure logic into a table valued function is much better in terms of logic, as it saves a large number of operations. However, this option should be used carefully. The performance of a stored procedure is "usually" better than that of functions.
    Interesting Observation – Index on Index View Used in Similar Query Recently, I was working on an optimization project for one of the largest organizations.
    While working on one of the queries, we came across a very interesting observation. We found that there was a query on the base table, and when the query was run it used an index which did not exist on the base table. On careful examination, we found that the query was using an index that was on another view. This was very interesting, as I have personally never experienced a scenario like this. In simple words, "a query on the base table can use the index created on the indexed view of the same base table."
    Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries Working with SQL Server has never seemed monotonous – no matter how long one has worked with it. Quite often, I come across some excellent comments that I feel like acknowledging as blog posts. Recently, I wrote an article on SQL SERVER – Execution Plan and Results of Aggregate Concatenation Queries Depend Upon Expression Location, which was well received in the community.
    2010
    I encourage all of you to go through the complete series and write your own on the subject. If you write an article and send it to me, I will publish it on this blog with due credit to you. If you write on your own blog, I will update this blog post pointing to your blog post.
    SQL SERVER – ORDER BY Does Not Work – Limitation of the View 1
    SQL SERVER – Adding Column is Expensive by Joining Table Outside View – Limitation of the View 2
    SQL SERVER – Index Created on View not Used Often – Limitation of the View 3
    SQL SERVER – SELECT * and Adding Column Issue in View – Limitation of the View 4
    SQL SERVER – COUNT(*) Not Allowed but COUNT_BIG(*) Allowed – Limitation of the View 5
    SQL SERVER – UNION Not Allowed but OR Allowed in Index View – Limitation of the View 6
    SQL SERVER – Cross Database Queries Not Allowed in Indexed View – Limitation of the View 7
    SQL SERVER – Outer Join Not Allowed in Indexed Views – Limitation of the View 8
    SQL SERVER – SELF JOIN Not Allowed in Indexed View – Limitation of the View 9
    SQL SERVER – Keywords View Definition Must Not Contain for Indexed View – Limitation of the View 10
    SQL SERVER – View Over the View Not Possible with Index View – Limitations of the View 11
    2011
    Startup Parameters Easy to Configure If you are a regular reader of this blog, you must be aware that I have written about SQL Server Denali recently. Here is the quickest way to reach the screen where we can change the startup parameters: go to SQL Server Configuration Manager >> SQL Server Services >> right-click on the server >> Properties >> Startup Parameters.
    2012
    Validating Unique Columnname Across Whole Database I sometimes come across very strange requirements, and often I do not receive a proper explanation for them. Here is one of those examples: "Our business requirement is that when we add a new column we want it unique across the current database." Read the solution to this strange request in the blog post.
    Excel Losing Decimal Values When Value Pasted from SSMS ResultSet It is very common that when users are copying the resultset to Excel, the floating point or decimal values are lost. The solution is very simple and requires a small adjustment in Excel. By default Excel is very smart, and when it detects that the value being pasted is numeric it changes the column format to accommodate that.
    Basic Calculation and PEMDAS Order of Operation Read this interesting blog post for a fantastic conversation about the subject.
    Copy Column Headers from Resultset – SQL in Sixty Seconds #027 – Video http://www.youtube.com/watch?v=x_-3tLqTRv0
    Delete From Multiple Table – Update Multiple Table in Single Statement There are two questions which I get multiple times every single day. In my gmail, I have created a standard canned reply for them. Let us see the questions here: I want to delete from multiple tables in a single statement, how will I do it? I want to update multiple tables in a single statement, how will I do it? Read the answer in the blog post.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
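    As a small illustration of the integer-division and data type precedence points recalled above (a sketch; the literals are arbitrary):

      SELECT 7 / 2;                -- 3: INT divided by INT stays INT, so the decimal part is lost
      SELECT 7 / 2.0;              -- 3.500000: a decimal operand promotes the result
      SELECT CAST(7 AS float) / 2; -- 3.5: an explicit cast forces the higher-precedence type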

    Read the article

  • Db2 error : SQL0901N, SQLSTATE=58004

    - by Harv
    Hi, can I use ATOMIC in the parent procedure as well as in the procedure which the main procedure calls? My procedure compiles perfectly, but sometimes when I execute it I receive the following error: DB2 Database Error: ERROR [58004] [IBM][DB2/NT64] SQL0901N The SQL statement failed because of a non-severe system error. Subsequent SQL statements can be processed. (Reason "Sdir len bad: 1542!=1520+14".) SQLSTATE=58004 However, surprisingly, when I commented out the "ATOMIC" keyword in the main procedure and ran it again, it ran perfectly. But when I ran it again after uncommenting it, it still did not give any errors and ran perfectly. So the error that I receive is not something that I receive every time. Could someone please let me know what could be the issue and what needs to be done to resolve it? Googling did not turn up any leads on this. Thanks, Harveer

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader: "We have an OLTP database instance, using SQL Server 2005, with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB, and the import duration ranges between 10 secs and 1 min, depending on the data size. Intermittently (2-3 times a week), we face an issue where queries get timed out (default of 30 secs set in the application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan showed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are present and index fragmentation is minimal, as we rebuild indexes regularly along with updating statistics. With no other alternate options occurring to us, we restarted SQL Server and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits, so we also restarted IIS and that stopped the problem for now."
    Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts: blocking, a bad plan in cache, outdated statistics, or a hardware bottleneck.
    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0). If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process. Killing a process will cause the transaction to roll back, so you need to proceed with caution. Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present. You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.
    The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure. A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics. The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue. The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%). If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter. If this parameter value is rare, then its execution plan in cache is what we call a bad plan. You want the best plan in cache for the most frequent parameter values. If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure. You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.
    To remove a bad plan from cache, you can recompile the stored procedure. An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache.
    It is better to recompile the stored procedure rather than drop the procedure cache, since dropping the procedure cache affects all plans in cache rather than just the bad ones, so there will be a temporary performance penalty until the plans are loaded into cache again (a short sketch of these commands appears at the end of this post).
    To determine if there is a hardware bottleneck occurring, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server. Hopefully you already have a baseline of the server so you know what is normal and what is not. Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%. The servers that I support are typically under 30% CPU utilization, but your baseline could be higher and still be within a normal range.
    If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache. Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the things mentioned above. Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup. This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped. Until the crash recovery process is completed on a database, it is unavailable to your applications.
    If restarting IIS fixes the problem, then the problem might not have been inside SQL Server. Prior to taking this step, you should analyze the things mentioned above.
    If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.
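    A minimal T-SQL sketch of the checks discussed above; sysprocesses, sp_recompile, UPDATE STATISTICS and DBCC FREEPROCCACHE are standard SQL Server commands, but the table and procedure names are hypothetical placeholders:

      -- Check for blocking (blocked holds the spid of the blocking session)
      SELECT spid, blocked, waittype, lastwaittype, cmd
      FROM master..sysprocesses
      WHERE blocked <> 0;

      -- Refresh statistics on a table that the bulk import changed (hypothetical table name)
      UPDATE STATISTICS dbo.MyImportedTable WITH FULLSCAN;

      -- Force a fresh plan for one suspect procedure (hypothetical name)
      EXEC sp_recompile 'dbo.usp_MySlowProc';

      -- Heavier hammer: clear the entire procedure cache (affects every cached plan)
      DBCC FREEPROCCACHE;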

    Read the article

  • MySQL ORDER BY problem

    - by Sergio
    Hello. I want to list the messages that a specific user received from other users, grouped by sender ID and ordered by the last message received. If I use this query: SELECT MAX(id), fromid, toid, message FROM pro_messages WHERE toid=00003 GROUP BY fromid I do not get the last message sent from user "fromid" to user "toid", but the first message sent. Can I do this some other way, or do I need to do it with two queries or by joining tables? id - message id, fromid - id of the user who sent the message, toid - id of the user who received the message (in this case user 00003)
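    A sketch of one common way to solve this greatest-row-per-group problem in a single MySQL query, joining the table back to the per-sender MAX(id); it assumes only the pro_messages columns described in the question:

      -- For each sender, pick the row whose id is that sender's latest message to user 00003
      SELECT m.id, m.fromid, m.toid, m.message
      FROM pro_messages AS m
      JOIN (
          SELECT fromid, MAX(id) AS last_id
          FROM pro_messages
          WHERE toid = 00003
          GROUP BY fromid
      ) AS latest ON latest.fromid = m.fromid AND latest.last_id = m.id
      ORDER BY m.id DESC;  -- most recent conversations first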

    Read the article

  • Problem with a test method in Yii web services

    - by Conrad
    Hi there, is there anyone here who might be familiar with web services in the Yii framework? I declared the following test method: /** * Send a single SMS message * * @param string $username Username * @param string $password Password * @param string $identifier Valid Identifier to use * @param string $mobileNumber Mobile Number to send message to * @param string $message Message to send * @return string 'OK' on success, error message on failure * @soap */ public function singleSms($username, $password, $identifier, $mobileNumber, $message){ return "username=$username, pwd=$password, source=$identifier, mobno=$mobileNumber, msg=$message"; } But when I try to call this method I get the following response: SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://sms.chillnethosting.co.za/index.php?r=sms/webservice' : Start tag expected, '<' not found The WSDL itself is generated when I call my web service URL. Any ideas?

    Read the article

  • How to get MAX value of a version-number (varchar) column in T-SQL

    - by Ogre Psalm33
    I have a table defined like this:
    Column:  Version       Message
    Type:    varchar(20)   varchar(100)
    ----------------------------------
    Row 1:   2.2.6         Message 1
    Row 2:   2.2.7         Message 2
    Row 3:   2.2.12        Message 3
    Row 4:   2.3.9         Message 4
    Row 5:   2.3.15        Message 5
    I want to write a T-SQL query that will get the message for the MAX version number, where the "Version" column represents a software version number. I.e., 2.2.12 is greater than 2.2.7, and 2.3.15 is greater than 2.3.9, etc. Unfortunately, I can't think of an easy way to do that without using CHARINDEX or some complicated other split-like logic. Running this query: SELECT MAX(Version) FROM my_table will yield the erroneous result 2.3.9 when it should really be 2.3.15. Any bright ideas that don't get too complex?
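    One possible sketch, assuming the table and columns from the question and that Version always has exactly three numeric parts: PARSENAME, a built-in T-SQL function that splits up to four dot-separated parts (counting from the right), lets you order the parts numerically:

      -- Order by major, minor, patch as integers and take the top row
      SELECT TOP 1 Version, Message
      FROM my_table
      ORDER BY
          CAST(PARSENAME(Version, 3) AS int) DESC,  -- major
          CAST(PARSENAME(Version, 2) AS int) DESC,  -- minor
          CAST(PARSENAME(Version, 1) AS int) DESC;  -- patch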

    Read the article

  • Database Version Control SQL Server 2008 Drop SP's and Functions

    - by Lieven Cardoen
    I'm working on versioning our database and am now searching for a way to drop all stored procedures and functions from a C# console application. I'd rather not create a stored procedure that drops all stored procedures and functions; it has to be SQL executed from C#. I tried to drop the stored procedure before creating it, but I get this message: System.Data.SqlClient.SqlException: 'CREATE/ALTER PROCEDURE' must be the first statement in a query batch. Script for one SP for example: DROP PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData] SET ANSI_NULLS ON SET QUOTED_IDENTIFIER ON CREATE PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData] @Data VARCHAR(MAX) AS BEGIN ... END So I guess before creating all SPs and functions I'll need to drop all SPs and functions first with one SQL script.
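    The error occurs because CREATE PROCEDURE must be the first statement in its batch, and GO is an SSMS batch separator that SqlCommand does not understand. One hedged workaround sketch (the procedure name is taken from the question, the body is a placeholder; the alternative is to split the script on GO in C# and execute each part separately) is to wrap the CREATE in EXEC so it runs in its own batch:

      -- Drop the procedure if it exists, then recreate it; EXEC gives CREATE PROCEDURE its own batch
      IF OBJECT_ID('dbo.sp_Economatic_LoadJournalEntryFeedbackByData', 'P') IS NOT NULL
          DROP PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData];

      EXEC('
      CREATE PROCEDURE [dbo].[sp_Economatic_LoadJournalEntryFeedbackByData]
          @Data VARCHAR(MAX)
      AS
      BEGIN
          SELECT @Data;  -- placeholder body
      END');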

    Read the article

  • Unable to retrieve the complete description string of the event log record

    - by Santosh Pillai
    Hi All, I have an MFC application that reads and displays event log records using the ::ReadEventLog() API. The problem is with reading the "Description" message string of the event log record. The MFC application is unable to read the complete "Description" message string and displays only some part of it. However the Windows System Event Log Viewer reads and displays the complete "Description" message string correctly. I have ensured that my MFC application reads the entire "Description" message string by retrieving all the strings as provided by the "NumStrings" and "StringOffset" member variables of the EVENTLOGRECORD structure and merging all of them. Also as mentioned in MSDN my application loads the Source Name specific message library file (whose path is specified in the registry at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EventLog\Application[SourceName]) that further contains additional message string information and merges it with the earlier read strings. I am still unable to get the entire "Description" message string. Please provide any help towards resolving the issue. Regards, Santosh.

    Read the article

  • Source Control and SQL Development – Part 3

    - by Ajarn Mark Caldwell
    In parts one and two of this series, I have been specifically focusing on the latest version of SQL Source Control by Red Gate Software. But I have been doing source-controlled SQL development for years, long before this product was available, and well before Microsoft came out with Database Projects for Visual Studio. "So, how does that work?" you may wonder. Well, let me share some of the details of how we do it where I work…
    The key to this approach is that everything is done via Transact-SQL script files; either natively written T-SQL, or generated. My preference is to write all my code by hand, which forces you to become better at your SQL syntax. But if you really prefer to use the Management Studio GUI to make database changes, you can still do that, and then you use the Generate Scripts feature of the GUI to produce T-SQL scripts afterwards, and store those in your source control system. You can generate scripts for things like stored procedures and views by right-clicking on the database in the Object Explorer, and choosing Tasks, Generate Scripts (see figure 1). You can also do that for the CREATE scripts for tables, but that does not work when you have a table that is already in production and you need to make just a simple change, such as adding a new column or index. In this case, you can use the GUI to make the table changes, and then instead of clicking the Save button, click the Generate Change Script button. Then, once you have saved the change script, go ahead and execute it on your development database to actually make the change. I believe that it is important to actually execute the script rather than just click the Save button, because this is your first test that your change script is working and you didn't somehow lose a portion of the change.
    As you can imagine, all this generating of scripts can get tedious and tempting to skip entirely, so again, I would encourage you to just get in the habit of writing your own Transact-SQL code, and then it is just a matter of remembering to save your work, just like you are in the habit of saving changes to a Word or Excel document before you exit the program.
    So, now that you have all of these script files, what do you do with them? Well, we organize ours into folders labeled ChangeScripts, Functions, Views, and StoredProcedures, and those folders are loaded into our source control system. ChangeScripts contains all of the table and index changes, and anything else that is basically a one-time-only execution. Of course you want to write your scripts with qualifying logic so that if a script were accidentally run more than once in a database, it would not crash nor corrupt anything; but these scripts are really intended to be run only once in a database.
    Once you have your initial set of scripts loaded into source control, making changes, such as altering a stored procedure, becomes a simple matter of checking out your CREATE PROCEDURE* script, editing it in SSMS, saving the change, executing the script in order to effect the change in your database, and then checking the script back in to source control. Of course, this is where the lack of integration for source control systems within SSMS becomes an irritation, because this means that in addition to SSMS, I also have my source control client application running to do the check-out and check-in.
    And when you have 800+ procedures like we do, it can be quite tedious to locate the procedure I want to change in source control, check it out, then locate the script file in my working folder, open it in SSMS, do the change, save it, and then go back to source control to check in. Granted, it is not nearly as burdensome as, say, losing your source code and having to rebuild it from memory, or losing the audit trail that good source control systems provide. It is worth the effort, and this is how I have been doing development for the last several years.
    Remember that everything that SQL Server Management Studio does in modifying your database can also be done in plain Transact-SQL code, and this is what you are storing. And now I have shown you how you can do it all without spending any extra money. You already have source control, or can get free, open-source source control systems (almost seems like an oxymoron, doesn't it), and of course Management Studio is free with your SQL Server database engine software. So, whether you spend the money on tools to make it easier or not, you now have no excuse for not using source control with your SQL development.
    * In our current model, the scripts for stored procedures and similar database objects are written with an IF EXISTS…DROP… at the top, followed by the CREATE PROCEDURE… section, and that is followed by a section that assigns permissions. This allows me to run the same script regardless of whether the procedure previously existed in the database. If the script were only an ALTER PROCEDURE, then it would fail the first time that procedure was deployed to a database, unless you wrote other code to stub it if it did not exist. There are a few different ways you could organize your scripts for deployment, each with its own trade-offs, but I think it is absolutely critical that whichever way you organize things, you ensure that the same script is run throughout the deployment cycle, and do not allow customizations to creep in between TEST and PROD. If you do, then you have broken the integrity of your deployment process, because what you deployed to PROD was not exactly the same as what was tested in TEST, so you effectively have now released untested code into PROD.
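    A minimal sketch of the re-runnable script pattern the footnote describes; the procedure name, body, and the role granted EXECUTE are hypothetical placeholders:

      -- Drop-and-recreate script that runs whether or not the procedure already exists
      IF EXISTS (SELECT 1 FROM sys.objects WHERE object_id = OBJECT_ID('dbo.usp_GetCustomer') AND type = 'P')
          DROP PROCEDURE dbo.usp_GetCustomer;
      GO

      CREATE PROCEDURE dbo.usp_GetCustomer
          @CustomerID int
      AS
      BEGIN
          SELECT CustomerID, Name FROM dbo.Customer WHERE CustomerID = @CustomerID;
      END
      GO

      -- Reassign permissions, since the DROP removed them
      GRANT EXECUTE ON dbo.usp_GetCustomer TO AppRole;
      GO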

    Read the article

  • Open Source Queuing Solutions for peek, mark as done and then remove

    - by user330114
    I am looking at open source queuing platforms that allow me to do the following: I have multiple producers and multiple consumers putting data into a queue in a multithreaded environment, with this specific use case. I want consumers to be able to do the following:
    1. Peek at a message from the queue (which should mark the message as invisible on the queue so that other consumers cannot consume the same message).
    2. The consumer works on the message consumed and, if it is able to do the work successfully, it marks the message as consumed, which should permanently delete it from the queue.
    3. If the consumer dies abruptly after marking the message as consumed, or fails to acknowledge successful consumption after a certain timeout, the message is made visible on the queue again so that another consumer can pick it up.
    I've been looking at RabbitMQ, HornetQ and ActiveMQ, but I'm not sure I can get this functionality out of the box. Any recommendations on a system that gives me this functionality?

    Read the article

  • Pulling data from a text file to generate a report

    - by Edmond
    I have a program in Access, using VBA. I need to come up with an If statement to pull data from a text file. The data is a list of procedures and prices. I have to pull the prices from the text file to show in a report how much each procedure costs. ID PID M1 M2 M3 Total 1 11120390(procedure) 2 180(price) 360 180 540 1080(total Price) 3 2 1 3 6(Units sold) 4 5 200(Price) 200 600 800 1600(total price) 6 1 3 4 8(Units Sold) 7 11120390(procedure) The table in the text file is set up like this, and I need to pull the procedure number and the price of each procedure from the text file.

    Read the article

  • Why does this break statement not work?

    - by Roman
    I have the following code: public void post(String message) { final String mess = message; (new Thread() { public void run() { while (true) { try { if (status.equals("serviceResolved")) { output.println(mess); Game.log.fine("The following message was successfully sent: " + mess); break; } else { try {Thread.sleep(1000);} catch (InterruptedException ie) {} } } catch (NullPointerException e) { try {Thread.sleep(1000);} catch (InterruptedException ie) {} } } } }).start(); } In my log file I find a lot of lines like this: The following message was successfully sent: blablabla The following message was successfully sent: blablabla The following message was successfully sent: blablabla The following message was successfully sent: blablabla And my program is not responding. It seems to me that the break command does not work. What could be a possible reason for that? The interesting thing is that it does not happen all the time. Sometimes my program works fine, and sometimes the problem described above happens.

    Read the article

  • DeprecationWarning when pushing to Mercurial repo

    - by Josh Nankin
    I'm trying to serve a Mercurial repository with Apache, and when I try to push to the repo I see this in the Apache error.log. On the client side I get a 500 error. How do I get this to go away?
    [Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/common.py:24: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
    [Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] self.message = message
    [Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/hgweb_mod.py:104: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
    [Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] if not inst.message:
    [Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] /var/lib/python-support/python2.6/mercurial/hgweb/hgweb_mod.py:106: DeprecationWarning: BaseException.message has been deprecated as of Python 2.6
    [Sun Jun 06 14:43:25 2010] [error] [client 192.168.1.8] return '0\\n%s\\n' % inst.message,

    Read the article

  • Getting data from function loaded after current function

    - by Hwang
    I have two functions, one loaded before the other. Some values are determined by the other function's data, but since one of them has to load before the other, how should I get data that is loaded after the current function? private function wMessage():void { Message.width=Name.width+20; } private function wName():void { Name.x=(Message.x+Message.textWidth)-Name.textWidth; Name.y=Message.y+Message.height; } I've taken out some other unnecessary code, but as you can see, Name's position is set according to the position + width of Message, while I want Message's width to be no smaller than Name's.

    Read the article

  • Cannot find suitable formatter for custom class object

    - by Ganesha87
    I'm writing messages to a Message Queue in C# as follows: ObjMsg objMsg = new ObjMsg(1,"ascii",20090807); Message m = new Message(); m.Formatter = new BinaryMessageFormatter(); m.Body = objMsg; queue.Send(m); and I'm trying to read the messages as follows: Message m = new Message(); m.Formatter = new BinaryMessageFormatter(); MessageQueue mq = new MessageQueue("./pqueue"); m = mq.Receive(); ObjMsg msg = (ObjMsg)m.Body; However I'm getting an error message which says: "Cannot find a formatter capable of reading this message."

    Read the article

  • YII Mail generates unwanted ASCII characters in HTML mail

    - by CedSha
    I use YII-Mail just by copying the sample, but I always get some ASCII characters in my generated links. Where do they come from and how do I avoid them? $message = new YiiMailMessage; $message->view = 'mail'; $message->setBody(array('model'=>$model), 'text/html'); $message->subject = Yii::t('tr','my subject'); $message->addTo('[email protected]'); $message->from = '[email protected]'; Yii::app()->mail->send($message); and in the view file 'mail': <h1><?php echo(Yii::t('tr','This is HTML mail')); ?></h1> <?php echo CHtml::link('Mylink', array('controller/view', 'id'=>$model->id)); ?> The resulting email source looks like this: <h1>This is HTML mail</h1> <a href=3D"/testdrive/index.php?r=3D ....

    Read the article

  • Doctrine: textarea line breaks & nl2br

    - by Tom
    Hi, I'm pulling my hair out with something that should be very simple: getting line breaks to show up properly in text that's returned from the database with Doctrine 1.2. I'm saving a message: $body = [text from a form textarea]; $m = new Message(); $m->setSubject($subject); $m->setBody($body); $m->save(); Querying the message: $q = Doctrine_Query::create() ->from('Message m') ->where('m.message_id = ?', $id) ->limit(1); $this->message = $q->execute(array(), Doctrine_Core::HYDRATE_ARRAY); In my template: echo $message[0]['body'] ... outputs the text without line breaks echo nl2br($message[0]['body']) ... no difference ... and I've tried every combination I could think of. Is Doctrine doing something to line breaks that's affecting this, or is there something that I'm just missing? Any help would be appreciated. Thanks.

    Read the article

  • Order of declaration in an anonymous pl/sql block

    - by RenderIn
    I have an anonymous pl/sql block with a procedure declared inside of it as well as a cursor. If I declare the procedure before the cursor it fails. Is there a requirement that cursors be declared prior to procedures? What other rules are there for order of declaration in a pl/sql block? This works: DECLARE cursor cur is select 1 from dual; procedure foo as begin null; end foo; BEGIN null; END; This fails with error PLS-00103: Encountered the symbol "CURSOR" when expecting one of the following: begin function package pragma procedure form DECLARE procedure foo as begin null; end foo; cursor cur is select 1 from dual; BEGIN null; END;

    Read the article

  • Bring Opera window to front!

    - by serhiyiv
    Hi! Could you please help me to figure out how to bring Opera's window to front, using class name?! I use the following procedure to bring other applications to front and it works fine. I need to use only a class name and not window's caption. If I use window caption instead, the procedure works. Here is the procedure: procedure SwitchToThisWindow(h1: hWnd; x: bool); stdcall; external user32 Name 'SwitchToThisWindow'; procedure Opera; var Wnd:HWND; begin Wnd:= FindWindow(PChar('OpWindow'),nil); if (Wnd < 0) then SwitchToThisWindow(Wnd, True) ; ////////////////////////////////////////////////////

    Read the article

  • Is it possible to access JSON properties with relative syntax when using JSON defined functions?

    - by Justin Vincent
    // JavaScript JSON var myCode = { message : "Hello World", helloWorld : function() { alert(this.message); } }; myCode.helloWorld(); The above JavaScript code will alert 'undefined'. To make it work for real the code would need to look like the following... (note the literal path to myCode.message) // JavaScript JSON var myCode = { message : "Hello World", helloWorld : function() { alert(myCode.message); } }; myCode.helloWorld(); My question is... if I declare functions using json in this way, is there some "relative" way to get access to myCode.message or is it only possible to do so using the literal namespace path myCode.message?

    Read the article

  • How can I ensure that nested transactions are committed independently of each other?

    - by Caldera
    If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others? In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)? I have a stored procedure defined something like this in SQL Server 2000: CREATE PROCEDURE toplevel_proc .. AS BEGIN ... while @row_count <= @max_rows begin select @parameter ... where rownum = @row_count exec nested_proc @parameter select @row_count = @row_count + 1 end END
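    In SQL Server, nested transactions do not commit independently: a COMMIT inside the nested procedure only decrements @@TRANCOUNT, and a full ROLLBACK undoes everything. A hedged sketch of the usual workaround, savepoints (supported in SQL Server 2000); this is a self-contained demo rather than the poster's actual procedures, and nothing is durable until the outer COMMIT, but work done before a failed call survives it:

      CREATE TABLE #log (step varchar(20));

      BEGIN TRANSACTION;

      INSERT INTO #log VALUES ('call 1');     -- first nested call succeeds

      SAVE TRANSACTION before_call;           -- savepoint before the risky call
      INSERT INTO #log VALUES ('call 2');     -- imagine this call failed...
      ROLLBACK TRANSACTION before_call;       -- ...so undo only its work

      INSERT INTO #log VALUES ('call 3');     -- later calls still run

      COMMIT TRANSACTION;

      SELECT step FROM #log;                  -- returns 'call 1' and 'call 3'; 'call 2' was undone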

    Read the article

  • dynamic SQL not working as expected

    - by christine33990
    create or replace procedure createtables Authid current_user as begin execute immediate 'create table newcustomer as select * from customer'; end; create or replace procedure e is begin createtables; select * from newcustomer; end; I have the two procedures above. The first one will create a new table called newcustomer; the second procedure calls the first procedure and then queries the newcustomer table. When I try to compile this code, it says the table is not yet created. I don't really get it, as I have called the createtables procedure, so I assume I have created the table. Any help will be appreciated. Thanks
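    The second procedure fails to compile because static SQL in PL/SQL is checked at compile time against objects that must already exist, and newcustomer is only created at run time. A hedged sketch of one way around it, referencing the new table through dynamic SQL as well (the count variable and the DBMS_OUTPUT call are just illustrative):

      create or replace procedure e authid current_user as
        v_count integer;
      begin
        createtables;
        -- newcustomer is referenced dynamically, so it is not checked when this procedure compiles
        execute immediate 'select count(*) from newcustomer' into v_count;
        dbms_output.put_line('newcustomer rows: ' || v_count);
      end;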

    Read the article

  • SSRS Performance Mystery

    - by user101654
    I have a stored procedure that returns about 50000 records in 10 sec, using at most 2 cores, in SSMS. The SSRS report using the stored procedure was taking 20 min and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs, calculations). The report did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display the data in a few seconds. I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?

    Read the article
