Search Results

Search found 22893 results on 916 pages for 'message queue'.


  • Thread safe lockfree mutual ByteArray queue

    - by user313421
    A byte stream needs to be transferred between one producer thread and one consumer thread. The producer is faster than the consumer most of the time, and I need enough buffered data for the QoS of my application. I have read about this problem and found solutions such as a shared buffer and the .NET PipeStream class. This class will be instantiated many times on the server, so I need an optimized solution. Is it a good idea to use a queue of byte arrays? If yes, I'll use an optimization algorithm to guess the queue size and each byte array's capacity, which theoretically fits my case. If not, what's the best approach? Please let me know if there's a good lock-free, thread-safe implementation of a byte-array queue in C# or VB. Thanks in advance.
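    For reference, if the target framework is .NET 4 or later, System.Collections.Concurrent.ConcurrentQueue<T> already provides a lock-free, thread-safe FIFO that can hold byte[] chunks directly. The following is only a minimal sketch of that idea for a single producer and single consumer; the class and member names are illustrative, not an existing API:

        using System;
        using System.Collections.Concurrent;

        class ByteChunkBuffer
        {
            // Lock-free FIFO of byte[] chunks shared by one producer thread and one consumer thread.
            private readonly ConcurrentQueue<byte[]> _chunks = new ConcurrentQueue<byte[]>();

            // Producer side: enqueue a private copy of the bytes just produced.
            public void Write(byte[] data, int count)
            {
                var chunk = new byte[count];
                Buffer.BlockCopy(data, 0, chunk, 0, count);
                _chunks.Enqueue(chunk);
            }

            // Consumer side: returns null when nothing is currently buffered.
            public byte[] Read()
            {
                byte[] chunk;
                return _chunks.TryDequeue(out chunk) ? chunk : null;
            }

            // Rough measure of how much data is buffered, for QoS decisions.
            public int PendingChunkCount
            {
                get { return _chunks.Count; }
            }
        }

    On frameworks older than .NET 4, a Queue<byte[]> guarded by a lock is the usual fallback; truly lock-free implementations are much harder to get right by hand.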

    Read the article

  • Best Work Queue service for distributed clusters

    - by onewheelgood
    Hi there. I require a simple work-queue-type system for asynchronous task management. I have looked at both beanstalkd and gearman. However, both of these seem to assume that the client and the queue server are on the same network, and therefore that there will always be a reliable network between them. I need one that can support the client and server being in different parts of the world and can handle temporary loss of network connectivity between clusters. Ideally, this would work in such a way that I post a job to a local proxy, which attempts to send it to the main queue server. If there is no network connection, it would try again later, but it would not lose the job or delay the client. Any recommendations?
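    The local-proxy behaviour described here is essentially store-and-forward: accept the job locally at once, and let a background loop keep retrying delivery to the central queue server. A rough C# sketch of that shape is below; the TrySendToCentralQueue method and the retry interval are placeholders I made up, and a real implementation would also persist pending jobs to disk so they survive a process restart.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        class StoreAndForwardProxy
        {
            private readonly BlockingCollection<string> _pending = new BlockingCollection<string>();

            public StoreAndForwardProxy()
            {
                // Background sender keeps draining the local buffer toward the central server.
                Task.Factory.StartNew(SendLoop, TaskCreationOptions.LongRunning);
            }

            // Clients post jobs here; this never blocks on the network.
            public void Post(string job)
            {
                _pending.Add(job);
            }

            private void SendLoop()
            {
                foreach (var job in _pending.GetConsumingEnumerable())
                {
                    while (!TrySendToCentralQueue(job))              // hypothetical network call
                    {
                        Thread.Sleep(TimeSpan.FromSeconds(30));      // no connection: back off and retry
                    }
                }
            }

            private bool TrySendToCentralQueue(string job)
            {
                // Placeholder: return true once the remote queue server acknowledges the job.
                return false;
            }
        }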

    Read the article

  • Thread-safe blocking queue implementation on .NET

    - by Shrike
    Hello. I'm looking for an implementation of a thread-safe blocking queue for .NET. By "thread-safe blocking queue" I mean thread-safe access to a queue where a Dequeue call blocks the calling thread until another thread puts (Enqueues) some value. So far I've found this one: http://www.eggheadcafe.com/articles/20060414.asp (but it's for .NET 1.1). Could someone comment on or critique the correctness of this implementation, or suggest another one? Thanks in advance.
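    For reference, on .NET 4 and later the framework's BlockingCollection<T> (which wraps a ConcurrentQueue<T> by default) gives exactly this behaviour out of the box: Take() blocks until another thread calls Add(). On earlier frameworks the usual approach is a hand-rolled Monitor-based queue; a minimal sketch of the general shape (not the implementation from the linked article) looks like this:

        using System.Collections.Generic;
        using System.Threading;

        public class BlockingQueue<T>
        {
            private readonly Queue<T> _items = new Queue<T>();
            private readonly object _sync = new object();

            public void Enqueue(T item)
            {
                lock (_sync)
                {
                    _items.Enqueue(item);
                    Monitor.Pulse(_sync);          // wake one thread blocked in Dequeue
                }
            }

            // Blocks the calling thread until another thread enqueues a value.
            public T Dequeue()
            {
                lock (_sync)
                {
                    while (_items.Count == 0)
                    {
                        Monitor.Wait(_sync);       // releases the lock while waiting
                    }
                    return _items.Dequeue();
                }
            }
        }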

    Read the article

  • Iterating through std queue

    - by Ockonal
    Hi, I'm trying to use BOOST_FOREACH to iterate through a std::queue, but that class has no iterators, so I get an error:

        std::queue<std::string> someList;
        BOOST_FOREACH(std::string temp, someList) {
            std::cout << temp;
        }

        > no matching function for call to begin(...)
        > no type named ‘iterator’ in ‘class std::queue<std::basic_string<char> >’

    I need a structure with first-in, first-out behaviour: the first element to arrive is the first to leave.

    Read the article

  • How to Create a Task From an Email Message in Outlook 2013

    - by Lori Kaufman
    If you need to do something related to an email message you received, you can easily create a task from the message in Outlook. A task can be created that contains all the content of the message without requiring you to re-enter the information.

    Creating a task in Outlook from an email message is different from flagging the message. As it says on Microsoft’s site: “When you flag an email message, the message appears in the To-Do List in Tasks and on the Tasks peek. However, if you delete the message, it also disappears from the To-Do List in Tasks and on the Tasks peek. Flagging a message doesn’t create a separate task.” Using the method described below to create a task from an email message, the task is separate from the message. The original message can be deleted or changed and the related task will not be affected.

    In Outlook, make sure the Mail section is active. If not, click Mail on the Navigation Bar at the bottom of the Outlook window. Then, click on the message you want to add to a task and drag it to Tasks on the Navigation Bar.

    A new Task window displays containing the email message and allowing you to enter the subject of the task, the Start and Due dates, Status, and Priority, among other settings. When you have specified the settings for the task, click Save & Close in the Actions section of the Task tab.

    When the Task window closes, the Mail section is still active. If you move your mouse over Tasks on the Navigation Bar, a snippet from the new task displays in a popup window (the Task peek). Click Tasks to go to the Tasks section of Outlook.

    The To-Do List displays with your newly-added task listed in the middle pane. The right pane displays the details of the task and the contents of the message included in the task (as pictured at the beginning of this article). Click on Tasks to see a complete listing of all your tasks, including the one you just added from your email message.

    Note that attachments in an email message added to a new task are not copied to the task. You can also create new tasks by dragging contacts, calendar items, and notes to Tasks on the Navigation Bar.

    Read the article

  • Emails getting stuck in "messages with an unreachable destination queue" in Exchange

    - by Jason T.
    There's an Exchange server with a problem that I'm trying to solve. A couple hundred messages have been sent out but still need to be journaled; they can't seem to make it to the journaling server. I have verified that the server they need to reach is valid and that the data center hosting it is not having any problems. What are some other things I should look at to solve this issue? If any more information is needed, please feel free to ask.

    Read the article

  • what's wrong with my producer-consumer queue design?

    - by toasteroven
    I'm starting with the C# code example here. I'm trying to adapt it for a couple of reasons: 1) in my scenario, all tasks will be put in the queue up front before the consumers start, and 2) I wanted to abstract the worker into a separate class instead of having raw Thread members within the WorkQueue class. My queue doesn't seem to dispose of itself, though; it just hangs, and when I break in Visual Studio it's stuck on the _th.Join() line for WorkerThread #1. Also, is there a better way to organize this? Something about exposing the WaitOne() and Join() methods seems wrong, but I couldn't think of an appropriate way to let the WorkerThread interact with the queue. Also, an aside: if I call q.Start(#) at the top of the using block, only some of the threads ever kick in (e.g. threads 1, 2, and 8 process every task). Why is this? Is it a race condition of some sort, or am I doing something wrong?

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Messaging;
        using System.Threading;
        using System.Linq;

        namespace QueueTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    using (WorkQueue q = new WorkQueue())
                    {
                        q.Finished += new Action(delegate { Console.WriteLine("All jobs finished"); });
                        Random r = new Random();
                        foreach (int i in Enumerable.Range(1, 10))
                            q.Enqueue(r.Next(100, 500));
                        Console.WriteLine("All jobs queued");
                        q.Start(8);
                    }
                }
            }

            class WorkQueue : IDisposable
            {
                private Queue<int> _jobs = new Queue<int>();
                private int _job_count;
                private EventWaitHandle _wh = new AutoResetEvent(false);
                private object _lock = new object();
                private List<WorkerThread> _th;

                public event Action Finished;

                public WorkQueue() { }

                public void Start(int num_threads)
                {
                    _job_count = _jobs.Count;
                    _th = new List<WorkerThread>(num_threads);
                    foreach (int i in Enumerable.Range(1, num_threads))
                    {
                        _th.Add(new WorkerThread(i, this));
                        _th[_th.Count - 1].JobFinished += new Action<int>(WorkQueue_JobFinished);
                    }
                }

                void WorkQueue_JobFinished(int obj)
                {
                    lock (_lock)
                    {
                        _job_count--;
                        if (_job_count == 0 && Finished != null) Finished();
                    }
                }

                public void Enqueue(int job)
                {
                    lock (_lock) _jobs.Enqueue(job);
                    _wh.Set();
                }

                public void Dispose()
                {
                    Enqueue(Int32.MinValue);
                    _th.ForEach(th => th.Join());
                    _wh.Close();
                }

                public int GetNextJob()
                {
                    lock (_lock)
                    {
                        if (_jobs.Count > 0)
                            return _jobs.Dequeue();
                        else
                            return Int32.MinValue;
                    }
                }

                public void WaitOne() { _wh.WaitOne(); }
            }

            class WorkerThread
            {
                private Thread _th;
                private WorkQueue _q;
                private int _i;

                public event Action<int> JobFinished;

                public WorkerThread(int i, WorkQueue q)
                {
                    _i = i;
                    _q = q;
                    _th = new Thread(DoWork);
                    _th.Start();
                }

                public void Join() { _th.Join(); }

                private void DoWork()
                {
                    while (true)
                    {
                        int job = _q.GetNextJob();
                        if (job != Int32.MinValue)
                        {
                            Console.WriteLine("Thread {0} Got job {1}", _i, job);
                            Thread.Sleep(job * 10); // in reality would do actual work here
                            if (JobFinished != null) JobFinished(job);
                        }
                        else
                        {
                            Console.WriteLine("Thread {0} no job available", _i);
                            _q.WaitOne();
                        }
                    }
                }
            }
        }
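    One structural point worth illustrating here: a single sentinel value enqueued in Dispose() can only ever be consumed by one worker, so a join over every worker needs one sentinel per thread, and the worker loop has to treat the sentinel as "stop" rather than "wait". The following is only a hedged sketch of that shape using the names from the code above, not a full fix for the design:

        // Illustrative shutdown: one "poison pill" per worker, then join them all.
        public void Dispose()
        {
            for (int i = 0; i < _th.Count; i++)
            {
                Enqueue(Int32.MinValue);       // each worker will dequeue exactly one sentinel
            }
            _th.ForEach(th => th.Join());      // every WorkerThread can now exit its loop
            _wh.Close();
        }

        // ...and in WorkerThread.DoWork, break out of the loop when the sentinel is actually
        // dequeued, instead of treating Int32.MinValue as "queue is empty, wait for more work".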

    Read the article

  • C#: System.Collections.Concurrent.ConcurrentQueue vs. Queue

    - by James Michael Hare
    I love new toys, so of course when .NET 4.0 came out I felt like the proverbial kid in the candy store! Now, some people get all excited about the IDE and its new features or about changes to WPF and Silverlight, and yes, those are all very fine and grand. But me, I get all excited about things that tend to affect my life on the backside of development. That's why, when I heard there were going to be concurrent container implementations in the latest version of .NET, I was salivating like Pavlov's dog at the dinner bell.

    They seem so simple, really, that one could easily overlook them. Essentially they are implementations of containers (many that mirror the generic collections, others are new) that have either been optimized with very efficient, limited, or no locking but are still completely thread safe -- and I just had to see what kind of an improvement that would translate into.

    Since part of my job as a solutions architect here where I work is to help design, develop, and maintain the systems that process tons of requests each second, the thought of extremely efficient thread-safe containers was extremely appealing. Of course, they also rolled out a whole parallel development framework which I won't get into in this post but will cover bits and pieces of as time goes by.

    This time, I was mainly curious as to how well these new concurrent containers would perform compared to areas in our code where we manually synchronize them using lock or some other mechanism. So I set about to run a processing test with a series of producers and consumers that would be either processing a traditional System.Collections.Generic.Queue or a System.Collections.Concurrent.ConcurrentQueue.

    Now, I wanted to keep the code as common as possible to make sure that the only variance was the container, so I created a test Producer and a test Consumer. The test Producer takes an Action<string> delegate which is responsible for taking a string and placing it on whichever queue we're testing in a thread-safe manner:

        internal class Producer
        {
            public int Iterations { get; set; }
            public Action<string> ProduceDelegate { get; set; }

            public void Produce()
            {
                for (int i = 0; i < Iterations; i++)
                {
                    ProduceDelegate("Hello");
                }
            }
        }

    Then likewise, I created a consumer that takes a Func<string> that reads from whichever queue we're testing and returns either the string if data exists or null if not. If the item doesn't exist, it does a 10 ms wait before testing again. Once all the producers are done and join the main thread, a flag is set in each of the consumers to tell them that once the queue is empty they can shut down, since no other data is coming:

        internal class Consumer
        {
            public Func<string> ConsumeDelegate { get; set; }
            public bool HaltWhenEmpty { get; set; }

            public void Consume()
            {
                bool processing = true;

                while (processing)
                {
                    string result = ConsumeDelegate();

                    if (result == null)
                    {
                        if (HaltWhenEmpty)
                        {
                            processing = false;
                        }
                        else
                        {
                            Thread.Sleep(TimeSpan.FromMilliseconds(10));
                        }
                    }
                    else
                    {
                        DoWork(); // do something non-trivial so consumers lag behind a bit
                    }
                }
            }
        }

    Okay, now that we've done that, we can launch threads of varying numbers using lambdas for each different method of production/consumption.

    First let's look at the lambdas for a typical System.Collections.Generic.Queue with locking:

        // lambda for putting to a typical Queue with locking...
        Action<string> productionDelegate = s =>
        {
            lock (_mutex)
            {
                _mutexQueue.Enqueue(s);
            }
        };

        // and lambda for typical getting from a Queue with locking...
        Func<string> consumptionDelegate = () =>
        {
            lock (_mutex)
            {
                if (_mutexQueue.Count > 0)
                {
                    return _mutexQueue.Dequeue();
                }
            }
            return null;
        };

    Nothing new or interesting here, just typical locks on an internal object instance. Now let's look at using a ConcurrentQueue from the System.Collections.Concurrent library:

        // lambda for putting to a ConcurrentQueue, notice it needs no locking!
        Action<string> productionDelegate = s =>
        {
            _concurrentQueue.Enqueue(s);
        };

        // lambda for getting from a ConcurrentQueue, once again, no locking required.
        Func<string> consumptionDelegate = () =>
        {
            string s;
            return _concurrentQueue.TryDequeue(out s) ? s : null;
        };

    So I pass each of these lambdas and the number of producer and consumer threads to launch, and take a look at the timing results. Basically I'm timing from the time all threads start and begin producing/consuming to the time that all threads rejoin. I won't bore you with the test code; basically it just creates the producers and consumers, launches them in their own threads, then waits for them all to rejoin. The following are the timings from the start of all threads to the Join() on all threads completing. The producers create 10,000,000 items split evenly between themselves, and when all producers are done they trigger the consumers to stop once the queue is empty.

    These are the results in milliseconds from the ordinary Queue with locking (three runs each, plus the average):

        Consumers   Producers   Run 1    Run 2    Run 3    Avg (ms)
        ---------   ---------   ------   ------   ------   --------
                1           1     4284     5153     4226    4554.33
               10          10     4044     3831     5010    4295.00
              100         100     5497     5378     5612    5495.67
             1000        1000    24234    25409    27160   25601.00

    And the following are the results in milliseconds from the ConcurrentQueue with no locking necessary:

        Consumers   Producers   Run 1    Run 2    Run 3    Avg (ms)
        ---------   ---------   ------   ------   ------   --------
                1           1     3647     3643     3718    3669.33
               10          10     2311     2136     2142    2196.33
              100         100     2480     2416     2190    2362.00
             1000        1000     7289     6897     7061    7082.33

    Note that even though 2000 threads is obviously quite extreme, the concurrent queue actually scales really well, whereas the traditional queue with simple locking scales much more poorly. I love the new concurrent collections: they look so much simpler without littering your code with the locking logic, and they perform much better. All in all, a great new toy to add to your arsenal of multi-threaded processing!
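    For readers who want to reproduce this, the test driver the author skips over can be approximated with a small harness like the one below. This is only a sketch built on the Producer and Consumer classes shown above; the method name, parameters, and the way the threads are partitioned are my own assumptions rather than the author's actual code.

        // Requires: System, System.Collections.Generic, System.Diagnostics, System.Linq, System.Threading
        static TimeSpan RunTest(int producerCount, int consumerCount, int totalItems,
                                Action<string> produce, Func<string> consume)
        {
            var producers = Enumerable.Range(0, producerCount)
                .Select(_ => new Producer { Iterations = totalItems / producerCount, ProduceDelegate = produce })
                .ToList();
            var consumers = Enumerable.Range(0, consumerCount)
                .Select(_ => new Consumer { ConsumeDelegate = consume })
                .ToList();

            // Keep the two groups separate so they can be joined in the right order below.
            var consumerThreads = consumers.Select(c => new Thread(c.Consume)).ToList();
            var producerThreads = producers.Select(p => new Thread(p.Produce)).ToList();

            var timer = Stopwatch.StartNew();
            consumerThreads.ForEach(t => t.Start());
            producerThreads.ForEach(t => t.Start());

            producerThreads.ForEach(t => t.Join());           // wait until every item is enqueued
            consumers.ForEach(c => c.HaltWhenEmpty = true);   // then let consumers drain the queue and exit
            consumerThreads.ForEach(t => t.Join());

            timer.Stop();
            return timer.Elapsed;
        }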

    Read the article

  • Asynchronous message queues and processing like Amazon Simple Queue service in django

    - by becomingGuru
    There are many activities in an application that need things like:

        - send an email
        - post to Twitter
        - thumbnail an image into several sizes
        - call a cron to find connected relationships

    A good way to do these tasks is to write them to an asynchronous queue on which the operations are performed. Which Django application can be used to implement such functionality locally, like the one Amazon Simple Queue Service offers? I have come across Celery. Is that the right thing? Does anything else like this exist?

    Read the article

  • Task Queue Java API

    - by user268515
    Hi, when I started to work with the Task Queue concept I got stuck on this line:

        queue.add(DatastoreServiceFactory.getDatastoreService().getCurrentTransaction(),
                  TaskOptions().url("/path/to/my/worker"));

    What does DatastoreServiceFactory do? And how do I redirect this to another servlet? In the url I gave .url("/myservlet"), but it doesn't redirect to the servlet. Please tell me what should be given in .url. Help me. Regards, sharun

    Read the article

  • Using Message Queue on Windows Mobile 2003

    - by Fitzroy Wright
    Does anyone know where I can find the CAB file that will allow me to use Microsoft Message Queuing (MSMQ) on a Windows Mobile 2003 device? I am writing an application that needs to use MSMQ on a Windows Mobile 2003 device, but apparently Message Queuing was never installed on it. I have scoured the web and can find no CAB files for MSMQ for Windows Mobile 2003; everything refers to Windows Mobile 5, and when I try that, the setup fails.
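    For what it's worth, once MSMQ is actually installed on the device, the managed API is the System.Messaging MessageQueue class. The snippet below shows the general shape on the full .NET Framework; the Compact Framework build used on Windows Mobile exposes a similar but reduced surface, and the queue path here is only an example:

        using System.Messaging;

        class MsmqExample
        {
            const string QueuePath = @".\Private$\OrdersQueue";   // illustrative local private queue

            static void Send(string body)
            {
                if (!MessageQueue.Exists(QueuePath))
                {
                    MessageQueue.Create(QueuePath);
                }
                using (var queue = new MessageQueue(QueuePath))
                {
                    queue.Send(body, "Order message");            // message body and label
                }
            }

            static string Receive()
            {
                using (var queue = new MessageQueue(QueuePath))
                {
                    queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                    using (Message message = queue.Receive())     // blocks until a message arrives
                    {
                        return (string)message.Body;
                    }
                }
            }
        }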

    Read the article

  • Stack and Queue, Why?

    - by Alon
    Why and when should I use stack or queue data structures instead of arrays/lists? Can you please show an example of a situation where it would be better to use a stack or a queue? Thanks.
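    A concrete contrast may help: a queue gives you first-in, first-out order, which fits things like print jobs or breadth-first traversal, while a stack gives you last-in, first-out order, which fits undo history or depth-first traversal. A small C# illustration:

        using System;
        using System.Collections.Generic;

        class StackVsQueue
        {
            static void Main()
            {
                // Queue: items come out in arrival order (FIFO), e.g. a print queue.
                var printJobs = new Queue<string>();
                printJobs.Enqueue("report.pdf");
                printJobs.Enqueue("invoice.pdf");
                Console.WriteLine(printJobs.Dequeue());   // "report.pdf" (oldest job first)

                // Stack: the most recent item comes out first (LIFO), e.g. undo history.
                var undoHistory = new Stack<string>();
                undoHistory.Push("typed 'hello'");
                undoHistory.Push("deleted a line");
                Console.WriteLine(undoHistory.Pop());     // "deleted a line" (newest action first)
            }
        }

    The point of choosing a stack or a queue over a plain list is that the structure itself documents and enforces the access pattern, instead of relying on every caller to index the list correctly.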

    Read the article

  • IP queue buffer

    - by summerbulb
    I seem to have an issue with IP queue. I have a Linux machine that I am using to run some experiments. The Linux machine is configured to be a router, having two NICs, connecting two other computers and managing their network traffic. All incoming packets are captured using iptables and analyzed by a C application. The application analyzing the packets has a built-in delay as part of the experiment. So I have one very fast computer sending packets through my Linux router and a (relatively) slow Linux router that analyses and deals with the packets one by one. This leads to the fact that when I fire up a sender application on one of the computers connected to the Linux router, the IP queue on the router fills up (almost) instantaneously. The IP queue's max length is currently set to 1024, and if it overflows, the packets are dropped. This is expected and I'm OK with it. But (and this is where it gets interesting) every now and then I get the following error: "Failed to receive netlink message: No buffer space available". At first I thought this was due to the IP queue overflow, but after some analysis I found that sometimes I get the error even if the IP queue buffer did not overflow, and sometimes I DON'T get the message even though the buffer DID overflow. When I run cat /proc/net/ip_queue, I get the following table (also used to monitor the IP queue overflow):

        Peer PID          : 27389
        Copy mode         : 2
        Copy range        : 65535
        Queue length      : 0
        Queue max. length : 1024
        Queue dropped     : 1166875
        Netlink dropped   : 2916

    Looking at the last two values, Queue dropped seems to refer to packets that did not manage to get into the IP queue because the buffer was full; I can see this value rise as I bombard the Linux router. Netlink dropped (as its name implies :) ) seems to have to do with the error I'm getting. I did my best to search for material on this error, but wasn't able to find anything that seemed to point me in the required direction. Bottom line: why am I getting this error and what can I do to avoid it?

    Read the article

  • Interactive logon message doesn't show?

    - by user19049
    We set the Group Policy for our domain and OUs to show the logon message (Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options\Interactive logon: Message text for users attempting to log on), but nothing happens when the client computer logs on. I checked RSOP.MSC to see the group policy that was applied, and the message is set on the client computer, but it doesn't show at the logon prompt. What should I do? We have Windows Server 2003 AD and Windows XP Pro on the client computers.

    Read the article

  • Thread loses Message after wait() and notify()

    - by fugu2.0
    Hey guys! I have a problem handling messages in a Thread. My run method looks like this:

        public void run() {
            Looper.prepareLooper();
            parserHandler = new Handler() {
                public void handleMessage(Message msg) {
                    Log.i("", "id from message: " + msg.getData().getString("id"));
                    // handle message
                    this.wait();
                }
            };
        }

    I have several Activities sending messages to this thread, like this:

        Message parserMessage = new Message();
        Bundle data = new Bundle();
        data.putString("id", realId);
        data.putString("callingClass", "CategoryList");
        parserMessage.setData(data);
        parserMessage.what = PARSE_CATEGORIES_OR_PRODUCTS;
        parserHandler = parser.getParserHandler();
        synchronized (parserHandler) {
            parserHandler.notify();
            Log.i("", "message ID: " + parserMessage.getData().getString("id"));
        }
        parserHandler.sendMessage(parserMessage);

    The problem is that the run method logs "id from message: null" even though "message ID" has a value in the log statement. Why does the message "lose" its data when being sent to the thread? Does it have something to do with the notify? Thanks for your help.

    Read the article

  • GAE Task Queue oddness

    - by b3nw
    I have been testing the task queue with mixed success. Currently I am using the default queue with default settings, etc. I have a test URL set up which inserts about 8 tasks into the queue. In short order, all 8 are completed properly. So far so good. The problem comes up when I reload that URL twice within, say, a minute. Watching the task queue, all the tasks are added properly, but it seems only the first batch executes. Yet the "Run in Last Minute" number shows the right number of tasks being run. The request logs tell a different story: they show only the first set of 8 running, but all of the task-creation URLs completing successfully. The odd thing is that if I wait, say, a minute between the task-creation URL requests, it works fine. Oddly enough, changing the bucket_size or execution speed does not seem to help; only the first batch is executed. I have also reduced the number of requests all the way down to 2, and still found that only the first 2 execute. Any others added display the same issue as above. Any suggestions? Thanks

    Read the article

  • Polymorphic Queue

    - by metdos
    Hello everyone, I'm trying to implement a polymorphic queue. Here is my attempt:

        QQueue<Request *> requests;

        while (...) {
            QString line = QString::fromUtf8(client->readLine()).trimmed();
            if (...) {
                Request *request = new Request();
                request->tcpMessage = line.toUtf8();
                request->decodeFromTcpMessage(); // this initializes variables in request using tcpMessage
                if (request->requestType == REQUEST_LOGIN) {
                    LoginRequest loginRequest;
                    request = &loginRequest;
                    request->tcpMessage = line.toUtf8();
                    request->decodeFromTcpMessage();
                    requests.enqueue(request);
                }
                // Here pointers in "requests" do not point to the objects I created above,
                // and I noticed that their destructors are also called.
                LoginRequest *loginRequest2 = dynamic_cast<LoginRequest *>(requests.dequeue());
                loginRequest2->decodeFromTcpMessage();
            }
        }

    Unfortunately, I could not manage to make the polymorphic queue work with this code, for the reason I mentioned in the second comment. I guess I need to use smart pointers, but how? I'm open to any improvement of my code or to a new implementation of a polymorphic queue. Thanks.

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use for an implementation of multiple 'Worker' PCs co-ordinated from a central Queue Manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a web service call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability and, in an ideal world, based on pending tasks too. There is a chance that Workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the co-ordinator can be Windows or Linux. I have (in my initial searches) come across the following, and I know that some are cross-dependent: Celery (with RabbitMQ), Twisted, and Django. Using a framework, rather than home-brewing, seems to make more sense to me right now, and I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a server/worker 'queue management' solution for non-web activities (this is why Django didn't seem the right fit).

    Read the article

  • Spring validator default message codes not resolving

    - by Derek Clarkson
    Hi all, I have a custom Spring validator which has the following default message:

        public @interface FieldMatch {
            String message() default "au.com.xxx.website.FieldMatchValidation";
            ...

    The problem I'm having is that the message code is not being resolved, and <form:error...> is simply displaying the code rather than the message (which is in a properties file that is being used by a ResourceBundleMessageSource). I've also tried this:

        String message() default "{au.com.xxx.website.FieldMatchValidation}";

    which causes the message source to crash with an exception indicating that it thinks the "{}" brackets should contain a number, because it treats them as a parameter placeholder. I think the issue is that the message code is not being seen as a message code and is therefore not resolved, but I cannot figure out why. Any suggestions?

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do. I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text + pics). The app is running in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008), also running Windows Server 2003. I myself am a quite experienced web developer, but I have never actually programmed anything non-web (or at least nothing serious). I plan on adding functionality to the webapp for which I would need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the big deal as such; I already do it inside my webapp using Ghostscript. I've only done it on one-page documents for now, though, and the new functionality will need to process bigger documents. In order for this process to be transparent for the admins as well as the end users, I would like to implement some kind of queue to delay the processing of the thumbnails. Again, creating the queue is no problem: it will consist of records in a table, with enough info to find the PDF document. Then I will need to process this queue, and that's where my questions start. Obviously the best way to process it isn't an ASP script or so, so I will have to get out of my known environment. No problem, but I have no idea which direction to go. Therefore, a few questions:

        1. What should I develop? I presumably need something that sits on standby on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another, more appropriate type of project?
        2. Depending on the first answer, what should the approach be? Should I somehow have SQL Server "tell" the program/service to process the queue, or should that program/service periodically check the state of the queue and handle new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (50 max, maybe), so I can totally afford to treat the queue one item at a time.
        3. Can you confirm I don't have to look much further into threads and such? (I found a lot of answers talking about threads in queue processing, but it looks like overkill for my needs.)
        4. Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Let's suppose it is currently running and a new record comes into the queue; what should the behaviour be?

    I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading me, I hope I'll find some helping hands on here :-)
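    To make the "service that periodically checks the queue" option concrete: the core of such a Windows service is usually nothing more than a background loop that fetches the next pending row, processes it, marks it done, and sleeps when the table is empty. The sketch below assumes that shape; GetNextPendingDocumentId, RenderThumbnails and MarkAsProcessed are made-up placeholders for the real database query and the existing Ghostscript code, not an actual API.

        using System;
        using System.Threading;

        class ThumbnailWorker
        {
            private volatile bool _stopRequested;

            // Started on a background thread from the Windows service's OnStart.
            public void Run()
            {
                while (!_stopRequested)
                {
                    int? documentId = GetNextPendingDocumentId();    // e.g. SELECT TOP 1 ... FROM the queue table
                    if (documentId.HasValue)
                    {
                        RenderThumbnails(documentId.Value);          // existing Ghostscript-based code
                        MarkAsProcessed(documentId.Value);
                    }
                    else
                    {
                        Thread.Sleep(TimeSpan.FromSeconds(30));      // queue empty: poll again later
                    }
                }
            }

            // Called from the service's OnStop.
            public void Stop() { _stopRequested = true; }

            // Placeholders for the real data access and PDF processing.
            private int? GetNextPendingDocumentId() { return null; }
            private void RenderThumbnails(int documentId) { }
            private void MarkAsProcessed(int documentId) { }
        }

    Because there is only ever one loop working through the table one row at a time, the concurrency question largely answers itself: a record that arrives while a document is being processed simply waits for the next iteration.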

    Read the article

  • What's the best way of accessing a DRb object (e.g. Ruby Queue) from Scala (and Java)?

    - by Tom Morris
    I have built a variety of little scripts using Ruby's very simple Queue class, and share the Queue between Ruby and JRuby processes using DRb. It would be nice to be able to access these from Scala (and maybe Java) using JRuby. I've put together something using Scala and the JSR-223 interface to access jruby-complete.jar:

        import javax.script._

        class DRbQueue(host: String, port: Int) {
          private var engine = DRbQueue.factory.getEngineByName("jruby")
          private var invoker = engine.asInstanceOf[Invocable]
          engine.eval("require \"drb\" ")
          private var queue = engine.eval("DRbObject.new(nil, \"druby://" + host + ":" + port.toString + "\")")

          def isEmpty(): Boolean = invoker.invokeMethod(this.queue, "empty?").asInstanceOf[Boolean]
          def size(): Long = invoker.invokeMethod(this.queue, "length").asInstanceOf[Long]
          def threadsWaiting: Long = invoker.invokeMethod(this.queue, "num_waiting").asInstanceOf[Long]
          def offer(obj: Any) = invoker.invokeMethod(this.queue, "push", obj.asInstanceOf[java.lang.Object])
          def poll(): Any = invoker.invokeMethod(this.queue, "pop")
          def clear(): Unit = { invoker.invokeMethod(this.queue, "clear") }
        }

        object DRbQueue {
          var factory = new ScriptEngineManager()
        }

    (It conforms roughly to the java.util.Queue interface, but I haven't declared the interface because it doesn't implement the element and peek methods, since the Ruby class doesn't offer them.) The problem with this is the type conversion. JRuby is fine with Scala's Strings, because they are Java strings. But if I give it a Scala Int or Long, or one of the other Scala types (List, Set, RichString, Array, Symbol) or some other custom type, the conversion doesn't work so cleanly. This seems unnecessarily hacky: surely there has got to be a better way of doing RMI/DRb interop without having to use the JSR-223 API. I could make it so that the offer method serializes the object to, say, a JSON string and takes a structural type allowing only objects that have a toJson method. I could then write a Ruby wrapper class (or just monkeypatch Queue) that would parse the JSON. Is there any point in carrying on with trying to access DRb from Java/Scala? Might it just be easier to install a real message queue? (If so, any suggestions for a lightweight JVM-based MQ?)

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:

        JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
        JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
        JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
        JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.

    1. Recap and Prerequisites

    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples; they also include instructions on how to verify the objects are set up correctly.

    WebLogic Server Objects

        Object Name             Type                 JNDI Name
        TestConnectionFactory   Connection Factory   jms/TestConnectionFactory
        TestJMSQueue            JMS Queue            jms/TestJMSQueue
        eis/wls/TestQueue       Connection Pool      eis/wls/TestQueue

    Schema XSD File

    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.

    stringPayload.xsd

        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>

    JMS Message

    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:

        <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

    JDeveloper Connection

    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to.

    2. Create a BPEL Composite with a JMS Adapter Partner Link

    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process.

    In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model. When a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution for this would be to pause consumption for the queue and resume it again when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.)

    We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won't complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code, which will print the message to standard output, so we can view it in the SOA server's log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one. Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite. Create a JMS Adapter Partner Link In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterRead Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Consume Message Operation Name: Consume_message Consume Operation Parameters Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created in a previous example. JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string . Press Next and Finish, which will complete the JMS Adapter configuration.Save the project. Create a BPEL Component Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK. Wire the JMS Adapter to the BPEL Component Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level. 3 . 
Complete the BPEL Process Design Invoke the BPEL Flow via the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received. At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor. Add a Java Embedding Activity First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary.: System.out.println("JmsAdapterReadSchema process picked up a message"); oracle.xml.parser.v2.XMLElement inputPayload =    (oracle.xml.parser.v2.XMLElement)getVariableData(                           "Receive1_Consume_Message_InputVariable",                           "body",                           "/ns2:exampleElement");   String inputString = inputPayload.getFirstChild().getNodeValue(); System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue()); Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server. 3. Test the Composite Shut Down the JmsAdapterReadSchema Composite After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0] . 
Press the Shut Down button to disable the composite and confirm the following popup. Monitor Messages in the JMS Queue In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail. Send a Test Message In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console. Monitor the SOA Server’s Output A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected. Re-Enable the JmsAdapterReadSchema Composite In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema Note that you can also monitor the payload received by the process, by navigating to the the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too. 4 . Troubleshooting This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc. check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain two tasks:

        Job (pass information from one server to another)
          - Fetch task (get the data, slowly)
          - Send task (send the data, comparatively quickly)

    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:

    Split:
        - Job lease length reflects the job length, rather than the total of the two
        - Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
        - The starting state of the second task is saved to job history, which helps with debugging (although similar logging could be added in the single-task method)

    Single:
        - Single job to be scheduled: less processing overhead
        - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated

    Read the article

  • Task queue java

    - by user268515
    Hi, I'm new to Task Queue concepts. When I referred to the guide, I got stuck on this line:

        queue.add(DatastoreServiceFactory.getDatastoreService().getCurrentTransaction(),
                  TaskOptions().url("/path/to/my/worker"));

    What is the TaskOptions() method? Is it a default method or one I have to create manually, and what does TaskOptions() return? I created a method called TaskOption(); when I make it return a string value, I get the error "The method url(String) is undefined for the type String". In the url, should I specify a servlet or something else? My doubt may be stupid, but please clarify it. Thank you, sharun.

    Read the article
