Search Results

Search found 2619 results on 105 pages for 'edge transport'.

Page 2 of 105

  • Compare images after canny edge detection in OpenCV (C++)

    - by typoknig
    Hi all, I am working on an OpenCV project and I need to compare some images after Canny has been applied to both of them. Before Canny was applied I had the grayscale images populating a histogram and then I compared the histograms, but when Canny is added to the images the histogram does not populate. I have read that a Canny image can populate a histogram, but have not found a way to make it happen. I do not necessarily need to keep using the histograms, I just want to know the best way to compare two Canny images. SSCCE below for you to chew on. I have poached and patched about 75% of this code from books and various sites on the internet, so props to those guys...

    // SLC (Histogram).cpp : Defines the entry point for the console application.
    #include "stdafx.h"
    #include <cxcore.h>
    #include <cv.h>
    #include <cvaux.h>
    #include <highgui.h>
    #include <stdio.h>
    #include <sstream>
    #include <iostream>

    using namespace std;

    IplImage* image1 = 0;
    IplImage* imgHistogram1 = 0;
    IplImage* gray1 = 0;
    CvHistogram* hist1;

    int main(){
        CvCapture* capture = cvCaptureFromCAM(0);
        if(!cvQueryFrame(capture)){
            cout<<"Video capture failed, please check the camera."<<endl;
        }
        else{
            cout<<"Video camera capture successful!"<<endl;
        };
        CvSize sz = cvGetSize(cvQueryFrame(capture));
        IplImage* image = cvCreateImage(sz, 8, 3);
        IplImage* imgHistogram = 0;
        IplImage* gray = 0;
        CvHistogram* hist;
        cvNamedWindow("Image Source",1);
        cvNamedWindow("gray", 1);
        cvNamedWindow("Histogram",1);
        cvNamedWindow("BG", 1);
        cvNamedWindow("FG", 1);
        cvNamedWindow("Canny",1);
        cvNamedWindow("Canny1", 1);
        image1 = cvLoadImage("image bin/use this image.jpg"); // an image has to load here or the program will not run
        //size of the histogram - 1D histogram
        int bins1 = 256;
        int hsize1[] = {bins1};
        //max and min value of the histogram
        float max_value1 = 0, min_value1 = 0;
        //value and normalized value
        float value1;
        int normalized1;
        //ranges - grayscale 0 to 256
        float xranges1[] = { 0, 256 };
        float* ranges1[] = { xranges1 };
        //create an 8 bit single channel image to hold a
        //grayscale version of the original picture
        gray1 = cvCreateImage( cvGetSize(image1), 8, 1 );
        cvCvtColor( image1, gray1, CV_BGR2GRAY );
        IplImage* canny1 = cvCreateImage(cvGetSize(gray1), 8, 1 );
        cvCanny( gray1, canny1, 55, 175, 3 );
        //Create 3 windows to show the results
        cvNamedWindow("original1",1);
        cvNamedWindow("gray1",1);
        cvNamedWindow("histogram1",1);
        //planes to obtain the histogram, in this case just one
        IplImage* planes1[] = { canny1 };
        //get the histogram and some info about it
        hist1 = cvCreateHist( 1, hsize1, CV_HIST_ARRAY, ranges1, 1);
        cvCalcHist( planes1, hist1, 0, NULL);
        cvGetMinMaxHistValue( hist1, &min_value1, &max_value1);
        printf("min: %f, max: %f\n", min_value1, max_value1);
        //create an 8 bit single channel image to hold the histogram
        //paint it white
        imgHistogram1 = cvCreateImage(cvSize(bins1, 50),8,1);
        cvRectangle(imgHistogram1, cvPoint(0,0), cvPoint(256,50), CV_RGB(255,255,255),-1);
        //draw the histogram :P
        for(int i=0; i < bins1; i++){
            value1 = cvQueryHistValue_1D( hist1, i);
            normalized1 = cvRound(value1*50/max_value1);
            cvLine(imgHistogram1, cvPoint(i,50), cvPoint(i,50-normalized1), CV_RGB(0,0,0));
        }
        //show the image results
        cvShowImage( "original1", image1 );
        cvShowImage( "gray1", gray1 );
        cvShowImage( "histogram1", imgHistogram1 );
        cvShowImage( "Canny1", canny1);
        CvBGStatModel* bg_model = cvCreateFGDStatModel( image );
        for(;;){
            image = cvQueryFrame(capture);
            cvUpdateBGStatModel( image, bg_model );
            //Size of the histogram - 1D histogram
            int bins = 256;
            int hsize[] = {bins};
            //Max and min value of the histogram
            float max_value = 0, min_value = 0;
            //Value and normalized value
            float value;
            int normalized;
            //Ranges - grayscale 0 to 256
            float xranges[] = {0, 256};
            float* ranges[] = {xranges};
            //Create an 8 bit single channel image to hold a grayscale version of the original picture
            gray = cvCreateImage(cvGetSize(image), 8, 1);
            cvCvtColor(image, gray, CV_BGR2GRAY);
            IplImage* canny = cvCreateImage(cvGetSize(gray), 8, 1 );
            cvCanny( gray, canny, 55, 175, 3 ); //55, 175, 3 with direct light
            //Planes to obtain the histogram, in this case just one
            IplImage* planes[] = {canny};
            //Get the histogram and some info about it
            hist = cvCreateHist(1, hsize, CV_HIST_ARRAY, ranges, 1);
            cvCalcHist(planes, hist, 0, NULL);
            cvGetMinMaxHistValue(hist, &min_value, &max_value);
            //printf("Minimum Histogram Value: %f, Maximum Histogram Value: %f\n", min_value, max_value);
            //Create an 8 bit single channel image to hold the histogram and paint it white
            imgHistogram = cvCreateImage(cvSize(bins, 50),8,3);
            cvRectangle(imgHistogram, cvPoint(0,0), cvPoint(256,50), CV_RGB(255,255,255),-1);
            //Draw the histogram
            for(int i=0; i < bins; i++){
                value = cvQueryHistValue_1D(hist, i);
                normalized = cvRound(value*50/max_value);
                cvLine(imgHistogram, cvPoint(i,50), cvPoint(i,50-normalized), CV_RGB(0,0,0));
            }
            double correlation = cvCompareHist (hist1, hist, CV_COMP_CORREL);
            double chisquare = cvCompareHist (hist1, hist, CV_COMP_CHISQR);
            double intersection = cvCompareHist (hist1, hist, CV_COMP_INTERSECT);
            double bhattacharyya = cvCompareHist (hist1, hist, CV_COMP_BHATTACHARYYA);
            double difference = (1 - correlation) + chisquare + (1 - intersection) + bhattacharyya;
            printf("correlation: %f\n", correlation);
            printf("chi-square: %f\n", chisquare);
            printf("intersection: %f\n", intersection);
            printf("bhattacharyya: %f\n", bhattacharyya);
            printf("difference: %f\n", difference);
            cvShowImage("Image Source", image);
            cvShowImage("gray", gray);
            cvShowImage("Histogram", imgHistogram);
            cvShowImage( "Canny", canny);
            cvShowImage("BG", bg_model->background);
            cvShowImage("FG", bg_model->foreground);
            //Page 19 paragraph 3 of "Learning OpenCV" tells us why we DO NOT use "cvReleaseImage(&image)" in this section
            cvReleaseImage(&imgHistogram);
            cvReleaseImage(&gray);
            cvReleaseHist(&hist);
            cvReleaseImage(&canny);
            char c = cvWaitKey(10);
            //if ASCII key 27 (esc) is pressed then loop breaks
            if(c==27) break;
        }
        cvReleaseBGStatModel( &bg_model );
        cvReleaseImage(&image);
        cvReleaseCapture(&capture);
        cvDestroyAllWindows();
    }
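    One detail worth noting: a Canny output is a binary image (every pixel is 0 or 255), so a 256-bin grayscale histogram of it collapses into just two spikes, and comparing such histograms tells you very little about where the edges actually are. If histograms are not a hard requirement, a direct pixel-overlap measure is a simple alternative. Below is a minimal sketch of that idea using OpenCV's Java bindings; the question uses the legacy C API, so treat this purely as an illustration of the comparison, not a drop-in fix, and it assumes the OpenCV Java bindings and native library are installed.

    import org.opencv.core.Core;
    import org.opencv.core.Mat;

    public class EdgeMapCompare {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        // Fraction of pixels that differ between two same-sized, single-channel Canny outputs.
        // 0.0 means identical edge maps, 1.0 means completely different.
        static double edgeDifference(Mat cannyA, Mat cannyB) {
            Mat diff = new Mat();
            Core.bitwise_xor(cannyA, cannyB, diff);   // pixels set in exactly one of the two maps
            return (double) Core.countNonZero(diff) / (cannyA.rows() * cannyA.cols());
        }
    }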

    Read the article

  • Multi-dimensional array edge/border conditions

    - by kirbuchi
    Hi, I'm iterating over a 3-dimensional array (an image with 3 values per pixel) to apply a 3x3 filter to each pixel as follows:

    //For each value on the image
    for (i=0; i<3*width*height; i++){
        //For each filter value
        for (j=0; j<9; j++){
            if (notOutsideEdgesCondition){
                *(**(outArray)+i) += *(**(pixelArray)+i-1+(j%3)) * (*(filter+j));
            }
        }
    }

    I'm using pointer arithmetic because with array notation I'd have 4 loops, and I'm trying to have the smallest possible number of loops. My problem is that my notOutsideEdgesCondition is getting out of hand because I have to consider 8 border cases. I have the following conditions handled:

    Left column:  ((i%width)==0) && (j%3==0)
    Right column: ((i-1)%width==0) && (i>1) && (j%3==2)
    Upper row:    (i<width) && (j<2)
    Lower row:    (i>(width*height-width)) && (j>5)

    and I still have to consider the 4 corner cases, which will have even longer expressions. At this point I've stopped and asked myself whether this is the best way to go, because a conditional that is 5 lines long will not only be truly painful to debug but will also slow down the inner loop. That's why I come to you to ask if there's a known algorithm to handle these cases, or a better approach for my problem. Thanks a lot.
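    One way to avoid enumerating the eight border cases is to loop over x and y explicitly, compute each filter tap's neighbour coordinates, and skip taps that fall off the image: a single bounds test replaces all the special cases. A minimal Java sketch of the idea (single channel and hypothetical names for brevity; the original interleaves 3 channels):

    public class Filter3x3 {
        // in and out are width*height single-channel images in row-major order;
        // filter holds the 9 kernel weights, row by row.
        static void convolve3x3(double[] in, double[] out, double[] filter, int width, int height) {
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    double acc = 0;
                    for (int j = 0; j < 9; j++) {
                        int nx = x + (j % 3) - 1;   // neighbour column: -1, 0, +1 around x
                        int ny = y + (j / 3) - 1;   // neighbour row:    -1, 0, +1 around y
                        // One bounds test replaces the eight edge/corner special cases.
                        if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                        acc += in[ny * width + nx] * filter[j];
                    }
                    out[y * width + x] = acc;
                }
            }
        }
    }

    The extra loop nesting costs little in practice, since the nine multiply-adds dominate the per-pixel work either way, and the bounds test is far easier to debug than a long hand-written border condition.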

    Read the article

  • JAVA Casting error

    - by user1612725
    I'm creating a program that uses Dijkstra's algorithm. I'm using nodes that represent cities on an imported map, and you can create edges between two cities on the map. Every edge has a "weight" that represents a distance in minutes, and I have a function where I want to see the distance between the two edges. But I keep getting the error "Cannot cast from Stad to Edge" at the line Edge<Stad> selectedEdge = (Edge) fvf.visaFörbLista.getSelectedValue(); where "Stad" represents the city and "Edge" an edge.

    FormVisaförbindelse fvf = new FormVisaförbindelse();
    for(;;){
        try{
            int svar = showConfirmDialog(null, fvf, "Ändra Förbindelser", JOptionPane.OK_CANCEL_OPTION);
            if (svar != YES_OPTION) return;
            if (fvf.visaFörbLista.isSelectionEmpty() == true){
                showMessageDialog(mainMethod.this, "En Förbindelse måste valjas.","Fel!", ERROR_MESSAGE);
                return;
            }
            Edge<Stad> selectedEdge = (Edge) fvf.visaFörbLista.getSelectedValue();
            FormÄndraförbindelse faf = new FormÄndraförbindelse();
            faf.setförbNamn(selectedEdge.getNamn());
            for(;;){
                try{
                    int svar2 = showConfirmDialog(mainMethod.this, faf, "Ändra Förbindelse", OK_CANCEL_OPTION);
                    if (svar2 != YES_OPTION) return;
                    selectedEdge.setVikt(faf.getförbTid());
                    List<Edge<Stad>> edges = lg.getEdgesBetween(sB, sA);
                    for (Edge<Stad> edge : edges){
                        if (edge.getNamn()==selectedEdge.getNamn()){
                            edge.setVikt(faf.getförbTid());
                        }
                    }
                    return;
                } catch(NumberFormatException e){
                    showMessageDialog(mainMethod.this, "Ogiltig inmatning.","Fel!", ERROR_MESSAGE);
                }
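    The compile error most likely means the JList (or its model) is parameterized over Stad, so getSelectedValue() already returns a Stad, and a Stad can never be cast to an Edge. If that list is meant to show connections, declaring it over Edge<Stad> removes the cast entirely. The sketch below is self-contained; the Stad/Edge stubs are stand-ins, since the real class declarations aren't shown in the question. It also compares names with equals() instead of ==, which is a separate bug in the posted loop.

    import javax.swing.DefaultListModel;
    import javax.swing.JList;

    public class SelectedEdgeDemo {
        // Stand-ins for the question's classes (hypothetical, just enough to compile).
        static class Stad { final String namn; Stad(String n) { namn = n; } }
        static class Edge<T> {
            final String namn; double vikt;
            Edge(String n) { namn = n; }
            String getNamn() { return namn; }
            void setVikt(double v) { vikt = v; }
        }

        public static void main(String[] args) {
            // Parameterize the list over the element type you actually store in it.
            DefaultListModel<Edge<Stad>> model = new DefaultListModel<>();
            model.addElement(new Edge<>("E4"));
            JList<Edge<Stad>> lista = new JList<>(model);
            lista.setSelectedIndex(0);

            Edge<Stad> selected = lista.getSelectedValue();  // typed correctly, no cast needed

            // Compare names with equals(), not ==
            for (int i = 0; i < model.size(); i++) {
                Edge<Stad> e = model.get(i);
                if (e.getNamn().equals(selected.getNamn())) {
                    e.setVikt(42.0);
                }
            }
            System.out.println("updated weight: " + selected.vikt);
        }
    }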

    Read the article

  • Oracle B2B 11g - Transport Layer Acknowledgement

    - by Nitesh Jain Oracle
    In the healthcare industry, an acknowledgement or response should be sent back very quickly: as soon as any message is received, an acknowledgement should be sent back to the trading partner. Oracle B2B provides a way to send the acknowledgement or response from the transport layer of MLLP; this is called an immediate acknowledgment. The immediate acknowledgment is generated and transmitted in the transport layer. It is an alternative to the functional acknowledgment, which is generated after the data has been processed and validated in the document layer. Oracle B2B provides four types of immediate acknowledgment:
    Default: Oracle B2B parses the incoming HL7 message and generates an acknowledgment from it. This mode uses the details from the incoming payload and generates the acknowledgement based on the incoming HL7 message control number, sender and application identification. By default, an immediate ACK is a generic ACK. The trigger event can also be sent back by using the Map Trigger Event property. If mapping the MSH.10 of the ACK to the MSH.10 of the incoming business message is required, enable the Map ACK Control ID property.
    Simple: B2B sends a predefined acknowledgment message to the sender without parsing the incoming message.
    Custom: custom immediate ACK/response mode lets you define your own response/acknowledgement, configured through a file in the Custom Immediate ACK File property.
    Negative: an immediate ACK is returned only in the case of exceptions.

    Read the article

  • Scaling-out Your Services by Message Bus based WCF Transport Extension – Part 1 – Background

    - by Shaun
    Cloud computing gives us more flexibility with computing resources: we can provision and deploy an application or service with multiple instances over multiple machines. As the number of service instances grows, how to balance the incoming messages and workload becomes a new challenge. Currently there are two approaches we can use to pass the incoming messages to the service instances; I would like to call them dispatcher mode and pulling mode.

    Dispatcher Mode

    The dispatcher mode introduces a role that takes responsibility for finding the best service instance to process each request. The image below describes the shape of this mode. Four clients communicate with the service through the underlying transport; for example, if we are using HTTP the clients might be connecting to the same service URL. On the server side there is a dispatcher listening on this URL that tries to retrieve all messages. When a message comes in, the dispatcher finds a proper service instance to process it. There are three mechanisms for finding the instance:
    Round-robin: the dispatcher always sends the message to the next instance. For example, if the dispatcher sent a message to instance 2, the next message will be sent to instance 3, regardless of whether instance 3 is busy at that moment.
    Random: the dispatcher picks a service instance randomly and, as with round-robin, regardless of whether the instance is busy.
    Sticky: the dispatcher sends all related messages to the same service instance. This approach is typically used if the service methods are stateful or session-based.

    But as you can see, none of these approaches is really load balanced. Clients send messages at any time, and each message may take a different processing time on the server side. This means that in some cases some service instances are very busy while others are almost idle. For example, with round-robin it could happen that most of the simple task messages are passed to instance 1 while the complex ones are sent to instance 3, even though instance 1 is idle. This brings several problems to our architecture. The first is that the response to the clients might take longer than it should: as shown in the figure above, messages 6 and 9 could be processed by instance 1 or instance 2, but in reality they were dispatched to the busy instance 3 because of the round-robin dispatcher. Secondly, if many requests come from the clients in a very short period, the service instances might fill up with pending tasks and some instances might crash. Third, if we are using a cloud platform to host our service instances, for example Windows Azure, the computing resource is billed by service deployment period instead of actual CPU usage, so any idle service instance is wasting our money. Finally, the dispatcher is the bottleneck of our system, since all incoming messages must be routed through it. If we are using HTTP or TCP as the transport, the dispatcher would be a network load balancer; if we want more capacity we have to scale it up, or buy a hardware load balancer which is very expensive, in addition to scaling out the service instances.
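    As an aside, the round-robin selection described above is tiny to implement, which is part of why it is so common despite being blind to load. Here is a minimal Java sketch (the ServiceInstance interface is hypothetical, and the series itself targets WCF/C#, so this is only to make the idea concrete):

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RoundRobinDispatcher {
        public interface ServiceInstance { void process(String message); }

        private final List<ServiceInstance> instances;
        private final AtomicInteger next = new AtomicInteger();

        public RoundRobinDispatcher(List<ServiceInstance> instances) {
            this.instances = instances;
        }

        public void dispatch(String message) {
            // Pick the next instance regardless of its current load -- exactly the
            // weakness discussed above: simple tasks and complex tasks are spread blindly.
            int index = Math.floorMod(next.getAndIncrement(), instances.size());
            instances.get(index).process(message);
        }
    }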
    Pulling Mode

    Pulling mode doesn't need a dispatcher to route the messages. All service instances listen on the same transport and, whenever they are idle, try to retrieve the next proper message to process. Since there is no dispatcher, pulling mode requires some features from the transport. The transport must support multiple clients connecting and listening at the same time; HTTP and TCP do not allow multiple clients to listen on the same address and port, so they cannot be used for pulling mode directly. All messages in the transport must be FIFO, which means the older message must be received before the newer one. Message selection is a plus: both service and client can specify selection criteria and receive only particular kinds of messages. This feature is not mandatory, but it is very useful when implementing the request-reply and duplex WCF channel modes; otherwise we must keep an in-memory dictionary to store the reply messages. I will explain more about this in the following articles.

    A message bus, or message queue, is the best candidate transport for the pulling mode. It allows multiple applications to listen on the same queue, and it is FIFO. Some message buses also support message selection, such as TIBCO EMS and RabbitMQ; others, for example Redis, provide an in-memory dictionary which can store the reply messages.

    The principle of pulling mode is to let the service instances be self-managed: each instance retrieves the next pending incoming message as soon as it has finished its current task. This gives us more benefits and solves the problems we met in the dispatcher mode. Incoming messages are received by the best instance available, so the load is genuinely balanced; it will not happen that some instances are busy while others are idle, since an idle instance simply pulls more work. Because all instances keep themselves busy, we can use fewer instances than in dispatcher mode, which is more cost effective. Since there is no dispatcher, there is no bottleneck, and when we introduce more service instances nothing extra has to change, whereas in dispatcher mode the dispatcher must be told about the new instances. Finally, if many messages arrive at once, the message bus queues them in the transport, so the service instances will not crash.

    Those are the benefits of using pulling mode, but it introduces some problems as well. Process tracking and debugging become more difficult: since the service instances are self-managed we cannot know in advance which instance will process a message, so we need more information to support debugging and tracing. Real-time response may also not be supported: each instance only processes the next message after the current one has finished, so if we have real-time requirements this may not be a good solution.

    Comparing the pros and cons above, pulling mode is the better solution for a distributed system architecture, because what we need most is scalability, cost effectiveness and self-management.
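    To make the pulling idea concrete before the WCF details start, here is a minimal, language-agnostic sketch in Java (hypothetical message strings, and an in-memory queue standing in for the message bus): several self-managed workers compete for the same FIFO queue, and whichever instance is idle takes the next message, so no dispatcher is involved.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class PullingModeDemo {
        public static void main(String[] args) throws InterruptedException {
            // One shared, FIFO "bus": every service instance listens on the same queue.
            BlockingQueue<String> bus = new LinkedBlockingQueue<>();

            // Three self-managed service instances competing for messages.
            for (int id = 1; id <= 3; id++) {
                final int instance = id;
                Thread worker = new Thread(() -> {
                    try {
                        while (true) {
                            String msg = bus.take();           // an idle instance pulls the next pending message
                            Thread.sleep(msg.length() * 10L);  // pretend processing time varies per message
                            System.out.println("instance " + instance + " processed " + msg);
                        }
                    } catch (InterruptedException ignored) { }
                });
                worker.setDaemon(true);
                worker.start();
            }

            // Clients just drop messages on the bus; busy instances simply pull less often.
            for (int i = 1; i <= 10; i++) bus.put("message-" + i);
            Thread.sleep(2000);
        }
    }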
    WCF and WCF Transport Extensibility

    Windows Communication Foundation (WCF) is a framework for building service-oriented applications, and in the .NET world it is the standard way to implement a service. In this series I'm going to demonstrate how to implement the pulling mode on top of a message bus by extending WCF. I won't go deep into every related area of WCF, but will highlight its transport extensibility.

    When we implement an RPC foundation there are many aspects to deal with, for example message encoding, encryption, authentication, and message sending and receiving. In WCF, each aspect is represented by a channel. A message passes through all necessary channels and is finally handed to the underlying transport; on the other side the message is received from the transport and travels back through the same channels up to the business logic. This model is called the "channel stack" in WCF, and the last channel in the stack must always be a transport channel, which is responsible for sending and receiving the messages. Since we are going to implement WCF over a message bus with the pulling-mode scaling-out solution, we need to create our own transport channel so that the client and service can exchange messages over our bus.

    Before diving into the transport channel, let's look at the message exchange patterns that WCF defines. A message exchange pattern (MEP) defines how client and service exchange messages over the transport. WCF defines three basic MEPs: datagram, request-reply and duplex.
    Datagram: also known as one-way, or fire-and-forget mode. The message is sent from the client to the service, and no reply from the service is needed; the client doesn't care about the message result at all.
    Request-Reply: a very commonly used pattern. The client sends the request message to the service and waits until the reply message comes back from the service.
    Duplex: the client sends a message to the service, and while the service is processing it, the service can call back to the client. During the callback the service acts like a client and the client acts like a service.

    In WCF, each MEP has a set of channel interfaces associated with it:

    MEP            Channels
    Datagram       IInputChannel, IOutputChannel
    Request-Reply  IRequestChannel, IReplyChannel
    Duplex         IDuplexChannel

    The channels are created by a ChannelListener on the server side and a ChannelFactory on the client side. The ChannelListener and ChannelFactory are created by the TransportBindingElement, and the TransportBindingElement is created by the Binding, which can be defined as a new binding or assembled from a custom binding. For more information about the transport channel model, please refer to the MSDN documentation. The figure below shows the transport channel objects when using the request-reply MEP; similar figures show the datagram and duplex MEPs.

    After investigating the WCF transport architecture, channel model and MEPs, we can finally identify what we need to build for our message bus based transport layer:
    Binding: (optional) defines the channel elements in the channel stack and adds our transport binding element at the bottom of the stack; we can also use the built-in CustomBinding.
    TransportBindingElement: defines which MEPs our transport supports and creates the related ChannelListener and ChannelFactory. It also defines the scheme of the endpoints that use this transport.
    ChannelListener: creates the server-side channel for a given MEP. We can have one ChannelListener create channels for all supported MEPs, or one ChannelListener per MEP. In this series I will use the second approach.
    ChannelFactory: creates the client-side channel for a given MEP. We can have one ChannelFactory create channels for all supported MEPs, or one ChannelFactory per MEP. In this series I will use the second approach.
Channels: Based on the MEPs we want to support, we need to implement the channels accordingly. For example, if we want our transport support Request-Reply mode we should implement IRequestChannel and IReplyChannel. In this series I will implement all 3 MEPs listed above one by one. Scaffold: In order to make our transport extension works we also need to implement some scaffold stuff. For example we need some classes to send and receive message though out message bus. We also need some codes to read and write the WCF message, etc.. These are not necessary but would be very useful in our example.   Message Bus There is only one thing remained before we can begin to implement our scaling-out support WCF transport, which is the message bus. As I mentioned above, the message bus must have some features to fulfill all the WCF MEPs. In my company we will be using TIBCO EMS, which is an enterprise message bus product. And I have said before we can use any message bus production if it’s satisfied with our requests. Here I would like to introduce an interface to separate the message bus from the WCF. This allows us to implement the bus operations by any kinds bus we are going to use. The interface would be like this. 1: public interface IBus : IDisposable 2: { 3: string SendRequest(string message, bool fromClient, string from, string to = null); 4:  5: void SendReply(string message, bool fromClient, string replyTo); 6:  7: BusMessage Receive(bool fromClient, string replyTo); 8: } There are only three methods for the bus interface. Let me explain one by one. The SendRequest method takes the responsible for sending the request message into the bus. The parameters description are: message: The WCF message content. fromClient: Indicates if this message was came from the client. from: The channel ID that this message was sent from. The channel ID will be generated when any kinds of channel was created, which will be explained in the following articles. to: The channel ID that this message should be received. In Request-Reply and Duplex MEP this is necessary since the reply message must be received by the channel which sent the related request message. The SendReply method takes the responsible for sending the reply message. It’s very similar as the previous one but no “from” parameter. This is because it’s no need to reply a reply message again in any MEPs. The Receive method takes the responsible for waiting for a incoming message, includes the request message and specified reply message. It returned a BusMessage object, which contains some information about the channel information. The code of the BusMessage class is 1: public class BusMessage 2: { 3: public string MessageID { get; private set; } 4: public string From { get; private set; } 5: public string ReplyTo { get; private set; } 6: public string Content { get; private set; } 7:  8: public BusMessage(string messageId, string fromChannelId, string replyToChannelId, string content) 9: { 10: MessageID = messageId; 11: From = fromChannelId; 12: ReplyTo = replyToChannelId; 13: Content = content; 14: } 15: } Now let’s implement a message bus based on the IBus interface. Since I don’t want you to buy and install the TIBCO EMS or any other message bus products, I will implement an in process memory bus. This bus is only for test and sample purpose. It can only be used if the service and client are in the same process. Very straightforward. 
1: public class InProcMessageBus : IBus 2: { 3: private readonly ConcurrentDictionary<Guid, InProcMessageEntity> _queue; 4: private readonly object _lock; 5:  6: public InProcMessageBus() 7: { 8: _queue = new ConcurrentDictionary<Guid, InProcMessageEntity>(); 9: _lock = new object(); 10: } 11:  12: public string SendRequest(string message, bool fromClient, string from, string to = null) 13: { 14: var entity = new InProcMessageEntity(message, fromClient, from, to); 15: _queue.TryAdd(entity.ID, entity); 16: return entity.ID.ToString(); 17: } 18:  19: public void SendReply(string message, bool fromClient, string replyTo) 20: { 21: var entity = new InProcMessageEntity(message, fromClient, null, replyTo); 22: _queue.TryAdd(entity.ID, entity); 23: } 24:  25: public BusMessage Receive(bool fromClient, string replyTo) 26: { 27: InProcMessageEntity e = null; 28: while (true) 29: { 30: lock (_lock) 31: { 32: var entity = _queue 33: .Where(kvp => kvp.Value.FromClient == fromClient && (kvp.Value.To == replyTo || string.IsNullOrWhiteSpace(kvp.Value.To))) 34: .FirstOrDefault(); 35: if (entity.Key != Guid.Empty && entity.Value != null) 36: { 37: _queue.TryRemove(entity.Key, out e); 38: } 39: } 40: if (e == null) 41: { 42: Thread.Sleep(100); 43: } 44: else 45: { 46: return new BusMessage(e.ID.ToString(), e.From, e.To, e.Content); 47: } 48: } 49: } 50:  51: public void Dispose() 52: { 53: } 54: } The InProcMessageBus stores the messages in the objects of InProcMessageEntity, which can take some extra information beside the WCF message itself. 1: public class InProcMessageEntity 2: { 3: public Guid ID { get; set; } 4: public string Content { get; set; } 5: public bool FromClient { get; set; } 6: public string From { get; set; } 7: public string To { get; set; } 8:  9: public InProcMessageEntity() 10: : this(string.Empty, false, string.Empty, string.Empty) 11: { 12: } 13:  14: public InProcMessageEntity(string content, bool fromClient, string from, string to) 15: { 16: ID = Guid.NewGuid(); 17: Content = content; 18: FromClient = fromClient; 19: From = from; 20: To = to; 21: } 22: }   Summary OK, now I have all necessary stuff ready. The next step would be implementing our WCF message bus transport extension. In this post I described two scaling-out approaches on the service side especially if we are using the cloud platform: dispatcher mode and pulling mode. And I compared the Pros and Cons of them. Then I introduced the WCF channel stack, channel mode and the transport extension part, and identified what we should do to create our own WCF transport extension, to let our WCF services using pulling mode based on a message bus. And finally I provided some classes that need to be used in the future posts that working against an in process memory message bus, for the demonstration purpose only. In the next post I will begin to implement the transport extension step by step.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • A Deep Dive into Transport Queues (Part 2)

    Johan Veldhuis completes his 'Deep Dive' by plunging even deeper into the mysteries of MS Exchange's Transport queues that are used to temporarily store messages which are waiting until they are passed through to the next stage, and explains how to change the way they work via configuration settings.

    Read the article

  • A Deep Dive into Transport Queues - Part 1

    Submission queues? Poison message queues? Johan Veldhuis unlocks the mysteries of MS Exchange's Transport queues, which are used to temporarily store messages waiting until they are passed through to the next stage, and explains how to manage these queues.

    Read the article

  • How to implement soft edge areas with particles

    - by OpherV
    My game is created using Phaser, but the question itself is engine-agnostic. In my game I have several environments: essentially polygonal areas that player characters can move into and be affected by, for example ice, fire or poison. The graphic element of these areas is the color-filled polygon itself, plus particles of the suitable type (in this example ice shards). This is how I'm currently implementing it - with a polygon mask covering a tilesprite with the particle pattern. The hard edge looks bad. I'd like to improve it by doing two things: 1. Making the polygon fill area have a soft edge and blend into the background. 2. Having some of the shards go outside the polygon area, so that they are not cut in the middle and the area doesn't end in a straight line, for example (mockup). I think 1 can be achieved by blurring the polygon, but I'm not sure how to go about 2. How would you go about implementing this?

    Read the article

  • Ubuntu on Thinkpad Edge 11

    - by lasseespeholt
    Hi, I think a community wiki on problems (and solutions) when installing Ubuntu (10.10) on a ThinkPad Edge 11 would be nice (because I just got one ;)). I'll contribute my own problems and solutions, and hope others will join too. Thinkwiki entry for the Edge 11. Known problems: No wifi driver; solution: answer #1, answer #2. Fan is loud, even though it's on auto; no solution yet. Thinkfan is a possible solution, but correction values for the sensors should be supplied (mapping sensors to specific areas). Also, one sensor reads between -100C and +100C - maybe some kind of deactivation would help.

    Read the article

  • Why is JavaMail Transport.send() a static method?

    - by skiphoppy
    I'm revising code I did not write that uses JavaMail, and having a little trouble understanding why the JavaMail API is designed the way it is. I have the feeling that if I understood, I could be doing a better job. We call: transport = session.getTransport("smtp"); transport.connect(hostName, port, user, password); So why is Eclipse warning me that this: transport.send(message, message.getAllRecipients()); is a call to a static method? Why am I getting a Transport object and providing settings that are specific to it if I can't use that object to send the message? How does the Transport class even know what server and other settings to use to send the message? It's working fine, which is hard to believe. What if I had instantiated Transport objects for two different servers; how would it know which one to use? In the course of writing this question, I've discovered that I should really be calling: transport.sendMessage(message, message.getAllRecipients()); So what is the purpose of the static Transport.send() method? Is this just poor design, or is there a reason it is this way?
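    For what it's worth, the distinction shows up clearly when the two calls are placed side by side: the static Transport.send() is a convenience method that opens its own connection using the session's properties (so any Transport instance you connected yourself is ignored), while the instance method sendMessage() uses the connection you opened with connect(). A small sketch (hypothetical host and credentials):

    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;

    public class SendMailDemo {
        public static void main(String[] args) throws MessagingException {
            Properties props = new Properties();
            props.put("mail.smtp.host", "smtp.example.com");   // hypothetical server
            Session session = Session.getInstance(props);

            Message message = new MimeMessage(session);
            message.setFrom(new InternetAddress("from@example.com"));
            message.setRecipient(Message.RecipientType.TO, new InternetAddress("to@example.com"));
            message.setSubject("test");
            message.setText("hello");

            // Option 1 - static convenience method: connects on its own using the
            // session's properties, sends, and closes. A previously connected
            // Transport object plays no part in this call.
            Transport.send(message);

            // Option 2 - instance method: uses the connection *you* opened, so the
            // host, port and credentials passed to connect() actually apply.
            Transport transport = session.getTransport("smtp");
            try {
                transport.connect("smtp.example.com", 587, "user", "password"); // hypothetical credentials
                message.saveChanges();   // Transport.send() does this for you; sendMessage() does not
                transport.sendMessage(message, message.getAllRecipients());
            } finally {
                transport.close();
            }
        }
    }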

    Read the article

  • Can't install Transport HUB in Exchange 2010

    - by Kelly Jones
    When building my latest SharePoint 2010 demo virtual machines, I decided to try installing Exchange 2010 as well.  Now, I’m not an Exchange admin, but I thought “how hard can this be?”  Well, a little more than I thought. Pretty early during the install, I got an error saying that it couldn’t “install Transport HUB”.  I double checked that my VM was meeting all of the requirements, both hardware and software, and everything looked fine. After much researching, it turns out that the error was caused by not having IPv6 enabled on the network adapter inside the virtual machine.  I had turned it off because I thought I wouldn’t need it.  I guess Exchange 2010 does.

    Read the article

  • Upcoming Customer WebCast: Adapters and JCA Transport in Oracle Service Bus 11g

    - by MariaSalzberger
    There is an upcoming webcast planned for September 19th that will show how to implement services using a JCA adapter in Oracle Service Bus 11g. The session will help you make use of existing resources, such as samples and information centers for adapters, in the context of Oracle Service Bus. Topics covered in the webcast are: a JCA Transport overview, with inbound and outbound scenarios using JCA adapters; implementation of an end-to-end use case using an inbound file adapter and an outbound database adapter in Oracle Service Bus; how to find information on supported adapters in a certain version of OSB 11g; available adapter samples for OSB and SOA; how to use SOA adapter samples with Oracle Service Bus; and a live demo of an adapter sample implementation in Oracle Service Bus, plus information centers for adapters and Oracle Service Bus. The presentation recording can be found here after the webcast. Select "Oracle Fusion Middleware" as the product. (https://support.oracle.com/rs?type=doc&id=740966.1) The schedule for future webcasts can be found in the above mentioned document as well.

    Read the article

  • Deactivate dead OCS 2007 R2 Edge Server?

    - by slashp
    I'm having a surprising issue where our old OCS 2007 R2 Edge server died of hardware failure (no backup) in the middle of our move to Lync. How can I forcefully remove the Edge server from the organization without being able to deactivate the role from the server itself? I've noticed the correct procedure for uninstalling OCS 2007 R2 is as follows: If you are removing an Edge Server, a Mediation Server, an Archiving Server, or a Monitoring Server, remove the Office Communications Server 2007 R2 components in the following sequence:
    1. Microsoft Office Communications Server 2007 R2 Edge Server
    2. Microsoft Office Communications Server 2007 R2 Mediation Server
    3. Microsoft Office Communications Server 2007 R2 Archiving Server
    4. Microsoft Office Communications Server 2007 R2 Monitoring Server
    5. Microsoft Office Communications Server 2007 R2 Core Components
    6. Microsoft Office Communications Server 2007 R2 Unified Communications Managed API 2.0 Core Redistribution package
    And to deactivate an Edge server: http://technet.microsoft.com/en-us/library/dd572832(v=office.13).aspx Any advice would be greatly appreciated.

    Read the article

  • Log centralization, display, transport and aggregation at scale v2

    - by Eric DANNIELOU
    This is a duplicate of Log transport and aggregation at scale and http://stackoverflow.com/questions/1737693/whats-the-best-practice-for-centralised-logging, but the answers might differ now: the software described in 2009 may have changed since then (for example Octopussy evolved from version 0.9 to 1.0.5), rsyslog has become the default on most Linux distros, and requirements have changed (security, software configuration management, ...). I'd like to ask the following questions: How do you centralize, display and archive system logs? How would you do it now if you had to start over? Most Linux distros use rsyslog nowadays, which can provide reliable log transport, but some older Unices, network devices and maybe Windows boxes still use the old UDP RFC-style transport. How did you manage to get reliable transport? Storing logs for a few months can represent a huge amount of disk space. How do you store them? An RDBMS? Compressed and encrypted text files?

    Read the article

  • Lync Edge and Exchange Server: how to have access to my exchange mailbox from external network and also to the OWA

    - by Garcia Julien
    I have some problems with the configuration of Exchange 2010. My topology is as follows:
    Server1 = Domain Controller
    Server2 = Exchange Server
    Server3 = Lync Server
    Server4 = Lync Edge
    Our public address (the one accessible from the outside world) is directed to Server4. I would like to have access to my Exchange mailbox from an external network and also to OWA. Could you help me with the configuration of those servers? Thanks in advance, Julien

    Read the article

  • What Triggers a synchronization between Exchange 2007 Mail Store and Edge Servers?

    - by BillN
    We are using Exchange 2007 for our mail. In our configuration, we need to add an alias to each user's mailbox. When we do, the Edge server, another Exchange 2007 box, will reject the alias with a User Unknown error until the next morning. I seem to recall that in Exchange 2003 you could force an update from the Management Console, but I cannot find a way in 2007. It is obvious that a sync job is scheduled to run each night, but I cannot find it.

    Read the article

  • Executive Edge: It's the end of work as we know it

    - by Naresh Persaud
    If you are at Oracle OpenWorld, it has been an exciting couple of days, from Larry's keynote to the events at the Executive Edge. The CSO Summit was included as a program within the Executive Edge this year. The day started with a great presentation from Joel Brenner, author of "America The Vulnerable", as he discussed the impact of state-sponsored espionage on businesses. The opportunity for every business is to turn security into a business advantage. As we enter an inhospitable security climate, every business has to adapt to the security climate change. Amit Jasuja's presentation focused on how customers can secure the new digital experience. As every sector of the economy transforms to adapt to changing global economic pressures, every business has to adapt. For IT organizations, the biggest transformation will involve cloud, mobile and social. Organizations that can get security right in the "new work order" will have an advantage. It is truly the end of work as we know it. The "new work order" means working anytime and anywhere. The office is anywhere we want it to be, because work is not a place, it is an activity. Below is a copy of Amit Jasuja's presentation (from OracleIDM).

    Read the article

  • GDC 2012: The Bleeding Edge of Open Web Tech

    GDC 2012: The Bleeding Edge of Open Web Tech (pre-recorded GDC content). Web browsers, from mobile to desktop devices, are in a constant state of growth, enabling ever richer and more pervasive games. This presentation by Google software engineer Vincent Scheib focuses on the latest developments in client-side web technologies, such as Web Sockets, WebGL, the File API, Mouse Lock, Gamepads, the Web Audio API and more. Speaker: Vincent Scheib. From: GoogleDevelopers. Time: 48:33

    Read the article

  • Double GPRS/EDGE speed with two mobile phones at once?

    - by Patrick
    Hi, I'm using my mobile phone to connect to the internet in an area where only GPRS/EDGE is available. To increase the connection speed I would like to use a technique called connection teaming, e.g. I would use two mobile phones / USB sticks to go online with different providers at the same time and let software distribute requests over both connections. My questions are: is there software available to do connection teaming? It sounds like Midpoint was able to do it, but it's over 7 years old and is unlikely to run on Windows 7. Has anybody tried this? Thanks a lot, Patrick

    Read the article
