Search Results

Search found 81445 results on 3258 pages for 'file command'.


  • Best way to implement a REST API with PHP on a WAMP web server

    - by DomingoSL
    Hello, I run a web server on Windows (WAMP). I want to know the best way to implement a REST API (a very simple one) in order to let a user trigger an action. Some background: I have programming experience; some time ago I developed a web server in VB6 that processed the queries, and when it found the command (http://serverIP/webform.php?cmd=run&item=any) it did something, but now I really want to build the solution on the WAMP server. Some people consider executing an exe when a command is detected a bad idea for security reasons, but this particular project is for the use of only a few trusted people who have no intention of hacking the server. So, what do you think? Remember: it's not a public API, it's for a few people and a few programs that will use it, and it's a very simple one, only one command using POST or GET. Thanks
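
    The shape being described is small: one endpoint, one command, and a shared secret rather than a public auth scheme. Below is a minimal sketch of that shape, written with Python's standard library purely for illustration (the token value and the run_item() helper are invented); the same structure maps directly onto a single PHP script on WAMP reading $_GET or $_POST.

        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.parse import urlparse, parse_qs

        API_TOKEN = "change-me"   # shared secret handed only to the trusted users (placeholder)

        def run_item(item):
            # Placeholder for whatever the command actually does on the server.
            return "ran %s" % item

        class CommandAPI(BaseHTTPRequestHandler):
            def do_GET(self):
                qs = parse_qs(urlparse(self.path).query)
                if qs.get("token", [""])[0] != API_TOKEN:
                    self.send_error(403, "bad token")
                elif qs.get("cmd", [""])[0] == "run":
                    self.send_response(200)
                    self.end_headers()
                    self.wfile.write(run_item(qs.get("item", [""])[0]).encode())
                else:
                    self.send_error(400, "unknown command")

        if __name__ == "__main__":
            HTTPServer(("", 8080), CommandAPI).serve_forever()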

    Read the article

  • Perl open call failing.

    - by benjamin button
    I am new to Perl. I am facing a problem while executing a small script: open cannot find the file which I pass as an argument. Please see below. The file is available:

        ls -l DLmissing_months.sql
        -rwxr-xr-x 1 tlmwrk61 aimsys 2842 May 16 09:44 DLmissing_months.sql

    My Perl script:

        #!/usr/local/bin/perl
        use strict;
        use warnings;
        my $this_line = "";
        my $do_next = 0;
        my $file_name = $ARGV[0];
        open( my $fh, '<', '$file_name') or die "Error opening file - $!\n";
        close($fh);

    Executing the Perl script:

        > new.pl DLmissing_months.sql
        Error opening file - No such file or directory

    What is the problem with my Perl script?

    Read the article

  • PATH and CLASSPATH in Windows 7 / Eclipse

    - by Richard Knop
    So I would like to set the PATH and CLASSPATH system variables so I can use the javac and java commands on the command line. I can compile and run Java programs in Eclipse, but I would also like to be able to run them from the command line. This is where I have Java installed:

        C:\Program Files (x86)\Java
            jdk1.6.0_20
            jre6

    And this is where Eclipse stores my Java projects:

        D:\java-projects
            HelloWorld
                bin
                    HelloWorld.class
                src
                    HelloWorld.java

    I have set up the PATH and CLASSPATH variables like this:

        PATH: C:\Program Files (x86)\Java\jdk1.6.0_20\bin
        CLASSPATH: D:\java-projects

    But it doesn't work. When I write "java HelloWorld" or "java HelloWorld.class" I get an error like this:

        Exception in thread "main" java.lang.NoClassDefFoundError: HelloWorld

    The error is longer; that's just the first line. How can I fix this? I'm mainly interested in being able to run compiled .class programs from the command line; I can do the compiling in Eclipse.

    Read the article

  • Serial port determinism

    - by Matt Green
    This seems like a simple question, but it is difficult to search for. I need to interface with a device over the serial port. In the event my program (or another) does not finish writing a command to the device, how do I ensure the next run of the program can successfully send a command? Example:

    1. The foo program runs and begins writing "A_VERY_LONG_COMMAND".
    2. The user terminates the program, but the program has only written "A_VERY".
    3. The user runs the program again, and the command is resent. Except the device sees "A_VERYA_VERY_LONG_COMMAND," which isn't what we want.

    Is there any way to make this more deterministic? Serial port programming feels very out-of-control due to issues like this.
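
    One common way to make this deterministic is to frame every command and let the device resynchronize on the frame terminator: on startup, clear anything still queued locally and send a bare terminator first, so a half-written command from a previous run is discarded before the real command goes out. A minimal sketch using the pyserial package (third-party, assumed installed, 3.x API); it also assumes the device treats a newline as end-of-command and ignores empty commands:

        import serial  # pyserial, assumed available

        def send_command(port_name, command, baud=9600):
            with serial.Serial(port_name, baud, timeout=1) as port:
                port.reset_output_buffer()   # drop anything a previous run left queued locally
                port.write(b"\n")            # terminator first: the device discards any
                                             # partial command it already received
                port.write(command.encode() + b"\n")   # the real, fully framed command
                port.flush()                 # block until the OS buffer is drained

        # send_command("COM3", "A_VERY_LONG_COMMAND")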

    Read the article

  • Cap deploy doesn't work all of a sudden; something to do with FactoryGirl and assets

    - by Jason Swett
    I've been cap deploying my app all throughout its development, and this last time I tried to deploy it, it didn't work. Here's what happened:

        * executing `deploy:assets:precompile'
        * executing "cd /var/www/oneteam/releases/20121006153136 && bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile"
          servers: ["electricsasquatch.com"]
          [electricsasquatch.com] executing command
       ** [out :: electricsasquatch.com] rake aborted!
       ** [out :: electricsasquatch.com] uninitialized constant OneTeam::Application::FactoryGirl
       ** [out :: electricsasquatch.com]
       ** [out :: electricsasquatch.com] (See full trace by running task with --trace)

    It looks like it failed on the deploy:assets:precompile command. I don't get why that command would have tried to do anything with FactoryGirl, though. Any ideas?

    Read the article

  • INNER JOIN vs LEFT JOIN performance in SQL Server

    - by Ekkapop
    I've created a SQL command that uses INNER JOIN on 9 tables; however, this command takes a very long time (more than five minutes). So a colleague suggested I change INNER JOIN to LEFT JOIN, because the performance of LEFT JOIN is better, which at first contradicted what I knew. After I changed it, the speed of the query improved significantly. I want to know why LEFT JOIN is faster than INNER JOIN here. My SQL command looks like this:

        SELECT * FROM A
        INNER JOIN B ON ...
        INNER JOIN C ON ...
        INNER JOIN D
        and so on

    Read the article

  • CommandManager Executed Events don't fire for custom ICommands

    - by Andre Luus
    The WPF CommandManager allows you to do the following (pseudo-ish code):

        <Button Name="SomeButton" Command="{Binding Path=ViewModelCommand}"/>

    And in the code-behind:

        private void InitCommandEvents()
        {
            CommandManager.AddExecutedEventHandler(this.SomeButton, SomeEventHandler);
        }

    The SomeEventHandler never gets called. To me this didn't seem like something very wrong to try and do, but if you consider what happens inside CommandManager.AddExecutedEventHandler, it makes sense why it doesn't. Add to that the fact that the documentation clearly states that the CommandManager only works with RoutedCommands. Nonetheless, this had me very frustrated for a while and led me to this question: What would you suggest is the best workaround for the fact that the CommandManager does not support custom ICommands? Especially if you want to add behavior around the execution of a command? For now, I fire the command manually in code-behind from the button click event.

    Read the article

  • Django app organization

    - by iHeartDucks
    I have been reading some Django tutorials and it seems like all the view functions have to go in a file called "views.py" and all the models go in "models.py". I fear that I might end up with a lot of view functions in my views.py file, and the same goes for models.py. Is my understanding of Django apps correct? Apps let us separate common functionality into different apps and keep the size of the views and models files to a minimum? For example, my project could contain an app for recipes (create, update, view, and search), a friends app, a comments app, and so on. Can I still move some of my view functions to a different file, so I only have the CRUD code in one single file?
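
    Django itself only requires that the views your URLconf points at be importable; it does not require a single views.py. A minimal sketch of splitting views into a package, using a hypothetical recipes app with made-up module and view names:

        # recipes/views/ replaces recipes/views.py:
        #
        #   recipes/views/__init__.py
        #   recipes/views/crud.py      (create / update / delete views)
        #   recipes/views/search.py    (search views)
        #
        # recipes/views/__init__.py -- re-export so "from recipes import views"
        # and existing URLconf references keep working unchanged.
        from .crud import create_recipe, update_recipe, delete_recipe
        from .search import search_recipes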

    Read the article

  • Make two servers talk to each other

    - by Maksim
    I have an application written in GWT and hosted on Google App Engine/Java. In this application the user will have the option to upload a video/audio/text file to the server. Those files could be big, up to 1 GB or so, and because GAE/J does not support large files I have to use another server to store them. This would be easy to implement if there were no cross-domain security restrictions in browsers. So what I'm thinking is to make the GAE server talk to my server (Glassfish, or any other Java server if needed) to tell it the URL of the file and, if possible, send the status of the uploaded file (what percentage has been uploaded) so I can show the status on the client's screen. Here is what I'm thinking of doing: when the user loads the GWT page that is served from GAE/J, he or she will upload the file to my server, then my server will send a response back to GAE, and GAE will send a response to the client. If this scenario is possible, what would be the best way to implement the GAE-to-Glassfish conversation?
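
    Server-to-server here is just plain HTTP: the file server calls back to a status URL on the GAE app with an upload id, and the client polls that status. The question is about Java and Glassfish; the sketch below is Python only because it shows the callback in a few lines, and the /upload-status path and upload_id parameter are invented for illustration.

        import json
        import urllib.request

        def report_progress(gae_base_url, upload_id, percent_done):
            """POST upload progress from the file server back to the GAE app."""
            payload = json.dumps({"upload_id": upload_id,
                                  "percent": percent_done}).encode("utf-8")
            req = urllib.request.Request(gae_base_url + "/upload-status",   # hypothetical handler
                                         data=payload,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status == 200

        # called from the upload loop on the file server, e.g.:
        # report_progress("https://myapp.appspot.com", "abc123", 42)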

    Read the article

  • If/else isn't working properly

    - by luckytaxi
    I have a validation function I'm using inside CodeIgniter:

        function valid_image()
        {
            if ( ($_FILES["file"]["type"] != "image/jpeg") || ($_FILES["file"]["type"] != "image/gif") )
            {
                $this->form_validation->set_message('valid_image', 'Wrong file type..');
                return false;
            }
            else
            {
                return true;
            }
        }

    With just the "image/jpeg" part in the if statement it works fine: if I try to upload anything other than a jpg file, it fails. If I run the code above, it fails with both jpg and gif files. And before someone says "why not use the upload class," I can't. I'm saving my pics directly into MongoDB, so the upload class doesn't help much.
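
    The condition above is the classic "(not A) or (not B)" trap: every upload fails at least one of the two inequality tests, so the rejecting branch always runs. Restated as an allow-list membership test (a minimal sketch in Python rather than PHP, purely to show the logic):

        ALLOWED_TYPES = {"image/jpeg", "image/gif"}

        def valid_image(mime_type):
            # Reject only when the type matches none of the allowed ones:
            # "not (A or B)", not "(not A) or (not B)".
            return mime_type in ALLOWED_TYPES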

    Read the article

  • How to verify mail origin?

    - by MrZombie
    I wish to code a little service where I can send an e-mail to a specific address used by my server, in order to send specific commands to the server. I'll check against a list of permitted e-mail addresses to make sure nobody unauthorized can send a command, but how do I make sure that, say, an e-mail sent by "[email protected]" really comes from "thezombie.net"? I thought about checking the header for the originating mail server's IP and pinging the domain to make sure it is the same, but would that be reliable? Example:

    1. Server receives a command from [email protected]
    2. [email protected] is authorized, proceed with checks
    3. Server checks "thezombie.net"'s IP from the header: W.X.Y.Z
    4. Server pings "thezombie.net" for its IP: A.B.C.D
    5. The IPs do not correspond, do not process the command

    Is there any better way to do that?
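
    A minimal sketch of the check described in the example, in Python: take the claimed domain from the sender address, resolve its A records, and compare them with the relay IP pulled from the Received header (the header parsing is assumed to have happened already). It inherits the weakness the question hints at, since a domain can legitimately send mail from machines that are not in its A records, so treat it as an illustration of the idea rather than a real authenticity check.

        import socket

        def sender_domain_matches(from_address, relay_ip):
            """Naive origin check: is the relay's IP one of the A records
            of the domain in the sender address?"""
            domain = from_address.rsplit("@", 1)[-1]
            try:
                _, _, ips = socket.gethostbyname_ex(domain)   # (name, aliases, ip list)
            except socket.gaierror:
                return False
            return relay_ip in ips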

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how to best pre-compile the .jspx and .jsff files in an ADF application. Thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do... Let Me List the Ways So forth to Google (other search engines are available)... which lead me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technololgy wise, it's somewhat out of date, but the one good point that it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now obsoleted by weblogic.appc so we'll use that instead.  Thanks to Steve for the pointer there. And So To APPC APPC has documentation - always a great place to start, and supports usage both from Ant via the wlappc task and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple.  The nice thing here, is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat.  So we're done right...? Not quite. The Devil is in the Detail  OK so I'm being overly dramatic but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.  Information You'll Need The following is based on the assumption that you have a stand-alone WLS install with the Application Development  Runtime installed and a suitable ADF enabled domain created. This could of course all be run off of a JDeveloper install as well 1. Your Weblogic home directory. Everything you need is relative to this so make a note.  In my case it's c:\builds\wls_ps4. 2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Have a look for any library references. Something like this: <library-ref>    <library-name>adf.oracle.domain</library-name> </library-ref>   Make a note of the library ref (adf.oracle.domain in this case) , you'll need that in a second. 3. Next open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again  make a note of the library references. 4. Now start the WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments. 5. For each of the libraries you noted down drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. 
oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out. 6. Finally you'll need the location of the adfsharebean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well. Now we're ready to go, and it's a simple matter of applying the information we have gathered into the relevant command line arguments for the utility. A Simple CMD File to Run APPC Here's the stub .cmd file I'm using on Windows to run this.

    @echo off
    REM Stub weblogic.appc Runner
    setlocal
    set WLS_HOME=C:\builds\WLS_PS4
    set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
    set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
    set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
    set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
    set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
    set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
    set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
    REM Set up the WebLogic Environment so appc can be found
    call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
    CLS
    REM Now compile away!
    java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
    endlocal

Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!) And So... In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.

    Read the article

  • Does Process.StartInfo.Arguments support a UTF-8 string?

    - by Patrick Klug
    Can you use a UTF-8 string as the Arguments for a StartInfo? I am trying to pass a UTF-8 string (in this case a Japanese string) to an application as a console argument. Something like this (this is just an example; cmd.exe would be a custom app):

        var process = new System.Diagnostics.Process();
        process.StartInfo.Arguments = "/K \"echo ????????\"";
        process.StartInfo.FileName = "cmd.exe";
        process.StartInfo.UseShellExecute = true;
        process.Start();
        process.WaitForExit();

    Executing this seems to lose the UTF-8 string, and all the target application sees is "echo ?????????". When executing this command directly on the command line (by pasting the arguments), the target application receives the string correctly, even though the command line itself doesn't seem to display it correctly. Do I need to do anything special to enable UTF-8 support in the arguments, or is this just not supported?

    Read the article

  • How to get a non-XML output using JDOM XSLTransformer?

    - by Neil McF
    Hello, I have an XML file which I'd like to transform into a non-XML (text) file based on an XSLT file. The code in both seems correct, and it works when I test manually, but I'm having a problem doing this programmatically. I'm using JDOM's XSLTransformer class to apply the XSLT to the XML, and it returns the result in the form of a JDOM Document. The problem here is that I can't seem to access anything in the Document, as it is not a proper XML document, and I get a "java.lang.IllegalStateException: Root element not set" error. Is there a better way within Java to obtain a non-XML file as the result of an XSLT transformation?
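
    The core issue is treating text output as if it were a result tree: when the stylesheet's output method is text there is no root element to build a Document around, so the processor should be asked for serialized output instead (in Java, for instance, javax.xml.transform with a StreamResult writes straight to a Writer or file). The same idea, sketched in Python with the lxml package (assumed installed) only because it keeps the example short:

        from lxml import etree

        def apply_text_stylesheet(xml_path, xslt_path, out_path):
            transform = etree.XSLT(etree.parse(xslt_path))
            result = transform(etree.parse(xml_path))
            # With <xsl:output method="text"/> there is no root element,
            # so serialize the result as a string instead of walking a tree.
            with open(out_path, "w", encoding="utf-8") as out:
                out.write(str(result))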

    Read the article

  • Accessing a network shared folder with a username and password in VB.NET

    - by Irene
    I am using the following code to read files from a network folder which is restricted to only one user:

        Shell("net use q: \\serveryname\foldername /user:admin pwrd", AppWinStyle.Hide, True, 10000)
        Process.Start(path)
        Shell("net use q: /delete")

    When I run this to open any PDF or JPG or any other file except Word/Excel/PowerPoint, everything works fine, but the problem comes when I access a Word file. In step one, I grant permission to access the Word file; in step two, the Word file is opened; in step three, I delete the Q: drive. The problem is that the Word file is still open, so I get a DOS window saying "some connections are still connected or searching some folders, do you want to force disconnect". Please help: how can I access a Word file (editable files) by providing a username and password from code, while at the same time the user does not get direct access to any other folders?

    Read the article

  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files. A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of composite) or External Reference (right-hand side of composite). This will give the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance. Where to Put the ECID When trying to determine where to store the ECID in the message, you basically have two options: Add a new optional element to your message schema. Leverage an existing element that is not used in your schema. The best scenario is that you are able to add the optional element to your message since trying to find an unused element will prove difficult in most situations. The schema will be holding the ECID value which looks something like the following: 11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc Configuring Composite Services/References Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relates to the message schema: The namespace for the element in the message. The XPath to the element in the message. To better understand this, let's look at an example for the following database table: When an Exposed Service is created via the Database Adapter Wizard in the composite, the following schema is created: For this example, the two Binding Properties we add to the ReadRow service in the composite are: <!-- Properties for the binding to propagate the ECID from the database table --> <property name="jca.ecid.nslist" type="xs:string" many="false">  xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property> <property name="jca.ecid.xpath" type="xs:string" many="false">  /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property> Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1) which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID and the payload contains everything except the ECID element/value. 
The only time the ECID is visible is when it is stored safely in the resource technology like the database, a file, or a queue. Simple Database/File/JMS Example This section contains a simplified example of how the ECID can propagate through a database table, a file, and JMS queue. The composite for the example looks like the following: The flow of this example is as follows: Invoke database insert using the insertwithecidbpelprocess_client_ep Service. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA Framework adds the ECID to the message prior to inserting. The ReadRow Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadRowBPELProcess is created and it is associated with the retried ECID. The ReadRowBPELProcess now writes the record to the file system via the File Adapter. The JCA Framework adds the ECID to the message prior to writing the message to file. The ReadFile Service retrieves the record from the file system and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of ReadFileBPELProcess is created and it is associated with the retried ECID. The ReadFileBPELProcess now enqueues the message via the JMS Adapter. The JCA Framework adds the ECID to the message prior to enqueuing the message. The DequeueMessage Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message. An instance of DequeueMessageBPELProcess is created and it is associated with the retried ECID. The logical flow ends. When viewing the Flow Trace in the Enterprise Manger, you will now see all the instances correlated via ECID: Please check back here when SOA Suite 11.1.1.7 is released for this example. With the example you can run it yourself and reinforce what has been shared in this blog via a hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.

    Read the article

  • Logging Mechanism using memory mapping technique

    - by Tushar
    "Just create a mapping of the file of the required size (CreateFileMapping or mmap), write the lines into the buffer and start over when the maximum number is reached." -- your answer to write-a-circular-file-in-c. I am also writing a LogWriter module. In this case I am mapping the whole file into memory using mmap(), and I am maintaining read and write pointers. I want to write the log to the file in append mode. When the logger service is started the first time, it appends the log entries correctly. But when the system is shut down and I run the service again, it doesn't append the data at the end. I want to preserve the write and read offsets even across a system shutdown. How can I achieve this? How do I find out how much data has already been written to the log file?
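
    One common pattern (sketched here with Python's mmap module rather than the Win32/C API the question mentions) is to reserve a small header at the front of the mapped file and keep the current write offset there: read it at startup instead of assuming zero, and rewrite it after every append. The 4 KiB file size and the 8-byte header layout below are arbitrary choices for illustration.

        import mmap
        import os
        import struct

        HEADER = struct.Struct("<Q")      # 8-byte little-endian write offset
        FILE_SIZE = 4096                  # illustrative fixed log size

        def open_log(path):
            if not os.path.exists(path):
                with open(path, "wb") as f:
                    f.write(b"\0" * FILE_SIZE)        # pre-size so the whole file maps
            f = open(path, "r+b")
            return f, mmap.mmap(f.fileno(), FILE_SIZE)

        def append_line(mm, line):
            (offset,) = HEADER.unpack_from(mm, 0)
            if offset < HEADER.size:                  # brand-new file: skip past the header
                offset = HEADER.size
            data = line.encode() + b"\n"
            if offset + len(data) > FILE_SIZE:        # wrap around, circular-log style
                offset = HEADER.size
            mm[offset:offset + len(data)] = data
            HEADER.pack_into(mm, 0, offset + len(data))   # persist the new write offset
            mm.flush()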

    Read the article

  • Using a macro defined in header files

    - by Neeraj
    I have a macro definition in a header file like this:

        // header.h
        #define ARRAY_SZ(a) ((int) sizeof(a)/sizeof(a[0]))

    It is defined in a header file which includes some more header files. Now I need to use this macro in a source file that has no other reason to include header.h or any of the headers it includes, so should I redefine the macro in my source file or simply include header.h? Will the latter approach affect the code size or compile time (I think yes), or the runtime (I think no)? Your advice on this, please.

    Read the article

  • Parameter for BPEL process

    - by Hubidubi
    Hi, I use OpenESB + BPEL. I would like to use some parameters to hold system-specific settings (paths, string constants, etc.). I tried to use a properties file that a simple Java class would read and expose, following this method (http://wiki.open-esb.java.net/Wiki.jsp?page=BPELSEHowToCallJavaMethods). The problem is that I can't create a properties file in a BPEL project (not supported), so I created the file by hand. But this file is not included in the deployed app. Is there any working way to include a properties file, or is there another method to set parameters on a BPEL process? Thanks, Hubidubi

    Read the article

  • Mercurial in Windows doesn't see .hgignore - why?

    - by AP257
    Windows fails to pick up my .hgignore file. I'm running Mercurial from the command line, and "hg status" shows lots of files in the ignored directories. The .hgignore file looks like this (there's no whitespace at the start of the file, or at the start of each line), and I've put it in the root directory of the repository:

        \.pyc$
        \.pyo$
        \.DS_Store
        \.Python
        \.installed.cfg
        ^bin$
        ^build$
        ^develop-eggs$
        ^eggs$
        ^include$
        ^lib$
        ^parts$
        ^pip-log.txt$
        ^web/localsettings.py$

    I've tried saving the file in ANSI and UTF-8, and it doesn't seem to make a difference. I know the file works OK on Linux; is there anything different about the paths on Windows?
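
    The patterns themselves are valid default (regexp) syntax, so when the same .hgignore works on Linux but not on Windows, a frequent culprit is the editor having saved it as "UTF-8 with BOM" (the three BOM bytes get glued to the first pattern) or as .hgignore.txt behind a hidden extension. That is a guess rather than a diagnosis, but the BOM case is easy to rule out with a few lines of Python:

        # Run from the repository root; prints whether a UTF-8 BOM starts the file.
        with open(".hgignore", "rb") as f:
            head = f.read(3)
        print("BOM present" if head == b"\xef\xbb\xbf" else "no BOM")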

    Read the article

  • How can I redirect the output of Perl's system() to a filehandle?

    - by syker
    With the open command in Perl, you can use a filehandle; however, I have trouble getting back the exit code with open. With the system command in Perl, I can get back the exit code of the program I'm running, but I want to redirect just STDOUT to some filehandle (not STDERR). My stdout is going to be a line-by-line output of key-value pairs that I want to insert into a map in Perl. That is why I want to redirect only the stdout from my Java program in Perl. Is that possible? Note: if I get errors, they are printed to stderr. One possibility is to check whether anything gets printed to stderr, so that I can quit the Perl script.
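
    The flow being asked for (capture the child's stdout line by line, leave stderr alone, and still get the exit code) is shown below as a Python subprocess sketch just to make it concrete; in Perl the usual tools for the same shape are open with a '-|' pipe plus checking $? after close, or a module such as IPC::Open3. The java command line and the key=value format are assumptions.

        import subprocess

        # Capture only stdout; stderr stays attached to the terminal.
        proc = subprocess.run(["java", "-jar", "emitter.jar"],   # hypothetical child program
                              stdout=subprocess.PIPE, text=True)

        pairs = {}
        for line in proc.stdout.splitlines():
            key, _, value = line.partition("=")   # assumes "key=value" lines
            pairs[key] = value

        print("exit code:", proc.returncode)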

    Read the article

  • BizTalk: Internals: the Partner Direct Ports and the Orchestration Chains

    - by Leonid Ganeline
    Partner Direct Port is one of the BizTalk hidden gems. It opens simple ways to the several messaging patterns. This article based on the Kevin Lam’s blog article. The article is pretty detailed but it still leaves several unclear pieces. So I have created a sample and will show how it works from different perspectives. Requirements We should create an orchestration chain where the messages should be routed from the first stage to the second stage. The messages should not be modified. All messages has the same message type. Common artifacts Source code can be downloaded here. It is interesting but all orchestrations use only one port type. It is possible because all ports are one-way ports and use only one operation. I have added a B orchestration. It helps to test the sample, showing all test messages in channel. The Receive shape Filter is empty. A Receive Port (R_Shema1Direct) is a plain Direct Port. As you can see, a subscription expression of this direct port has only one part, the MessageType for our test schema: A Filer is empty but, as you know, a link from the Receive shape to the Port creates this MessageType expression. I use only one Physical Receive File port to send a message to all processes. Each orchestration outputs a Trace.WriteLine(“<Orchestration Name>”). Forward Binding This sample has three orchestrations: A_1, A_21 and A_22. A_1 is a sender, A_21 and A_22 are receivers. Here is a subscription of the A_1 orchestration: It has two parts A MessageType. The same was for the B orchestration. A ReceivePortID. There was no such parameter for the B orchestration. It was created because I have bound the orchestration port with Physical Receive File port. This binding means the PortID parameter is added to the subscription. How to set up the ports? All ports involved in the message exchange should be the same port type. It forces us to use the same operation and the same message type for the bound ports. This step as absolutely contra-intuitive. We have to choose a Partner Orchestration parameter for the sending orchestration, A_1. The first strange thing is it is not a partner orchestration we have to choose but an orchestration port. But the most strange thing is we have to choose exactly this orchestration and exactly this port.It is not a port from the partner, receive orchestrations, A_21 or A_22, but it is A_1 orchestration and S_SentFromA_1 port. Now we have to choose a Partner Orchestration parameter for the received orchestrations, A_21 and A_22. Nothing strange is here except a parameter name. We choose the port of the sender, A_1 orchestration and S_SentFromA_1 port. As you can see the Partner Orchestration parameter for the sender and receiver orchestrations is the same. Testing I dropped a test file in a file folder. There we go: A dropped file was received by B and by A_1 A_1 sent a message forward. A message was received by B, A_21, A_22 Let’s look at a context of a message sent by A_1 on the second step: A MessageType part. It is quite expected. A PartnerService, a ParnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports.     Now let’s see a subscription of the A_21 and A_22 orchestrations. Now it makes sense. That’s why we have chosen such a strange value for the Partner Orchestration parameter of the sending orchestration. Inverse Binding This sample has three orchestrations: A_11, A_12 and A_2. A_11 and A_12 are senders, A_2 is receiver. How to set up the ports? 
All ports involved in the message exchange should be the same port type. It forces us to use the same operation and the same message type for the bound ports. This step as absolutely contra-intuitive. We have to choose a Partner Orchestration parameter for a receiving orchestration, A_2. The first strange thing is it is not a partner orchestration we have to choose but an orchestration port. But the most strange thing is we have to choose exactly this orchestration and exactly this port.It is not a port from the partner, sent orchestrations, A_11 or A_12, but it is A_2 orchestration and R_SentToA_2 port. Now we have to choose a Partner Orchestration parameter for the sending orchestrations, A_11 and A_12. Nothing strange is here except a parameter name. We choose the port of the sender, A_2 orchestration and R_SentToA_2 port. Testing I dropped a test file in a file folder. There we go: A dropped file was received by B, A_11 and by A_12 A_11 and A_12 sent two messages forward. The messages were received by B, A_2 Let’s see what was a context of a message sent by A_1 on the second step: A MessageType part. It is quite expected. A PartnerService, a ParnerPort, an Operation. All those parameters were set up in the Partner Orchestration parameter on both bound ports. Here is a subscription of the A_2 orchestration. Models I had a hard time trying to explain the Partner Direct Ports in simple terms. I have finished with this model: Forward Binding Receivers know a Sender. Sender doesn’t know Receivers. Publishers know a Subscriber. Subscriber doesn’t know Publishers. 1 –> 1 1 –> M Inverse Binding Senders know a Receiver. Receiver doesn’t know Senders. Subscribers know a Publisher. Publisher doesn’t know Subscribers. 1 –> 1 M –> 1 Notes   Orchestration chain It’s worth to note, the Partner Direct Port Binding creates a chain opened from one side and closed from another. The Forward Binding: A new Receiver can be added at run-time. The Sender can not be changed without design-time changes in Receivers. The Inverse Binding: A new Sender can be added at run-time. The Receiver can not be changed without design-time changes in Senders.

    Read the article

  • Connection Timeout exception for a query using ADO.Net

    - by dragon
    Update: It looks like the query does not throw any timeout; the connection is timing out. This is sample code for executing a query. Sometimes, while executing time-consuming queries, it throws a timeout exception. I cannot use either of these techniques: 1) increase the timeout, or 2) run it asynchronously with a callback - this needs to run in a synchronous manner. Please suggest any other techniques to keep the connection alive while executing a time-consuming query.

        private static void CreateCommand(string queryString, string connectionString)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand command = new SqlCommand(queryString, connection);
                command.Connection.Open();
                command.ExecuteNonQuery();
            }
        }

    Read the article

  • Why doesn't the Inno Setup compiler set the version info correctly from Hudson?

    - by Tim
    If I run the Inno Setup compiler from a command line/batch file, it creates an exe with the version information in the file name. However, when I run it from Hudson (same command line) I don't get the version information. Perhaps I am missing something. Is this a known issue? This is how I am doing it in the .iss script file:

        #define FileVerStr GetFileVersion(SrcApp)

    EDIT: The env vars are all set for all users - not just my login - so the service has access to everything that the command-line build does.
    EDIT: See my answer for a resolution of this.

    Read the article

  • I keep getting KeyError: 'tried' whenever I try to run the Django dev server from a remote machine

    - by Spikie
    I am running Django on Python 2.6.1, and I started the Django dev server like this:

        manage.py runserver 192.0.0.1:8000

    Then I tried to connect to the dev server at http://192.0.0.1:8000/ and keep getting this message on the remote computer:

        Traceback (most recent call last):
          File "C:\Python26\Lib\site-packages\django\core\servers\basehttp.py", line 279, in run
            self.result = application(self.environ, self.start_response)
          File "C:\Python26\Lib\site-packages\django\core\servers\basehttp.py", line 651, in __call__
            return self.application(environ, start_response)
          File "C:\Python26\lib\site-packages\django\core\handlers\wsgi.py", line 241, in __call__
            response = self.get_response(request)
          File "C:\Python26\lib\site-packages\django\core\handlers\base.py", line 115, in get_response
            return debug.technical_404_response(request, e)
          File "C:\Python26\Lib\site-packages\django\views\debug.py", line 247, in technical_404_response
            tried = exception.args[0]['tried']
        KeyError: 'tried'

    What am I doing wrong? It seems to work OK if I browse to http://192.0.0.1:8000/ on the computer that runs the Django web server and has that IP, 192.0.0.1:8000.

    Read the article
