Search Results

Search found 19788 results on 792 pages for 'remote host'.

  • Ajax call to parent window after form submission

    - by David
    Hi all, Pardon the complicated title. Here's my situation: I'm working on a Grails app, and using jQuery for some of the more complex UI stuff. The way the system is set up, I have an item, which can have various files (user-supplied) associated with it. On my Item/show view, there is a link to add a file. This link pops up a jQuery modal dialog, which displays my file upload form (a remote .gsp). So, the user selects the file and enters a comment, and when the form is submitted, the dialog gets closed, and the list of files on the Item/show view is refreshed. I was initially accomplishing this by adding onclick="javascript:window.parent.$('#myDialog').dialog('close');" to my submit button. This worked fine, but when submitting some larger files, I end up with a race condition where the dialog closes and the file list is refreshed before the new file is saved, and so the list of files is out of date (the file still gets saved properly). So my question is, what is the best way to ensure that the dialog is not closed until after the form submit operation completes? I've tried using the <g:formRemote tag in Grails, and closing the dialog in the "after" attribute (according to the Grails docs, the script is called after form submission), but I receive an error (taken from FireBug) stating that window.parent.$('#myDialog').dialog is not a function Is this a simple JavaScript/Grails syntax issue that I'm missing here, or am I going about this entirely wrong? Thanks so much for your time and assistance!
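
    A minimal sketch of one way to avoid the race, assuming the jQuery Form plugin (ajaxSubmit) is available for the iframe-based file upload; the ids #uploadForm and the refreshFileList() helper are placeholders, not names from the original app. The dialog is closed only inside the success callback, i.e. after the server has finished saving the file:

        // Hypothetical sketch: close the dialog only after the upload round-trip finishes.
        // Assumes the jQuery Form plugin is loaded for iframe-based file uploads.
        $('#uploadForm').submit(function () {
            $(this).ajaxSubmit({
                success: function () {
                    // Runs only once the server has processed the upload.
                    window.parent.$('#myDialog').dialog('close');
                    window.parent.refreshFileList();   // placeholder for the existing refresh logic
                }
            });
            return false;   // prevent the normal (non-AJAX) submit
        });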

  • How to handle custom Java exception in Flex app.

    - by mico
    Hello, we are using BlazeDS as a proxy between Flex and Java. The approach is the same as in (http://www.flexpasta.com/index.php/2008/05/16/exception-handling-with-blazeds-and-flex/) Java exception declaration: public class FlexException extends RuntimeException { private String name = "John"; public FlexException(String message) { super(message); } public String getName() { return name; } } Then, we are throwing it: public void testMethod(String str) throws Exception { throw new FlexException("Custom exception"); } Flex part: private function faultHandler(event:FaultEvent):void { var errorMessage:ErrorMessage = event.message as ErrorMessage; trace("error++"); } and the remote object is instantiated here: <mx:RemoteObject id="mySample" destination="mySample" channelSet="{cs1}" fault="faultHandler(event)" /> But in event.fault I get "Server.Processing" and event.faultString equals "There was an unhandled failure on the server. Custom exception". How can I receive the data specified in the exception's properties? The BlazeDS log is similar to the log that was mentioned in the comment: [BlazeDS] 11:28:13.906 [DEBUG] Serializing AMF/HTTP response Version: 3 (Message #0 targetURI=/2/onStatus, responseUR|-) (Typed Object #0 'flex.messaging.messages.ErrorMessage') headers = (Object #1) rootCause = null body = null correlationId = "2F1126D7-5658-BE40-E27C-7B43F3C5DCDD" faultDetail = null faultString = "Login required before authorization can proceed." clientId = "C4F0E77C-3208-ECDD-1497-B8D070884830" timeToLive = 0.0 destination = "books" timestamp = 1.204658893906E12 extendedData = null faultCode = "Client.Authentication" messageId = "C4F0E77C-321E-6FCE-E17D-D9F1C16600A8" So the question is: why is rootCause null? How can I get the actual Exception object, not just the string 'Custom exception'?

  • Service reference addition issue in visual studio 2010

    - by user293072
    I am currently working on an application that allows reverse geocoding using Silverlight + Bing Maps. The thing is that I want to add a reference to the reverse geocoding service provided on MSDN (http://msdn.microsoft.com/en-us/library/cc879136.aspx), i.e. http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl, but when I try to add the reference in VS2010, I get the following error: The document at the url http://dev.virtualearth.net/webservices/v1/metadata/geocodeservice/geocodeservice.wsdl was not recognized as a known document type. The error message from each known type may help you fix the problem: Report from 'XML Schema' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'. Report from 'DISCO Document' is ''', hexadecimal value 0x1F, is an invalid character. Line 1, position 1.'. Report from 'WSDL Document' is 'There is an error in XML document (1, 1).'. '', hexadecimal value 0x1F, is an invalid character. Line 1, position 1. Metadata contains a reference that cannot be resolved: 'http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl'. Content Type application/soap+xml; charset=utf-8 was not supported by service http://dev.virtualearth.net/webservices/v1/geocodeservice/geocodeservice.svc?wsdl. The client and service bindings may be mismatched. The remote server returned an error: (415) Unsupported Media Type. If the service is defined in the current solution, try building the solution and adding the service reference again. It is worth mentioning that I can access the service URL from the browser (with a 'no style information' warning). I am aware that there are other reverse geocoding services out there, but I am somewhat forced by certain circumstances to use only Microsoft-related components/services. Please help :)

  • XML String into a DataGridView (C#)

    - by Justin Daniels
    I am currently working with a webservice to pull a report about users in a remote support system. After pulling my report and receiving the result, I am given the following string back by the method: <report><header><field id="0">Source</field><field id="1">Session ID</field><field id="2">Date</field><field id="3">Name</field><field id="24">Technician Name</field><field id="25">Technician ID</field></header><data><row><field id="0">Email</field><field id="1">55037806</field><field id="2">4/13/2010 2:28:06 AM</field><field id="3">Bill Gates</field><field id="24">John</field><field id="25">1821852</field></row><row><field id="0">Telephone</field><field id="1">55034548</field><field id="2">4/13/2010 12:59:44 AM</field><field id="3">Steve Jobs</field><field id="24">John</field><field id="25">1821852</field></row></data></report> After receiving this string, I need to take it and display the actual data in a datagridview. I've tried putting it into an XMLDocument then reading that, but it seems to keep failing. Just interested in another set of eyes :) Application is written in C# in VS2010.
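
    For what it's worth, here is a minimal sketch (not the asker's code) that turns the report string into a DataTable with LINQ to XML and binds it to a DataGridView; header field ids are matched to the field ids in each row:

        // Sketch: parse the <report> string into a DataTable and bind it to the grid.
        using System.Data;
        using System.Linq;
        using System.Xml.Linq;

        DataTable BuildTable(string reportXml)
        {
            var doc = XDocument.Parse(reportXml);
            var table = new DataTable();

            // One column per <field> in the header, keyed by its id attribute.
            var headers = doc.Root.Element("header").Elements("field")
                             .ToDictionary(f => (string)f.Attribute("id"), f => (string)f);
            foreach (var name in headers.Values)
                table.Columns.Add(name);

            // One DataRow per <row>, mapping each field id back to its column name.
            foreach (var row in doc.Root.Element("data").Elements("row"))
            {
                var dataRow = table.NewRow();
                foreach (var field in row.Elements("field"))
                    dataRow[headers[(string)field.Attribute("id")]] = (string)field;
                table.Rows.Add(dataRow);
            }
            return table;
        }

        // Usage: dataGridView1.DataSource = BuildTable(reportString);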

  • automated email downloading and threading similar messages

    - by Michael
    Okay, here it is: I have built a C# console app that downloads email, saves attachments, and stores the subject, from, to, and body to a MS SQL database. I use the aspNetPOP3 component to do this. I have built a front-end ASP.NET application to search and view the messages. Works great. Next steps (this is where I need help): Now I want my users (of the ASP.NET app) to reply to this message, send the email to the originator, and thread any additional replies back and forth from that original message (like Basecamp). This would allow my end users not to have to log in to a system; they just continue using email (our users can as well). The question is: what should I use to determine if messages are related? The subject line, I think, is a bad approach. I believe the best method I've seen so far is the way Basecamp does it, but I'm not sure how that is done. Here is a real example of the reply-to address from a Basecamp email (I've changed the host name): [email protected] Basecamp is obviously pre-pending a tracking id to the email address; however, when I try this with my mail service, it's rejected. Is this the best approach, is there a way I can accomplish this, is there a better approach, or even a better email component tool? Thanks, Mike
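
    One common pattern (an assumption here, not something confirmed in the original post) is "plus addressing": reply-to addresses like reply+1234@mail.example.com, where the part after the plus is your own tracking id. The mail server has to support the +tag syntax; if the host rejects it, a catch-all mailbox on a subdomain (reply-1234@mail.example.com) works the same way. A rough C# sketch, with the domain and format purely hypothetical:

        // Sketch: build and parse tracking reply-to addresses (domain and format are hypothetical).
        using System.Text.RegularExpressions;

        static string BuildReplyAddress(int messageId)
        {
            // e.g. "reply+1234@mail.example.com" ties the reply back to message 1234
            return string.Format("reply+{0}@mail.example.com", messageId);
        }

        static int? ExtractMessageId(string toAddress)
        {
            // Accepts either reply+1234@... (plus addressing) or reply-1234@... (catch-all style)
            var match = Regex.Match(toAddress, @"^reply[+\-](\d+)@", RegexOptions.IgnoreCase);
            return match.Success ? (int?)int.Parse(match.Groups[1].Value) : null;
        }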

  • SL 3 navigation not working!

    - by Silver Gun
    Hey Guys I converted all my existing Silverlight app UserControls to Pages so I could use the Navigation Framework. Anyway so I created a UserControl called MyFrame, which would host all the pages. In my App.xaml.cs I have the following to make sure that MyFrame is loaded when the App loads: private void Application_Startup(object sender, StartupEventArgs e) { this.RootVisual = new MyFrame(); } My UriMapper class resides in App.xaml and looks like the following: <navcore:UriMapper x:Key="uriMapper"> <navcore:UriMapping Uri="Login" MappedUri="Login.xaml"> </navcore:UriMapper> Within my 'MyFrame' class, I have the following <StackPanel Orientation="Horizontal"> <StackPanel Orientation="Vertical"> <HyperlinkButton Tag="Login" Content="Login" Click="HyperlinkButton_Click" /> </StackPanel> <StackPanel Orientation="Vertical"> <navigation:Frame x:Name="ContentFrame" Style="{StaticResource ContentFrameStyle}" /> </StackPanel> </StackPanel> And within the callback for my HyperlinkButton's event handler, I have the following: private void HyperlinkButton_Click(object sender, RoutedEventArgs e) { ContentFrame.Navigate(new Uri((sender as HyperlinkButton).Tag.ToString(), UriKind.Relative)); } The Login.xaml file is in my root folder (right under Project). This navigation does not seem to work! The exception I get reads like so: Navigation is only supported to relative URIs that are fragments, or begin with '/', or which contain ';component/'. Parameter name: uri The Login page does not load. There is no problem with Login.xaml as when I set this.RootVisual = new Login(); the page loads just fine. I also tried setting the NavigateUri attribute of the HyperlinkButton to "Login." No cigar. I'll appreciate any help! Thanks a lot in advance
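
    One thing worth checking (an educated guess, not a confirmed diagnosis): that exception is typically thrown when the Frame never gets the UriMapper, so "Login" is treated as a literal relative URI instead of being mapped to Login.xaml. A sketch of wiring the mapper straight onto the frame, assuming the uriMapper resource lives in App.xaml under Application.Resources:

        <!-- Sketch: point the navigation frame at the UriMapper resource explicitly -->
        <navigation:Frame x:Name="ContentFrame"
                          Style="{StaticResource ContentFrameStyle}"
                          UriMapper="{StaticResource uriMapper}" />

        <!-- The mapping element should also be self-closed and map to a rooted path:
             <navcore:UriMapping Uri="Login" MappedUri="/Login.xaml" /> -->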

  • How to compile Python scripts for use in FORTRAN?

    - by Vincent Poirier
    Hello, Although I found many answers and discussions about this question, I am unable to find a solution particular to my situation. Here it is: I have a main program written in FORTRAN. I have been given a set of python scripts that are very useful. My goal is to access these python scripts from my main FORTRAN program. Currently, I simply call the scripts from FORTRAN as such: CALL SYSTEM ('python pyexample.py') Data is read from .dat files and written to .dat files. This is how the python scripts and the main FORTRAN program communicate to each other. I am currently running my code on my local machine. I have python installed with numpy, scipy, etc. My problem: The code needs to run on a remote server. For strictly FORTRAN code, I compile the code locally and send the executable to the server where it waits in a queue. However, the server does not have python installed. The server is being used as a number crunching station between universities and industry. Installing python along with the necessary modules on the server is not an option. This means that my “CALL SYSTEM ('python pyexample.py')” strategy no longer works. Solution?: I found some information on a couple of things in thread http://stackoverflow.com/questions/138521/is-it-feasible-to-compile-python-to-machine-code Shedskin, Psyco, Cython, Pypy, Cpython API These “modules”(? Not sure if that's what to call them) seem to compile python script to C code or C++. Apparently not all python features can be translated to C. As well, some of these appear to be experimental. Is it possible to compile my python scripts with my FORTRAN code? There exists f2py which converts FORTRAN code to python, but it doesn't work the other way around. Any help would be greatly appreciated. Thank you for your time. Vincent PS: I'm using python 2.6 on Ubuntu
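
    One workaround, sketched under the assumption that the local machine (not the server) can run PyInstaller: freeze each script into a self-contained executable so the server needs no Python install, and keep the existing CALL SYSTEM glue. Whether this is acceptable depends on the server's libc being compatible with the build machine's.

        # On the local machine, which already has Python + numpy/scipy
        # (assumption: PyInstaller is installed there and supports this Python version)
        pyinstaller --onefile pyexample.py     # produces a standalone binary in dist/pyexample

        # Ship dist/pyexample next to the FORTRAN executable, then in the FORTRAN source:
        #   CALL SYSTEM ('./pyexample')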

  • TLS with SNI in Java clients

    - by ftrotter
    There is an ongoing discussion on the security and trust working group for NHIN Direct regarding the IP-to-domain mapping problem that is created with traditional SSL. If an HISP (as defined by NHIN Direct) wants to host thousands of NHIN Direct "Health Domains" for providers, then it will be an "artificially inflated cost" to have to purchase an IP for each of those domains. Because Apache and OpenSSL have recently released TLS with support for the SNI extension, it is possible to use SNI as a solution to this problem on the server side. However, if we decide that we will allow server implementations of the NHIN Direct transport layer to support TLS+SNI, then we must require that all clients support SNI too. OpenSSL-based clients should do this by default, and one could always use stunnel to implement a TLS+SNI-aware client proxy if your given programming language's SSL implementation does not support SNI. It appears that native Java applications using OpenJDK do not yet support SNI, but I cannot get a straight answer out of that project. I know that there are OpenSSL Java libraries available but I have no idea if that would be considered viable. Can you give me a "state of the art" summary of where TLS+SNI support is for Java clients? I need a Java implementer's perspective on this.
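
    For reference, a client-side sketch of setting SNI explicitly through JSSE. This API (javax.net.ssl.SNIHostName / SSLParameters.setServerNames) only appeared in later JDKs, which is consistent with the observation that the OpenJDK of the time did not do SNI; treat it as where things eventually landed rather than something usable on that OpenJDK. The host name is a placeholder:

        import java.util.Collections;
        import javax.net.ssl.*;

        public class SniProbe {
            public static void main(String[] args) throws Exception {
                String host = args.length > 0 ? args[0] : "direct.example.org"; // hypothetical host
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
                    SSLParameters params = socket.getSSLParameters();
                    // Ask the server for the certificate of this particular domain (SNI).
                    params.setServerNames(
                        Collections.<SNIServerName>singletonList(new SNIHostName(host)));
                    socket.setSSLParameters(params);
                    socket.startHandshake();
                    System.out.println("Negotiated with: " + socket.getSession().getPeerPrincipal());
                }
            }
        }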

  • HttpModule.Init - safely add HttpApplication.BeginRequest handler in IIS7 integrated mode

    - by Paul Smith
    My question is similar but not identical to: http://stackoverflow.com/questions/1123741/why-cant-my-host-softsyshosting-com-support-beginrequest-and-endrequest-event (I've also read the mvolo blog referenced therein) The goal is to successfully hook HttpApplication.BeginRequest in the IHttpModule.Init event (or anywhere internal to the module), using a normal HttpModule integrated via the system.webServer config, i.e. one that doesn't: invade Global.asax or override the HttpApplication (the module is intended to be self-contained & reusable, so e.g. I have a config like this): <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="TheHttpModule" /> <add name="TheHttpModule" type="Company.HttpModules.TheHttpModule" preCondition="managedHandler" /> So far, any strategy I've tried to attach a listener to HttpApplication.BeginRequest results in one of two things, symptom 1 is that BeginRequest never fires, or symptom 2 is that the following exception gets thrown on all managed requests, and I cannot catch & handle it from user code: Stack Trace: [NullReferenceException: Object reference not set to an instance of an object.] System.Web.PipelineModuleStepContainer.GetEventCount(RequestNotification notification, Boolean isPostEvent) +30 System.Web.PipelineStepManager.ResumeSteps(Exception error) +1112 System.Web.HttpApplication.BeginProcessRequestNotification(HttpContext context, AsyncCallback cb) +113 System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +616 Commenting out app.BeginRequest += new EventHandler(this.OnBeginRequest) in Init stops the exception of course. Init does not reference the Context or Request objects at all. I have tried: Removed all references to HttpContext.Current anywhere in the project (still symptom 1) Tested removing all code from the body of my OnBeginRequest method, to ensure the problem wasn't internal to the method (= exception) Sniffing the stack trace and only calling app.BeginRequest+=... when if the stack isn't started by InitializeApplication (= BeginRequest not firing) Only calling app.BeginRequest+= on the second pass through Init (= BeginRequest not firing) Anyone know of a good approach? Is there some indirect strategy for hooking Application_Start within the module (seems unlikely)? Another event which a) one can hook from a module's constructor or Init method, and b) which is subsequently a safe place to attach BeginRequest event handlers? Thanks much
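
    For comparison, here is a bare-bones module of the shape that is normally sufficient in IIS7 integrated mode. This is not claiming to be the fix, just a known-good baseline to diff against; if even this minimal version reproduces the NullReferenceException, the culprit is more likely something outside the module, e.g. a leftover duplicate registration under system.web/httpModules. The header name is a placeholder:

        // Minimal baseline IHttpModule that hooks BeginRequest in Init.
        using System;
        using System.Web;

        namespace Company.HttpModules
        {
            public class TheHttpModule : IHttpModule
            {
                public void Init(HttpApplication app)
                {
                    // Attach once per pooled HttpApplication instance; IIS calls Init
                    // on every pooled application object, which is expected.
                    app.BeginRequest += OnBeginRequest;
                }

                private void OnBeginRequest(object sender, EventArgs e)
                {
                    var context = ((HttpApplication)sender).Context;
                    context.Response.AppendHeader("X-TheHttpModule", "hit"); // placeholder work
                }

                public void Dispose() { }
            }
        }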

  • Running multiple instances of tomcat in eclipse WTP

    - by lisak
    Hey, SCENARIO: 10 CATALINA_BASEs, each with its own configuration (always the same port number 8080, but 10 different IP/hostnames on one host via virtual IPs). I created a server in WTP and picked the "Use the custom location" option in the server configuration in Eclipse. New configuration files are created in workspace/Server/server-name-config/. I set up the server path and deploy path for my catalina base (not the internal .metadata one). After I started it, the new configuration files overwrote the original catalina-base/conf files I had there - I was glad, it should be like this - but after I made changes in the Eclipse config files in workspace/Server/server-name-config/ and restarted the server, the changes didn't appear in the original files in CATALINA_BASE/conf. What the hell is that? So I set CATALINA_BASE/conf/server.xml to a faulty configuration and restarted Tomcat from Eclipse, and it worked! It took the configuration from /Server/server-name-config/server.xml. Then I deleted CATALINA_BASE/conf/server.xml and it said that there is no server.xml in the catalina base! How is that possible? I don't understand why the Eclipse WTP developers made such tight integration. There should just be symbolic links in /Server/server-name-config/ pointing to CATALINA_BASE/conf/ ... now there is a weird system which is totally unpredictable. The changes in /Server/server-name-config/ are not reflected in CATALINA_BASE/conf ... which is where the standard bootstrap.jar and the other Catalina classloaders and classes build the server, engine and other objects with their particular settings. Moreover, the CATALINA_BASEs could then be used outside Eclipse. The second problem: I'm setting up various things in CATALINA_BASE/bin/startup.sh and setenv.sh, which is easy because I can use bash for it. Is modifying VM parameters in the "Open launch configuration" settings then the only way to do it in Eclipse? Sorry for such a huge pile of questions, but I'm annoyed that it seems much better not to use Eclipse WTP for this because it is very poorly designed, and it's a shame because it could spare me a lot of time. And using the internal .metadata/ instances is an even more terrifying way than the one I described.

  • php mailer character-encoding problem

    - by Holian
    Hello! I am trying to use PHPMailer to send registration, activation, etc. mail to users... require("class.phpmailer.php"); $mail -> charSet = "UTF-8"; $mail = new PHPMailer(); $mail->IsSMTP(); $mail->Host = "smtp.mydomain.org"; $mail->From = "[email protected]"; $mail->SMTPAuth = true; $mail->Username ="username"; $mail->Password="passw"; //$mail->FromName = $header; $mail->FromName = mb_convert_encoding($header, "UTF-8", "auto"); $mail->AddAddress($emladd); $mail->AddAddress("[email protected]"); $mail->AddBCC('[email protected]', 'firstadd'); $mail->Subject = $sub; $mail->Body = $message; $mail->WordWrap = 50; if(!$mail->Send()) { echo 'Message was not sent.'; echo 'Mailer error: ' . $mail->ErrorInfo; } The $message contains Latin characters. Unfortunately, all the webmail clients (gmail, webmail.mydomain.org, emailaddress.domain.xx) use different encodings. How can I force UTF-8 encoding so that my mail shows up exactly the same in every mailbox? I tried to convert the mail header with mb_convert_encoding(), but with no luck. Thank you.
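
    One detail that jumps out (pointing it out as a likely cause, not a guaranteed one): $mail->charSet is assigned before the PHPMailer object is created, and the property is spelled CharSet. A sketch of the relevant part with the order and casing fixed, plus an explicit transfer encoding:

        require("class.phpmailer.php");

        $mail = new PHPMailer();
        $mail->CharSet  = "UTF-8";            // must be set on the object, after it exists
        $mail->Encoding = "quoted-printable"; // keeps 8-bit characters intact across mail servers
        $mail->IsSMTP();
        // ... Host / auth / addresses as before ...
        $mail->Subject = $sub;      // plain UTF-8 string; PHPMailer encodes the header itself
        $mail->Body    = $message;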

  • How do I debug a HTTP 502 error?

    - by Bialecki
    I have a Python Tornado server sitting behind a nginx frontend. Every now and then, but not every time, I get a 502 error. I look in the nginx access log and I see this: 127.0.0.1 - - [02/Jun/2010:18:04:02 -0400] "POST /a/question/updates HTTP/1.1" 502 173 "http://localhost/tagged/python" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3" and in the error log: 2010/06/02 18:04:02 [error] 14033#0: *1700 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: _, request: "POST /a/question/updates HTTP/1.1", upstream: "http://127.0.0.1:8888/a/question/updates", host: "localhost", referrer: "http://localhost/tagged/python" I don't think any errors show up in the Tornado log. How would you go about debugging this? Is there something I can put in the Tornado or nginx configuration to help debug this? EDIT: In addition, I get a fair number of 504, gateway timeout errors. Is it possible that the Tornado instance is just busy or something?

  • ruby gem not found although it is installed

    - by Eimantas
    I found some similar problems here on SO, but none seem to match my case (sorry if I overlooked). Here's my problem: I installed oauth-plugin gem to ruby gems dir, but trying to use it in rails app tells me that it's not being found. Here's the output of relevant commands: Instalation % s gem install oauth-plugin Successfully installed oauth-plugin-0.3.14 1 gem installed Installing ri documentation for oauth-plugin-0.3.14... Installing RDoc documentation for oauth-plugin-0.3.14... gem which oauth-plugin output: % gem which oauth-plugin /usr/lib/ruby/gems/1.8/gems/oauth-plugin-0.3.14/lib/oauth-plugin.rb gem env output: % gem env RubyGems Environment: - RUBYGEMS VERSION: 1.3.6 - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.2.0] - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8 - RUBY EXECUTABLE: /usr/bin/ruby - EXECUTABLE DIRECTORY: /usr/bin - RUBYGEMS PLATFORMS: - ruby - x86-darwin-10 - GEM PATHS: - /usr/lib/ruby/gems/1.8 - /Users/eimantas/.gem/ruby/1.8 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => true - :bulk_threshold => 1000 - :gem => ["--no-ri", "--no-rdoc"] - :sources => ["http://gems.ruby.lt/", "http://rubygems.org/"] - REMOTE SOURCES: - http://gems.ruby.lt/ - http://rubygems.org/ Doing ls -l /usr/lib/ruby shows this: % ls -l /usr/lib/ruby lrwxr-xr-x 1 root wheel 76 Aug 14 2009 /usr/lib/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby And the gem in question is in intended location. This is not a single gem that is not being found by rubygems (although it's located where it should be). Any guidance towards the solution is much appreciated.

  • TeamCity and pending Git merge branch commit keeps build with failed tests

    - by Vladimir
    We use TeamCity for continuous integration and Git for source control. Generally it works pretty well - convenient, modern, and it gives us quick feedback when tests fail. There is a strange behavior related to the specifics of Git merges. Here are the steps of the case: First developer pulls from the master repo. Second developer pulls from the master repo. First developer makes commit A locally. Second developer makes commit B locally. Second developer pushes commit B. First developer wants to push commit A but is unable to, because he has to pull commit B first. First developer pulls from the remote repository. First developer pushes commit A and the generated merge-branch commit. The history of commits in the master repo is then: B (second developer), A (first developer), merge branch (first developer). Now let's assume that the second developer fixed some failing tests in his commit B. What TeamCity will do is the following: Commit B arrives - TeamCity makes build #1 with all tests passed. Commit A arrives - TeamCity makes build #2 (without commit B) and the test bar becomes red! TeamCity thinks the pending "Merge Branch" commit doesn't contain any changes (any new files) - but it actually does contain the merge of commit B - so TeamCity doesn't want to make a new build here and turn the tests green. So there are two problems: 1. In our case, failed tests come back in the second commit (commit A). 2. TeamCity doesn't want to make a new build and turn the tests green again. Does anybody know how to fix both of these problems? I'm looking for a reasonable, general approach.
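
    A common way to sidestep this (a general workflow suggestion, not TeamCity-specific configuration) is to avoid creating the merge commit at all, so the first developer's commit lands on top of B and TeamCity builds it with B's fixes included:

        # Instead of: git pull && git push   (which creates the "merge branch" commit)
        git pull --rebase origin master   # replay local commit A on top of the fetched commit B
        git push origin master            # history is now linear: B, then A' (A rebased onto B)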

  • Trying to compile x264 and ffmpeg for iPhone - "missing required architecture arm in file"

    - by jtrim
    I'm trying to compile x264 for use in an iPhone application. I see there are instructions on how to compile ffmpeg for use on the platform here: http://lists.mplayerhq.hu/pipermail/ffmpeg-devel/2009-October/076618.html , but I can't seem to find anything this complete for compiling x264 on the iPhone. I've found this source tree: http://gitorious.org/x264-arm that seems to have support for the ARM platform. Here is my config line: ./configure --cross-prefix=/usr/bin/ --host=arm-apple-darwin10 --extra-cflags="-B /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.2.sdk/usr/lib/ -I /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.2.sdk/usr/lib/" ...and inside configure I'm using the gas-preprocessor script (first link above) as my assembler: gas-preprocessor.pl gcc When I start compiling, it chunks away for a little while, then it spits out these warnings and a huge list of undefined symbols: ld: warning: option -s is obsolete and being ignored ld: warning: -force_cpusubtype_ALL will become unsupported for ARM architectures ld: warning: in /usr/lib/crt1.o, missing required architecture arm in file ld: warning: in /usr/X11R6/lib/libX11.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libm.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libpthread.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libgcc_s.1.dylib, missing required architecture arm in file ld: warning: in /usr/lib/libSystem.dylib, missing required architecture arm in file Undefined symbols: My guess would be that the problem has to do with the "missing required architecture arm in file" warning...any ideas?

  • pylons on production server fedora 8

    - by stormdrain
    I'm interested in learning some python, and thought Pylons would be a good starting point (after spending 2 days trying to get django working -- to no avail). I have an Amazon EC2 instance with Fedora 8 on it. It is a bare-bones install. I am halfway through my second day of trying to get it to work. I have mod_wsgi installed. I have Apache (though that's a later task to tackle). I have easy_install, paster is working fine; basically all of the pre-requisites mentioned throughout the Pylons docs. I can't for the life of me get the thing to work. And I can't seem to find a coherent walkthrough anywhere that lists all the steps necessary. There is tons of info out there, but it is all scattered. Wsgi this, python that. Google, google, google... "47 million results found for 'socket.error:(lol, 'Yous a goofs')". So, this is my latest attempt: apachectl -k stop cd /home/ paster create -t pylons test [blah blah.. ok] cd test nano development.ini [hmm, last time I changed the host from 127.0.0.1 to my domain name or url, it threw an error like socket.error: (99, 'Cannot assign requested address')... I'll just leave it] [open port 5000 on firewall] paster serve development.ini [firefox-url:5000] Firefox can't establish a connection to the server Doing these steps locally works as expected. This is just a test to see if I can get it to work at all, which I can't. If I get it to work, then comes the task of getting it to work with apache. My madness is that I'd like to play around a little developing and deploying before diving into a full-fledged project. So far: self, I am disappoint.
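
    One thing worth trying (a guess based on the symptoms, since the same steps work locally): the default development.ini binds to 127.0.0.1, which is unreachable from outside the EC2 instance, while binding to the public hostname fails because that address does not exist on the box itself. Binding to all interfaces avoids both problems:

        # development.ini (only the relevant bits)
        [server:main]
        use = egg:Paste#http
        # listen on every interface, not just loopback
        host = 0.0.0.0
        port = 5000

    With port 5000 already open in the firewall/security group, `paster serve development.ini` should then be reachable at http://<the instance's public DNS>:5000.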

  • Detecting Xml namespace fast

    - by Anna Tjsoken
    Hello there, This may be a very trivial problem I'm trying to solve, but I'm sure there's a better way of doing it, so please go easy on me. I have a bunch of XSD files that are internal to our application, and we have about 20-30 XML files that implement datasets based off those XSDs. Some XML files are small (<100Kb), others are about 3-4Mb, with a few being over 10Mb. I need to find a way of working out what namespace these XML files are in, in order to provide (something like) intellisense based off the XSD. The implementation of this is not an issue - another developer has written the code for this. But I'm not sure what the best (and fastest!) way of detecting the namespace is without the use of XmlDocument (which does a full parse). I'm using C# 3.5, and the documents come through as a Stream (some are remote files). All the files are *.xml (I could detect it if it were extension based), but unfortunately the XML namespace is the only way. Right now I've tried XmlDocument, but I've found it to be inefficient and slow, as the larger documents sit waiting to be parsed (even the 100Kb docs). public string GetNamespaceForDocument(Stream document); Something like the above is my method signature - overloads include string for "content". Would a compiled RegEx pattern be good? How does Visual Studio manage this so efficiently? Another colleague has told me to find a fast XML parser in C/C++, parse the content, and have a stub that gives back the namespace since it's slower in .NET - is this a good idea?
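
    A sketch of the streaming approach (plain XmlReader usage, nothing exotic): read forward only until the first element start tag and return its namespace, so a 10Mb document costs no more than a 1Kb one.

        // Sketch: pull only the root element's namespace from a stream, without a full parse.
        using System.IO;
        using System.Xml;

        public string GetNamespaceForDocument(Stream document)
        {
            var settings = new XmlReaderSettings
            {
                IgnoreComments = true,
                IgnoreProcessingInstructions = true,
                IgnoreWhitespace = true
            };
            using (var reader = XmlReader.Create(document, settings))
            {
                while (reader.Read())
                {
                    if (reader.NodeType == XmlNodeType.Element)
                        return reader.NamespaceURI;   // namespace of the document (root) element
                }
            }
            return null;   // no element found / not well-formed enough to tell
        }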

  • Creating cookieless application on development machine with asp.net

    - by zaladane
    I tried posting this on Server Fault with no luck, so I am trying here. I am thinking about setting up a new domain to host static content on my website and have it cookieless, just like Stack Overflow does with their static domain. So before going ahead and buying the domain and setting it up, I wanted to test it on my development machine first under localhost (I have to mention that I am planning on having IIS running on my new domain for the static files). I therefore created a new application under IIS and disabled session state and forms authentication. When my main application needs resources like css, images and js, I use the path to the "static" application where they are hosted. The problem is that when I look at the request and the response for the requested files, they still have the session_id cookie defined as well as the asp.net authentication cookie. Is it at all possible to accomplish what I am trying to do on a development machine, or do I have to just go ahead and purchase the new domain, which hopefully will make things right? I tried to read about cookieless domains but can't figure out what I might be missing.
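
    Two separate things seem to be in play here. The static app can stop issuing cookies via its own web.config (sketch below), but cookies already set by the main app are sent by the browser to every application on the same host name, so on localhost they will keep appearing until the static content lives on a different host name - a second entry in the hosts file (e.g. static.local, a made-up name) is enough to test that without buying anything. A minimal web.config sketch for the static application:

        <configuration>
          <system.web>
            <!-- don't issue ASP.NET_SessionId or auth cookies from this app -->
            <sessionState mode="Off" />
            <authentication mode="None" />
            <roleManager enabled="false" />
          </system.web>
        </configuration>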

  • How do I get uri of HTTP packet with winpcap?

    - by Gtker
    Based on this article I can get all incoming packets. /* Callback function invoked by libpcap for every incoming packet */ void packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data) { struct tm *ltime; char timestr[16]; ip_header *ih; udp_header *uh; u_int ip_len; u_short sport,dport; time_t local_tv_sec; /* convert the timestamp to readable format */ local_tv_sec = header->ts.tv_sec; ltime=localtime(&local_tv_sec); strftime( timestr, sizeof timestr, "%H:%M:%S", ltime); /* print timestamp and length of the packet */ printf("%s.%.6d len:%d ", timestr, header->ts.tv_usec, header->len); /* retireve the position of the ip header */ ih = (ip_header *) (pkt_data + 14); //length of ethernet header /* retireve the position of the udp header */ ip_len = (ih->ver_ihl & 0xf) * 4; uh = (udp_header *) ((u_char*)ih + ip_len); /* convert from network byte order to host byte order */ sport = ntohs( uh->sport ); dport = ntohs( uh->dport ); /* print ip addresses and udp ports */ printf("%d.%d.%d.%d.%d -> %d.%d.%d.%d.%d\n", ih->saddr.byte1, ih->saddr.byte2, ih->saddr.byte3, ih->saddr.byte4, sport, ih->daddr.byte1, ih->daddr.byte2, ih->daddr.byte3, ih->daddr.byte4, dport); } But how do I extract URI information in packet_handler?
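
    The URI only exists inside the HTTP payload, so the handler has to walk past the IP and TCP headers (the sample above is UDP-oriented) and then read the request line, e.g. "GET /index.html HTTP/1.1". A rough sketch, assuming Ethernet + IPv4 + TCP and the same winpcap typedefs as the article; it needs <string.h> for memcmp/memchr, and the way it computes the TCP header length from raw bytes is an assumption about your struct layout:

        /* Sketch: locate the TCP payload and print the request URI if the packet looks like HTTP. */
        void print_http_uri(const struct pcap_pkthdr *header, const u_char *pkt_data)
        {
            ip_header *ih = (ip_header *)(pkt_data + 14);          /* skip Ethernet header  */
            u_int ip_len  = (ih->ver_ihl & 0xf) * 4;               /* IP header length      */
            const u_char *tcp = (const u_char *)ih + ip_len;
            u_int tcp_len = ((tcp[12] & 0xf0) >> 4) * 4;           /* TCP data offset field */
            const u_char *payload = tcp + tcp_len;
            int payload_len = header->caplen - 14 - ip_len - tcp_len;

            if (payload_len > 4 &&
                (memcmp(payload, "GET ", 4) == 0 || memcmp(payload, "POST ", 5) == 0))
            {
                const u_char *uri = (const u_char *)memchr(payload, ' ', payload_len) + 1;
                const u_char *end = memchr(uri, ' ', payload + payload_len - uri);
                if (end)
                    printf("URI: %.*s\n", (int)(end - uri), uri);   /* path portion only */
            }
        }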

  • Executing bat file and returning the prompt

    - by Lieven Cardoen
    I have a problem with cruisecontrol where an ant scripts executes a bat file that doesn't give me the prompt back. As a result, the project in cruisecontrol keeps on bulding forever until I restart cruisecontrol. How can I resolve this? It's a startup.bat from wowza (Streaming Server) that I'm executing: @echo off call setenv.bat if not %WMSENVOK% == "true" goto end set _WINDOWNAME="Wowza Media Server 2" set _EXESERVER= if "%1"=="newwindow" ( set _EXESERVER=start %_WINDOWNAME% shift ) set CLASSPATH="%WMSAPP_HOME%\bin\wms-bootstrap.jar" rem cacls jmxremote.password /P username:R rem cacls jmxremote.access /P username:R rem NOTE: Here you can configure the JVM's built in JMX interface. rem See the "Server Management Console and Monitoring" chapter rem of the "User's Guide" for more information on how to configure the rem remote JMX interface in the [install-dir]/conf/Server.xml file. set JMXOPTIONS=-Dcom.sun.management.jmxremote=true rem set JMXOPTIONS=%JMXOPTIONS% -Djava.rmi.server.hostname=192.168.1.7 rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.port=1099 rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.authenticate=false rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.ssl=false rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.password.file= "%WMSCONFIG_HOME%/conf/jmxremote.password" rem set JMXOPTIONS=%JMXOPTIONS% -Dcom.sun.management.jmxremote.access.file= "%WMSCONFIG_HOME%/conf/jmxremote.access" rem log interceptor com.wowza.wms.logging.LogNotify - see Javadocs for ILogNotify %_EXESERVER% "%_EXECJAVA%" %JAVA_OPTS% %JMXOPTIONS% -Dcom.wowza.wms.AppHome="%WMSAPP_HOME%" -Dcom.wowza.wms.ConfigURL="%WMSCONFIG_URL%" -Dcom.wowza.wms.ConfigHome="%WMSCONFIG_HOME%" -cp %CLASSPATH% com.wowza.wms.bootstrap.Bootstrap start :end
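
    If the hang is simply that startup.bat never returns (Wowza keeps running in the foreground), one workaround on the Ant side is to launch it detached so the <exec> task does not wait for it. A sketch, assuming an <exec> task is what currently runs the script and that ${wowza.bin.dir} is a placeholder for wherever the bat lives; note that spawn="true" requires not using the task's output/result attributes:

        <!-- Sketch: fire-and-forget launch so CruiseControl gets its prompt back -->
        <exec executable="cmd" dir="${wowza.bin.dir}" spawn="true">
            <arg value="/c"/>
            <arg value="startup.bat"/>
            <arg value="newwindow"/>
        </exec>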

  • forkpty - socket

    - by Alexxx
    Hi, I'm trying to develop a simple "telnet/server" daemon which has to run a program on each new socket connection. This part is working fine. But I have to associate my new process with a pty, because this process has some terminal capabilities (like a readline). The code I've developed is (where socketfd is the new socket file descriptor for the new incoming connection): int masterfd, pid; const char *prgName = "..."; char *arguments[10] = ....; if ((pid = forkpty(&masterfd, NULL, NULL, NULL)) < 0) perror("FORK"); else if (pid) return pid; else { close(STDOUT_FILENO); dup2(socketfd, STDOUT_FILENO); close(STDIN_FILENO); dup2(socketfd, STDIN_FILENO); close(STDERR_FILENO); dup2(socketfd, STDERR_FILENO); if (execvp(prgName, arguments) < 0) { perror("execvp"); exit(2); } } With that code, the stdin/stdout/stderr file descriptors of "prgName" are associated with the socket (when looking with ls -la /proc/PID/fd), and so the terminal capabilities of this process don't work. A test with a connection via ssh/sshd to the remote device, executing prgName "locally" (under the ssh connection), shows that the stdin/stdout/stderr fds of "prgName" are associated with a pty (and so the terminal capabilities of the process work fine). What am I doing wrong? How do I associate my socketfd with the pty (created by forkpty)? Thanks, Alex
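
    The usual shape of the fix (a sketch of the standard pattern, not a drop-in patch): don't dup the socket over the child's stdio at all - forkpty() has already attached the child to the slave pty - and instead have the parent shuttle bytes between socketfd and masterfd, e.g. with select():

        /* Sketch: parent-side relay between the client socket and the pty master. */
        #include <sys/select.h>
        #include <unistd.h>

        static void relay(int socketfd, int masterfd)
        {
            char buf[4096];
            for (;;) {
                fd_set fds;
                FD_ZERO(&fds);
                FD_SET(socketfd, &fds);
                FD_SET(masterfd, &fds);
                if (select((socketfd > masterfd ? socketfd : masterfd) + 1,
                           &fds, NULL, NULL, NULL) < 0)
                    break;
                if (FD_ISSET(socketfd, &fds)) {            /* network -> program's terminal */
                    ssize_t n = read(socketfd, buf, sizeof buf);
                    if (n <= 0 || write(masterfd, buf, n) != n) break;
                }
                if (FD_ISSET(masterfd, &fds)) {            /* program's output -> network   */
                    ssize_t n = read(masterfd, buf, sizeof buf);
                    if (n <= 0 || write(socketfd, buf, n) != n) break;
                }
            }
        }

        /* In the parent branch (pid > 0): relay(socketfd, masterfd);
           In the child branch: just execvp(prgName, arguments) - no dup2 of socketfd. */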

  • https not redirecting to mongrel upstream

    - by kip
    Normal http is working fine for me with nginx and mongrel, however when i attempt to use https I am directed to the "welcome to nginx page". http { # passenger_root /opt/passenger-2.2.11; # passenger_ruby /usr/bin/ruby1.8; include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; upstream mongrel { server 00.000.000.000:8000; server 00.000.000.000:8001; } server { listen 443; server_name domain.com; ssl on; ssl_certificate /etc/ssl/localcerts/domain_combined.crt; ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; location / { root /current/public/; index index.html index.htm; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://mongrel; } } }

  • MySQL data -> PHP arrays -> JavaScript (or jQuery): how can I pass the data with JSON?

    - by Alex
    I am starting with a new site (it's my first one) and I am running into big trouble! I wrote this code: <?php include("misc.inc"); $cxn=mysqli_connect($host,$user,$password,$database) or die("couldn't connect to server"); $query="SELECT DISTINCT country FROM stamps"; $result=mysqli_query($cxn,$query) or die ("couldn't execute query"); $numberOfRows=mysqli_num_rows($result); for ($i=0;$i<$numberOfRows;$i++){ $row=mysqli_fetch_assoc($result); extract($row); $a=json_encode($row); $a=$a.","; echo $a; } ?> and the output is as follows: {"country":"liechtenstein"},{"country":"romania"},{"country":"jugoslavia"},{"country":"polonia"}, which should be correct JSON output... How can I get it now in jQuery? I tried with $.getJSON but I am not able to make it work properly. I don't yet want to pass the data to a DIV or something similar in HTML. Alex
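
    What that loop currently emits is a comma-separated series of objects with a trailing comma, not a single JSON value, so $.getJSON will reject it. A sketch of the usual fix (collect the rows and encode once), keeping the same connection code:

        <?php
        include("misc.inc");
        $cxn = mysqli_connect($host, $user, $password, $database)
            or die("couldn't connect to server");

        $result = mysqli_query($cxn, "SELECT DISTINCT country FROM stamps")
            or die("couldn't execute query");

        $rows = array();
        while ($row = mysqli_fetch_assoc($result)) {
            $rows[] = $row;                       // collect every row first
        }

        header('Content-Type: application/json');
        echo json_encode($rows);                  // one valid JSON array: [{"country":...},...]
        ?>

    On the page, $.getJSON('countries.php', function(data) { ... }) then receives a plain array it can loop over with $.each; the countries.php name is just a placeholder for wherever this script lives.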

  • Twitter Typeahead only shows 5 results

    - by user3685388
    I'm using the Twitter Typeahead version 0.10.2 autocomplete but I'm only receiving 5 results from my JSON result set. I can have 20 or more results but only 5 are shown. What am I doing wrong? var engine = new Bloodhound({ name: "blackboard-names", prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false } }, remote: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY", ajax: { contentType: "json", cache: false }, }, datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'), queryTokenizer: Bloodhound.tokenizers.whitespace }); var promise = engine.initialize(); promise .done(function() { console.log("done"); }) .fail(function() { console.log("fail"); }); $("#Impersonate").typeahead({ minLength: 2, highlight: true}, { name: "blackboard-names", displayKey: 'value', source: engine.ttAdapter() }).bind("typeahead:selected", function(obj, datum, name) { console.log(obj, datum, name); alert(datum.id); }); Data: [ { "id": "1", "value": "Adams, Abigail", "tokens": [ "Adams", "A", "Ad", "Ada", "Abigail", "A", "Ab", "Abi" ] }, { "id": "2", "value": "Adams, Alan", "tokens": [ "Adams", "A", "Ad", "Ada", "Alan", "A", "Al", "Ala" ] }, { "id": "3", "value": "Adams, Alison", "tokens": [ "Adams", "A", "Ad", "Ada", "Alison", "A", "Al", "Ali" ] }, { "id": "4", "value": "Adams, Amber", "tokens": [ "Adams", "A", "Ad", "Ada", "Amber", "A", "Am", "Amb" ] }, { "id": "5", "value": "Adams, Amelia", "tokens": [ "Adams", "A", "Ad", "Ada", "Amelia", "A", "Am", "Ame" ] }, { "id": "6", "value": "Adams, Arik", "tokens": [ "Adams", "A", "Ad", "Ada", "Arik", "A", "Ar", "Ari" ] }, { "id": "7", "value": "Adams, Ashele", "tokens": [ "Adams", "A", "Ad", "Ada", "Ashele", "A", "As", "Ash" ] }, { "id": "8", "value": "Adams, Brady", "tokens": [ "Adams", "A", "Ad", "Ada", "Brady", "B", "Br", "Bra" ] }, { "id": "9", "value": "Adams, Brandon", "tokens": [ "Adams", "A", "Ad", "Ada", "Brandon", "B", "Br", "Bra" ] } ]
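
    The 5-suggestion cap is very likely the default limit rather than anything in the data: in typeahead.js 0.10.x the Bloodhound suggestion engine returns at most `limit` datums per query, and the default is 5. A sketch of the one-line change (20 is an arbitrary example value; other options stay as before):

        var engine = new Bloodhound({
            name: "blackboard-names",
            limit: 20,   // default is 5 - raise it to get more suggestions back
            prefetch: { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY" },
            remote:   { url: "../CFC/Login.cfc?method=Search&returnformat=json&term=%QUERY" },
            datumTokenizer: Bloodhound.tokenizers.obj.whitespace('value'),
            queryTokenizer: Bloodhound.tokenizers.whitespace
        });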

  • Header Setup in SOAP with ASP.NET 3.5 WCF

    - by Adam
    I'm pretty new to SOAP so go easy on me. I'm trying to setup a SOAP service that accepts the following header format: <soap:Header> <wsse:Security> <wsse:UsernameToken wsu:Id='SecurityToken-securityToken'> <wsse:Username>Username</wsse:Username> <wsse:Password>Password</wsse:Password> <wsu:Created>Timestamp</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> The application I'm incorporating this service into is an ASP.NET 3.5 web application and I've already setup a SOAP endpoint using WCF. I've setup a basic service to make sure the WCF works and it works fine (disregarding the header). I heard that the above format follows WS-Security so I added WSHttpBinding in the web.config: <service name="Nexternal.Service.XMLTools.VNService" behaviorConfiguration="VNServiceBehavior"> <!--The first endpoint would be picked up from the confirg this shows how the config can be overriden with the service host--> <endpoint address="" binding="wsHttpBinding" contract="Nexternal.Service.XMLTools.IVNService"/> </service> I downloaded a test harness (soapUI) and pasted in a test message with the above header and it came back with a 400 Bad Request error. ...for what it's worth, I'm running Visual Studio 2008 using IIS7. I feel like I'm going in circles so any help would be awesome. Thanks in advance.
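
    A plain wsHttpBinding endpoint won't accept that header shape out of the box, which lines up with the 400. The closest built-in match for a wsse:UsernameToken carried over HTTPS is a customBinding with UserNameOverTransport authentication - sketched below as a starting point, not a verified configuration; the binding name and the HTTPS requirement are assumptions, and the wsu:Created timestamp handling may still need adjusting to match the client exactly:

        <bindings>
          <customBinding>
            <binding name="UserNameOverHttps">
              <!-- Expects a wsse:Security header with a UsernameToken, carried over HTTPS -->
              <security authenticationMode="UserNameOverTransport" />
              <textMessageEncoding messageVersion="Soap11" />
              <httpsTransport />
            </binding>
          </customBinding>
        </bindings>
        ...
        <endpoint address=""
                  binding="customBinding"
                  bindingConfiguration="UserNameOverHttps"
                  contract="Nexternal.Service.XMLTools.IVNService" />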
