Search Results

Search found 33406 results on 1337 pages for 'client library'.


  • WCF Mono - BasicHttpBinding with SSL

    - by TheNextman
    I'm trying to port an existing WCF client application to run on Linux under Mono. Right now I'm testing everything out, figuring out what works on Mono and what doesn't. The client makes a super simple call over basicHttpBinding. It works great until I enable SSL (that is, specify BasicHttpSecurityMode.Transport in the binding). Running on .NET on Windows, it works great. Running on Mono 2.6 on Ubuntu 9.10, I get the following error:

        Exception in async operation: System.Net.WebException: Error getting response stream (Write: The authentication or decryption has failed.): SendFailure --- System.IO.IOException: The authentication or decryption has failed. --- Mono.Security.Protocol.Tls.TlsException: Invalid certificate received from server. Error code: 0xffffffff800b010a

    I've read the Mono security FAQ here: http://www.mono-project.com/FAQ:_Security; however, the SSL certificate on the server is from a root CA (a purchased certificate) issued by Equifax Secure Certificate Authority. I ran the TlsTest tool on the Ubuntu install against the .svc URL and there are no problems/errors. I can also hit the service fine in Firefox (no security warnings). What am I missing? Thanks in advance, Richard
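
    A common workaround in this situation (the Mono security FAQ suggests importing the trusted roots with the mozroots tool) is to override certificate validation in code while diagnosing. The sketch below is only an illustration of that callback approach, not the confirmed fix for this question; accepting every certificate is unsafe outside a test environment.

        // Hedged sketch: relax TLS certificate validation for testing under Mono.
        // A real fix is to trust the CA (e.g. via mozroots) or validate the chain explicitly.
        using System;
        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        public static class MonoSslWorkaround
        {
            public static void Install()
            {
                ServicePointManager.ServerCertificateValidationCallback =
                    (sender, certificate, chain, sslPolicyErrors) =>
                    {
                        // For diagnosis only: log the policy errors, then accept the certificate.
                        Console.WriteLine("SSL policy errors: {0}", sslPolicyErrors);
                        return true;
                    };
            }
        }

    Calling MonoSslWorkaround.Install() before opening the channel would at least confirm whether certificate validation is the only thing failing.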

    Read the article

  • Batch program not running correctly in Windows 7

    - by Jennifer Heidelberger
    I am currently trying to figure out how to get a batch file to run correctly in Windows 7. I have looked on the Internet and have not had much success finding any useful information on the issue I am encountering. The batch file opens a window that allows students to test on a TDMS server created by ETS eCBT (the test is a CLEP exam). The batch file is supposed to open the workstation for students to use; it looks like it loads, but the Welcome/Login screen never appears as it should.

    WSK_LOAD.BAT:

        @echo off
        rem--------------------------------------------------------------------
        rem !!! DO NOT REMOVE OR MODIFY THIS FILE !!!
        rem !!! THIS FILE IS USED BY THE eCBT SYSTEM !!!
        rem--------------------------------------------------------------------
        SET ECBT_DEFAULT_SERVER_NAME=WR-TESTING1
        SET ECBT_BATCH_HOME=C:\ETSBATCH
        SET ECBT_HOME=\\WR-TESTING1\tdms
        set ECBT_LOGFILE=%ECBT_BATCH_HOME%\wsk.log
        SET ECBT_CLIENT_VERSION=4.0
        rem----------------------------------------------------------------------
        if exist %ECBT_HOME%\client\bin\wks.bat goto avail
        echo Cannot access %ECBT_HOME%!
        echo Attempting to open the share …
        echo If you see the share window, please close it to proceed …
        rem------------------------------------------------------------------------
        :avail
        %ECBT_HOME%\client\bin\wks.bat

    I have tried everything I can think of: Run as Administrator, moved the files to run from the hard drive, made sure all files and folders associated with the program were shared with users and computers, ran the Windows 7 compatibility troubleshooter (which says the program does not contain an .exe file to run), and re-wrote the file. I know it is connecting to the TDMS server, as I can see it on the server. The only thing it does not do is bring up the window necessary to log in to the testing server. The window opens like it should but does not produce the login boxes. Any and all help is appreciated, Jennifer

    Read the article

  • g++ C++0x enum class Compiler Warnings

    - by Travis G
    I've been refactoring my horrible mess of C++ type-safe pseudo-enums to the new C++0x type-safe enums because they're way more readable. Anyway, I use them in exported classes, so I explicitly mark them to be exported:

        enum class __attribute__((visibility("default"))) MyEnum : unsigned int
        {
            One = 1,
            Two = 2
        };

    Compiling this with g++ yields the following warning:

        type attributes ignored after type is already defined

    This seems very strange since, as far as I know, that warning is meant to prevent actual mistakes like:

        class __attribute__((visibility("default"))) MyClass { };
        class __attribute__((visibility("hidden"))) MyClass;

    Of course, I'm clearly not doing that, since I have only marked the visibility attributes at the definition of the enum class and I'm not re-defining or declaring it anywhere else (I can duplicate this error with a single file). Ultimately, I can't make this bit of code actually cause a problem, save for the fact that, if I change a value and re-compile the consumer without re-compiling the shared library, the consumer passes the new values and the shared library has no idea what to do with them (although I wouldn't expect that to work in the first place). Am I being way too pedantic? Can this be safely ignored? I suspect so, but at the same time, having this warning prevents me from compiling with -Werror, which makes me uncomfortable. I would really like to see this problem go away.

    Read the article

  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin. I now have, under vendor/plugins/my_plugin/test/my_plugin_test.rb, a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the Ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    This should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?

    Read the article

  • Find Port Number and Domain Name to connect to Hive Table

    - by user1419563
    I am new to Hive, MapReduce and Hadoop. I am using PuTTY to connect to Hive and access records in the tables. What I did is: I opened PuTTY, typed ares-ingest.vip.host.com as the host name and clicked Open. Then I entered my username and password, followed by a few commands to get to the Hive SQL prompt. Below is what I did:

        $ bash
        bash-3.00$ hive
        Hive history file=/tmp/rjamal/hive_job_log_rjamal_201207010451_1212680168.txt
        hive> set mapred.job.queue.name=hdmi-technology;
        hive> select * from table LIMIT 1;

    So my question is: I was trying to connect to the Hive tables using the SQuirreL SQL Client, with the connection URL jdbc:hive://ares-ingest.vip.host.com:10000/default. Whenever I try to connect with these attributes, I always get: Hive: Could not establish connection to ares-ingest.vip.host.com:10000/default: java.net.ConnectException: Connection timed out: connect. It might be that I am using the wrong port number or domain name here. Is there any way, from the command prompt, to find out these two things, i.e. which domain name and port number (where the Hive server is running) I should use to connect to the Hive table from the SQuirreL SQL Client? As I understand it, the host and port are determined by where the Hive server is running.

    Read the article

  • Looking for a communication framework for Delphi

    - by Ryan
    I am looking for a communication framework for Delphi. There are many communication frameworks for other languages (WCF, ECF and so forth), but I have never found one for Delphi so far. Can anybody who knows of one give me an idea? These are the requirements I need:

    - Building an application (server or client) without caring how the two endpoints communicate with each other. Imagine that we use a mailbox for exchanging messages; the communication appears transparent.
    - Supports extending the communication protocol. We often need to exchange messages between two devices, but the communication protocol is not a public or general one, so we need to extend the framework to implement a complete communication protocol for receiving or sending a message.
    - Supports asynchronous and synchronous communication.
    - Supports extending the transmission protocol. The transmission protocol can be implemented with WinSock, pipes, COM, Windows messages, mailslots and so forth.

    In the client application, we could write a code snippet like the following:

        var
          server: TDelphiCommunicationServer;
          session: ICommunicationSession;
          request, response: IMessage;
        begin
          session := server.CreateSession('IP', Port);
          request := TLoginRequest.Create;
          session.SynSendMessage(request);
          session.WaitForMessage(response, INFINITE);
          .......
        end;

    In the above snippet, TLoginRequest implements the message interface.

    Read the article

  • codeigniter & cjax framework, fatal error class 'CI_Controller' not found

    - by Martin
    I'm having a weird error with CodeIgniter 2.1.3 and the latest Cjax for CodeIgniter. The weird thing is, when I download the latest CodeIgniter and the latest Cjax framework for CodeIgniter, copy them to my friend's server, and call domain.com/ajax.php?test/test2 to show the test ajax examples, it works like a breeze; but when I do this on my server, I get a server error (even though we both have the same PHP version and such). The server then writes this error to the error log file:

        PHP Fatal error: Class 'CI_Controller' not found in /hosting/www/domain.com/www/application/response/test.php on line 3

    Now, I've read through Stack Overflow posts from people having this problem and solving it by changing the constructor and extending CI_Controller instead of Controller. But I already do that - I mean, it's the basic example that is supposed to work without touching the code, and it does, just not on my domain for some reason. ajax.php from the Cjax framework for CodeIgniter should load the controller named test from the folder response and call the function test2. The actual file, test.php, looks like this:

        class Test extends CI_Controller {

            function __construct()
            {
                parent::__construct();
            }

            /**
             *
             * ajax.php?test/test/a/b/c
             *
             * @param unknown_type $a
             * @param unknown_type $b
             * @param unknown_type $c
             */
            function test($a = null, $b = null, $c = null)
            {
                $this->load->view('test', array('data' => $a . ' ' . $b . ' ' . $c));
            }

            /**
             * ajax.php?test/test2
             *
             * Here we are testing out the javascript library.
             *
             * Note: the library is not meant to be included in ajax controllers - but in front-controllers,
             * it is being used here for the sake of simplicity in testing.
             */
            function test2()
            {
                $ajax = ajax();
                $ajax->update('response', 'Cjax Works');
                $ajax->append('#response', '<br /><br />version: ' . $ajax->version);
                $ajax->success('Cjax was successfully installed.', 5);

                //see application/views/test2.php
                $this->load->view('test2');
            }

    I was hoping someone could bring some light into this problem - or maybe someone has already experienced it? Thanks for your time! Mart

    Read the article

  • WCF - Increase ReaderQuotas on REST service

    - by Christo Fur
    I have a WCF REST service which accepts a JSON string. One of the parameters is a large string of numbers, which causes the following error (visible through tracing and the SVC Trace Viewer):

        There was an error deserializing the object of type CarConfiguration. The maximum string content length quota (8192) has been exceeded while reading XML data. This quota may be increased by changing the MaxStringContentLength property on the XmlDictionaryReaderQuotas object used when creating the XML reader.

    Now, I've read all sorts of articles advising how to rectify this. All of them recommend increasing various config settings on the server and client, e.g.:

        http://stackoverflow.com/questions/65452/error-serializing-string-in-webservice-call
        http://bloggingabout.net/blogs/ramon/archive/2008/08/20/wcf-and-large-messages.aspx
        http://social.msdn.microsoft.com/Forums/en/wcf/thread/f570823a-8581-45ba-8b0b-ab0c7d7fcae1

    So my config file looks like this:

        <webHttpBinding>
          <binding name="webBinding" maxBufferSize="5242880" maxReceivedMessageSize="5242880">
            <readerQuotas maxDepth="5242880" maxStringContentLength="5242880" maxArrayLength="5242880"
                          maxBytesPerRead="5242880" maxNameTableCharCount="5242880"/>
          </binding>
        </webHttpBinding>
        ...
        <endpoint address="/" binding="webHttpBinding" bindingConfiguration="webBinding"

    My problem is that I can change this on the server, but there are no WCF config settings on the client, as it's a REST service and I'm just making an HTTP request using the WebClient object. Any ideas?
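
    For reference, the quota only needs to be raised where the WCF stack actually deserializes the message (here, the service side); a plain WebClient caller has no reader quotas of its own. Below is a hedged sketch of the equivalent programmatic configuration for a self-hosted endpoint; the service and contract type names and the base address are assumptions, not taken from the question.

        // Hedged sketch: programmatic equivalent of the webHttpBinding config above.
        using System;
        using System.ServiceModel;
        using System.ServiceModel.Web;

        static class QuotaConfiguredHost
        {
            static void Main()
            {
                var binding = new WebHttpBinding
                {
                    MaxBufferSize = 5242880,
                    MaxReceivedMessageSize = 5242880
                };
                binding.ReaderQuotas.MaxStringContentLength = 5242880;
                binding.ReaderQuotas.MaxArrayLength = 5242880;
                binding.ReaderQuotas.MaxBytesPerRead = 5242880;

                // CarConfigurationService / ICarConfigurationService are placeholder names.
                using (var host = new WebServiceHost(typeof(CarConfigurationService),
                                                     new Uri("http://localhost:8000/")))
                {
                    host.AddServiceEndpoint(typeof(ICarConfigurationService), binding, "");
                    host.Open();
                    Console.WriteLine("Service running; press Enter to stop.");
                    Console.ReadLine();
                }
            }
        }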

    Read the article

  • When to use custom exceptions vs. existing exceptions vs. generic exceptions

    - by Ryan Elkins
    I'm trying to figure out what the correct form of exceptions to throw would be for a library I am writing. One example of what I need to handle is logging a user in to a station. They do this by scanning a badge. Possible things that could go wrong include:

    - Their badge is deactivated
    - They don't have permission to work at this station
    - The badge scanned does not exist in the system
    - They are already logged in to another station elsewhere
    - The database is down
    - Internal DB error (happens sometimes if the badge didn't get set up correctly)

    An application using this library will have to handle these exceptions one way or another. It's possible they may decide to just say "Error", or they may want to give the user more useful information. What's the best practice in this situation? Create a custom exception for each possibility? Use existing exceptions? Use Exception and pass in the reason (throw new Exception("Badge is deactivated.");)? I'm thinking it's some sort of mix of the first two: using existing exceptions where applicable, and creating new ones where needed (and grouping exceptions where it makes sense).
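
    One possible shape for that "mix" approach, sketched here in C# with invented type names (the question doesn't prescribe any of them): a small domain-specific hierarchy for the badge failures, while infrastructure problems such as a database outage are left to the existing framework exceptions.

        // Hedged sketch with hypothetical names: a grouped custom exception hierarchy.
        using System;

        public class BadgeLoginException : Exception        // base for all badge/login failures
        {
            public BadgeLoginException(string message) : base(message) { }
            public BadgeLoginException(string message, Exception inner) : base(message, inner) { }
        }

        public class BadgeDeactivatedException : BadgeLoginException
        {
            public BadgeDeactivatedException(string badgeId)
                : base("Badge " + badgeId + " is deactivated.") { }
        }

        public class StationPermissionException : BadgeLoginException
        {
            public StationPermissionException(string badgeId, string stationId)
                : base("Badge " + badgeId + " may not work at station " + stationId + ".") { }
        }

        // Callers can be as broad or as specific as they like:
        //   catch (BadgeDeactivatedException) { /* specific message */ }
        //   catch (BadgeLoginException)       { /* generic "login failed" message */ }
        // Database-down and internal DB errors can simply surface as the existing
        // SqlException/DbException types rather than being wrapped.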

    Read the article

  • Asynchronous pages in the ASP.NET framework - where are the other threads and how is it reattached?

    - by rkrauter
    Sorry for this dumb question on asynchronous operations. This is how I understand it: IIS has a limited set of worker threads waiting for requests. If one request is a long-running operation, it will block that thread, which leaves fewer threads to serve other requests. The way to fix this is to use asynchronous pages. When a request comes in, the main worker thread is freed and another thread is created in some other place; the main thread is thus able to serve other requests. When the request completes on this other thread, a thread is picked from the main thread pool and the response is sent back to the client.

    1) Where are these other threads located?
    2) If ASP.NET likes creating new threads, why not increase the number of threads in the main worker pool - they are all running on the same machine anyway?
    3) If the main thread hands off a request to this other thread, why does the request not get disconnected? It magically hands off the request to another worker thread somewhere else, and when the long-running process completes, it picks a thread from the main worker pool and sends the response to the client. I am amazed... but how does that work?
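
    For context, this is roughly what an asynchronous page looks like in ASP.NET 2.0+ (Async="true" plus RegisterAsyncTask); the handler names and URL below are made up for illustration. The begin handler starts the I/O and returns the request thread to the pool, and the end handler later runs on whichever pool thread picks up the completion - the request itself stays parked in ASP.NET the whole time.

        // Hedged sketch of an asynchronous ASP.NET page (names and URL are assumptions).
        // Requires <%@ Page Async="true" ... %> in the .aspx file.
        using System;
        using System.Net;
        using System.Web.UI;

        public partial class SlowPage : Page
        {
            private WebRequest _request;

            protected void Page_Load(object sender, EventArgs e)
            {
                RegisterAsyncTask(new PageAsyncTask(BeginWork, EndWork, WorkTimeout, null));
            }

            private IAsyncResult BeginWork(object sender, EventArgs e, AsyncCallback cb, object state)
            {
                // Start the long-running I/O and give the worker thread back to the pool.
                _request = WebRequest.Create("http://example.com/slow-resource");
                return _request.BeginGetResponse(cb, state);
            }

            private void EndWork(IAsyncResult ar)
            {
                // Runs on a thread-pool thread once the I/O completes.
                using (var response = _request.EndGetResponse(ar))
                {
                    // ... render something from the response ...
                }
            }

            private void WorkTimeout(IAsyncResult ar)
            {
                // Called if the task exceeds the page's AsyncTimeout.
            }
        }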

    Read the article

  • Rails.cache throws "marshal dump" error when changed from memory store to memcached store

    - by gsmendoza
    If I set this in my environment:

        config.action_controller.cache_store = :mem_cache_store

    ActionController::Base.cache_store will use a memcached store, but Rails.cache will use a memory store instead:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb6eb4bbc @data=<MemCache: 1 servers, ns: nil, ro: false>>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemoryStore:0xb78b5e54 @data={}>

    In my app, I use Rails.cache.fetch(key){ object } to cache objects inside my helpers. All this time, I assumed that Rails.cache uses the memcached store, so I'm surprised that it uses a memory store. If I change the cache_store setting in my environment to:

        config.cache_store = :mem_cache_store

    both ActionController::Base.cache_store and Rails.cache will now use the same memcached store, which is what I expect:

        $ ./script/console
        >> ActionController::Base.cache_store
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>
        >> Rails.cache
        => #<ActiveSupport::Cache::MemCacheStore:0xb7b8e928 @data=<MemCache: 1 servers, ns: nil, ro: false>, @middleware=#<Class:0xb7b73d44>, @thread_local_key=:active_support_cache_mem_cache_store_local_cache>

    However, when I run the app, I get a "marshal dump" error on the line where I call Rails.cache.fetch(key){ object }:

        no marshal_dump is defined for class Proc
        Extracted source (around line #1):
        1: Rails.cache.fetch(fragment_cache_key(...), :expires_in => 15.minutes) { ... }
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'dump'
        vendor/gems/memcache-client-1.8.1/lib/memcache.rb:359:in 'set_without_newrelic_trace'

    What gives? Is Rails.cache meant to be a memory store? Should I call controller.cache_store.fetch in the places where I call Rails.cache.fetch?

    Read the article

  • C# average function without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance:

        using System;
        using System.Linq;

        class Program
        {
            static void Main(string[] args)
            {
                var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 };
                try
                {
                    var avg = items.Average();
                    Console.WriteLine(avg);
                }
                catch (OverflowException ex)
                {
                    Console.WriteLine("can't calculate that!");
                }
                Console.ReadLine();
            }
        }

    Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) of the Average extension method, as inspected with .NET Reflector, is:

        public static double Average(this IEnumerable<long> source)
        {
            if (source == null)
            {
                throw Error.ArgumentNull("source");
            }
            long num = 0L;
            long num2 = 0L;
            foreach (long num3 in source)
            {
                num += num3;
                num2 += 1L;
            }
            if (num2 <= 0L)
            {
                throw Error.NoElements();
            }
            return (((double) num) / ((double) num2));
        }

    I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straightforward implementation of calculating the average of integers without an external library. Do you happen to know of such an implementation? Thanks!

    UPDATE: The previous example of three large integers was just an example to illustrate the overflow issue. The question is about calculating the average of any set of numbers whose sum might exceed the type's max value. Sorry about the confusion. I also changed the question's title to avoid additional confusion. Thanks all!
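
    One library-free approach, sketched below as an illustration rather than a definitive answer, is to keep a running mean instead of a running sum. There is no intermediate sum to overflow, at the cost of doing floating-point arithmetic per element, so for values near long.MaxValue the result may differ slightly from the exact rational average (much as the built-in Average already loses precision by dividing two doubles).

        // Hedged sketch: incremental mean, no intermediate sum that can overflow.
        using System;
        using System.Collections.Generic;

        static class Averaging
        {
            public static double AverageNoOverflow(IEnumerable<long> source)
            {
                if (source == null) throw new ArgumentNullException("source");

                double mean = 0.0;
                long count = 0;
                foreach (long value in source)
                {
                    count++;
                    mean += (value - mean) / count;   // update the running mean
                }

                if (count == 0) throw new InvalidOperationException("Sequence contains no elements");
                return mean;
            }
        }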

    Read the article

  • OpenCV Python HoughCircles error

    - by Dan
    Hi, I'm working on a program that detects circular shapes in images. I decided a Hough transform would be best, and I found one in the OpenCV library. The problem is that when I try to use it, I get an error that I have no idea how to fix. Is OpenCV for Python not fully implemented? Is there a fix to the library I need for the program to work? Here's the code:

        import cv

        #cv.NamedWindow("camera", 1)
        capture = cv.CaptureFromCAM(0)

        while True:
            img = cv.QueryFrame(capture)
            gray = cv.CreateImage(cv.GetSize(img), 8, 1)
            edges = cv.CreateImage(cv.GetSize(img), 8, 1)

            cv.CvtColor(img, gray, cv.CV_BGR2GRAY)
            cv.Canny(gray, edges, 50, 200, 3)
            cv.Smooth(gray, gray, cv.CV_GAUSSIAN, 9, 9)

            storage = cv.CreateMat(1, 2, cv.CV_32FC3)
            # This is the line that throws the error
            cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2, gray.height/4, 200, 100)

            #cv.ShowImage("camera", img)
            if cv.WaitKey(10) == 27:
                break

    And here is the error I'm getting:

        OpenCV Error: Null pointer () in unknown function, file ..\..\..\..\ocv\openc\src\cxcore\cxdatastructs.cpp, line 408
        Traceback (most recent call last):
          File "ellipse-detect-webcam.py", line 20, in
            cv.HoughCircles(edges, storage, cv.CV_HOUGH_GRADIENT, 2, gray.height/4, 200, 100)
        cv.error

    Thanks in advance for the help.

    Read the article

  • Mercurial Tagging/Branching Strategy

    - by Tony Trozzo
    My current project is broken down into 3 parts: Website, Desktop Client, and a Plug-in for a third party program. We had started out originally with Subversion for our source control but decided to try Mercurial after reading Joel Spolsky's final post. Considering we haven't really used the majority of svn's potential before, we figured starting fresh with some basic ideas of how source control worked would make this transition easy. However, after setting up our initial repository, we're lost as to how tagging and branching should work on a project like this. Essentially, we're working on all 3 of these parts at the same time. We want a release to be a combination of the 3 parts. Currently we're working in one repository. For the Plug-in part, we have the first iteration finished which we've been referring to as Plug-In v0.1. For the first official build of the other two parts, we'd also like to refer to them as Website v0.1 and Desktop Client v0.1. When all three parts are at v0.1, we'd like to have a Full Project v0.1. Our problem is we're not sure how to manage all of this in the Hg repository. Would the best way to handle this be to create 3 separate repositories for the 3 stable versions and then 3 more repositories for the current developments? Currently we have this all in one repository. Should we do this in branches (are branches any different from cloning repositories?) and tags? Any help is greatly appreciated.

    Read the article

  • Crawler does not create custom crawled properties

    - by user173739
    These days I am facing a very strange problem. I have a development environment with MOSS 2007 SP2 and Windows Server 2008; I have search configured and everything works great. I then started configuring a staging environment (MOSS 2007 SP2 with the June CU) and created a new farm and new SSP. I deployed my changes with a package (wsp) and manually created the site collections, sub webs, pages and so on. When the full crawl finishes, I see in the crawl log that all my pages have been successfully crawled, and when I use some test tools to query search, my pages are found. In the crawl log there are a few errors like http://mysite/sites/de/pages "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly..", but all pages in this Page library were indexed. The problem is that I use custom managed properties (mapped to custom crawled properties) in search queries, but the crawler didn't create crawled properties for all my new site columns. For example, for the site column IsAccent the crawler didn't create the crawled property ows_IsAccent. I'm sure that I have created pages for the specific content type, and all my crawl categories have "Automatically discover new properties when a crawl takes place" checked. In Site Settings - Searchable columns I haven't got any column selected as NoCrawl. I tried to export my managed and crawled properties from the dev environment to the stage environment, but all my managed properties were empty; after that I recreated the SSP... the result was the same. I checked a specific page with tools like SharePoint Manager 2007 and the U2U CAML Query Builder 2007 to confirm that the content type is correct, and I can see the values of my custom site columns. Using the U2U CAML Query Builder 2007 against a Page library, in the Result tab I can see ows_IsAccent (my site column is IsAccent) and other site columns, but I can't find them in the crawled properties. Any ideas?

    Read the article

  • TimeoutException occurs over a network but not locally

    - by Gibsnag
    I have a program with three WCF services, and when I run them locally (i.e. server and clients are all on localhost) everything works. However, when I test them across a network I get a TimeoutException on two services but not on the other. I've disabled the firewalls on all the machines involved in the test, and I can both ping the server and access the "You have created a service" WSDL page from the client. The service that works uses a BasicHttpBinding with streaming, and the two that don't work use WSDualHttpBinding. The services that use WSDualHttpBinding both have callback contracts. I apologise for the vagueness of this question, but I'm not really sure what code to include or where to even start looking for the solution. Non-working binding:

        public static Binding CreateHTTPBinding()
        {
            var binding = new WSDualHttpBinding();
            binding.MessageEncoding = WSMessageEncoding.Mtom;
            binding.MaxBufferPoolSize = 2147483647;
            binding.MaxReceivedMessageSize = 2147483647;
            binding.Security.Mode = WSDualHttpSecurityMode.None;
            return binding;
        }

    Exception stack trace:

        Unhandled Exception: System.TimeoutException: The open operation did not complete within the allotted timeout of 00:01:00. The time allotted to this operation may have been a portion of a longer timeout.

        Server stack trace:
           at System.ServiceModel.Channels.ReliableRequestor.ThrowTimeoutException()
           at System.ServiceModel.Channels.ReliableRequestor.Request(TimeSpan timeout)
           at System.ServiceModel.Channels.ClientReliableSession.Open(TimeSpan timeout)
           at System.ServiceModel.Channels.ClientReliableDuplexSessionChannel.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)
           at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.CallOpenOnce.System.ServiceModel.Channels.ServiceChannel.ICallOnce.Call(ServiceChannel channel, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.CallOnceManager.CallOnce(TimeSpan timeout, CallOnceManager cascade)
           at System.ServiceModel.Channels.ServiceChannel.EnsureOpened(TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs)
           at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
           at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
           at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
           at IDemeService.Register()
           at DemeServiceClient.Register()
           at DemeClient.Client.Start()
           at DemeClient.Program.Main(String[] args)
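
    One thing worth checking (an assumption, since the question doesn't show the callback address setup): WSDualHttpBinding requires the service to open a separate HTTP connection back to the client for the callback channel, and the channel open will time out if that inbound connection can't be made across the network. Explicitly setting a ClientBaseAddress the server can actually reach, roughly as in this variant of the binding factory above, sometimes resolves it; the host name and port here are placeholders.

        // Hedged sketch: give the duplex binding an explicit, server-reachable callback address.
        public static Binding CreateHTTPBinding()
        {
            var binding = new WSDualHttpBinding();
            binding.MessageEncoding = WSMessageEncoding.Mtom;
            binding.MaxBufferPoolSize = 2147483647;
            binding.MaxReceivedMessageSize = 2147483647;
            binding.Security.Mode = WSDualHttpSecurityMode.None;

            // The service must be able to connect to this address from across the network,
            // so it cannot be localhost and the port must be reachable on the client machine.
            binding.ClientBaseAddress = new Uri("http://client-machine-name:8002/callback");
            return binding;
        }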

    Read the article

  • Can't ssh to ec2 permission denied (publickey)

    - by Chris Barnes
    I have existing instances running and I can connect to them fine. Even if I start a new instance from one of my saved AMIs I can connect to it fine, but with any new public or community AMI (I've tried two official Ubuntu AMIs and one Fedora quickstart AMI) I get permission denied (publickey). The permissions are good on my key file, and I've also tried creating a new key file. My EC2 firewall rules are good; I've also tried creating a new group. This is the error I'm getting:

        ssh -v -i ec2-keypair [email protected]
        OpenSSH_5.2p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/chris/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to ec2-xxx.xxx.xxx.xxx.compute-1.amazonaws.com [xxx.xxx.xxx.xxx] port 22.
        debug1: Connection established.
        debug1: identity file ec2-keypair type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-xxx.xxx.xxx.xxx.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /Users/chris/.ssh/known_hosts:13
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: ec2-keypair
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Read the article

  • How to write an asmx web service without using the app_code directory?

    - by JL
    Excuse the title, but it's best I just explain the problem. I have two projects in my solution: a class library, and a web application which contains a web service (asmx). The web service has code sitting in the App_Code folder, in a file [webservicename].cs. Inside the web service code-behind class, I have a web method; here is a simplified example:

        [WebMethod]
        public EnumTaskExportState ProcessTask()
        {
            var tm = new UploadTaskManager();
            return tm.ProcessTask();
        }

    Now at design time in Visual Studio (2010 or 2008), when I right-click on UploadTaskManager and select "Go to definition", I get taken to AppData\Temp[some folder structure]...etc.... and it displays the public class definition. Instead I would like complete integration, so that I get taken directly to the actual class in the class library project. My guess is this is happening because I am using the App_Code route rather than a compiled file for the web service class, but I don't know any other way to do this. How can I fix this? Possibly do away with the need for the App_Code directory?
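
    One way to avoid App_Code entirely (a sketch, with made-up file and type names): keep the .asmx file as a one-line marker whose Class attribute points at a web service class compiled into the web application (or a referenced assembly in bin). Because the class is then ordinary compiled project code, "Go to definition" navigates to the real source rather than a temp copy.

        // Hedged sketch: code-behind style ASMX with no App_Code folder.
        //
        // TaskService.asmx contains only the directive, pointing at a compiled class
        // (an assembly name can optionally be appended after a comma):
        //   <%@ WebService Language="C#" Class="MyCompany.Services.TaskService" %>
        //
        // The class itself lives in the web application project, compiled into bin:
        using System.Web.Services;

        namespace MyCompany.Services
        {
            [WebService(Namespace = "http://mycompany.example/services")]
            public class TaskService : WebService
            {
                [WebMethod]
                public EnumTaskExportState ProcessTask()
                {
                    var tm = new UploadTaskManager();   // resolved via the normal class library reference
                    return tm.ProcessTask();
                }
            }
        }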

    Read the article

  • Gmail: How to send an email programmatically

    - by Clint
    Possible exact duplicate: Sending Email in C#.NET Through Gmail. Hi, I'm trying to send an email using Gmail. I tried various examples that I found on this site and other sites, but I always get the same error:

        Unable to connect to the remote server -- System.Net.Sockets.SocketException: No connection could be made because the target actively refused it 209.85.147.109:587

        public static void Attempt1()
        {
            var client = new SmtpClient("smtp.gmail.com", 587)
            {
                Credentials = new NetworkCredential("[email protected]", "MyPassWord"),
                EnableSsl = true
            };
            client.Send("[email protected]", "[email protected]", "test", "testbody");
        }

    Any ideas?

    UPDATE: More details. Maybe I should say what other attempts I made that gave me the same error (note: when I didn't specify a port, it tried port 25):

        public static void Attempt2()
        {
            var fromAddress = new MailAddress("[email protected]", "From Name");
            var toAddress = new MailAddress("[email protected]", "To Name");
            const string fromPassword = "pass";
            const string subject = "Subject";
            const string body = "Body";

            var smtp = new SmtpClient
            {
                Host = "smtp.gmail.com",
                Port = 587,
                EnableSsl = true,
                DeliveryMethod = SmtpDeliveryMethod.Network,
                UseDefaultCredentials = false,
                Credentials = new NetworkCredential(fromAddress.Address, fromPassword)
            };
            using (var message = new MailMessage(fromAddress, toAddress) { Subject = subject, Body = body })
            {
                smtp.Send(message);
            }
        }

        public static void Attempt3()
        {
            MailMessage mail = new MailMessage();
            mail.To.Add("[email protected]");
            mail.From = new MailAddress("[email protected]");
            mail.Subject = "Email using Gmail";
            string Body = "Hi, this mail is to test sending mail" + "using Gmail in ASP.NET";
            mail.Body = Body;
            mail.IsBodyHtml = true;

            SmtpClient smtp = new SmtpClient();
            smtp.Host = "smtp.gmail.com";
            smtp.Credentials = new System.Net.NetworkCredential("[email protected]", "pass");
            smtp.EnableSsl = true;
            smtp.Send(mail);
        }

    Read the article

  • HttpUtility does not exist in the current context

    - by Shaihi
    I get this error when compiling a C# application. It looks like a trivial error, but I can't get around it. My setup is Windows 7 64-bit with Visual Studio 2010 C# Express (B2Rel). I added a reference to System.Web.dll located at C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.0, but it has a yellow exclamation symbol and I still get the above error. I also have the using System.Web declaration. What am I doing wrong?

    Update: After getting the prompt answer pointing me at the root cause, I searched a bit in Google for where it states that System.Web.dll is only for the full framework. I did not find such a reference. For newbies like me, this blog summarizes the difference between the frameworks (client and full) nicely. I could not find a spot that says whether a certain DLL is supported in the client framework or not. I guess the exclamation mark in Visual Studio should be the first signal...

    Read the article

  • Trouble converting an MP3 file to a WAV file using NAudio

    - by WebDevHobo
    Naudio Library: http://naudio.codeplex.com/ I'm trying to convert an MP3 file to a WAV file, but I've run in to a small error. I know what's going wrong, but I don't really know how to go about fixing it. Here's the piece of code I'm running: private void button1_Click(object sender, EventArgs e) { using(Mp3FileReader reader = new Mp3FileReader(@"path\to\MP3")) { using(WaveFileWriter writer = new WaveFileWriter(@"C:\test.wav", new WaveFormat())) { int counter = 0; while(reader.Read(test, counter, test.Length + counter) != 0) { writer.WriteData(test, counter, test.Length + counter); counter += 512; } } } } reader.Read() goes into the Mp3FileReader class, and the method looks like this: public override int Read(byte[] sampleBuffer, int offset, int numBytes) { if (numBytes % waveFormat.BlockAlign != 0) //throw new ApplicationException("Must read complete blocks"); numBytes -= (numBytes % waveFormat.BlockAlign); return mp3Stream.Read(sampleBuffer, offset, numBytes); } mp3Stream is an object of the Stream class. The problem is: I'm getting an ArgumentException. MSDN says that this is because the sum of offset and numBytes is greater than the length of sampleBuffer. Documentation: http://msdn.microsoft.com/en-us/library/system.io.stream.read.aspx This happens because I increase the counter every time, but the size of the byte array test remains the same. What I've been wondering is: do I need to increase the size of the array dynamically, or do I need to find out the needed size at the beginning and set it right away? And also, instead of 512, the method in Mp3FileReader returns 365 the first time. Which is the size of a whole block. But I'm writing the full 512. I'm basically just using the read to check if I'm not at the end of the file yet. Do I need to catch the return value and do something with that, or am I good here?

    Read the article

  • Problem trying to install PyCurl on Mac Snow Leopard

    - by Ldn
    Hi, my app needs to use PyCurl, so I tried to install it on my Mac, but I ran into a lot of problems and errors :(

    Requirement: First of all, I have to say that the version of Python running on my Mac is 32-bit, because I need to use wxPython, which needs 32-bit Python. For this I used:

        defaults write com.apple.versioner.python Prefer-32-Bit -bool yes

    To install PyCurl I used:

        sudo env ARCHFLAGS="-arch x86_64" easy_install setuptools pycurl

    And the terminal returned:

        Best match: setuptools 0.6c11
        Processing setuptools-0.6c11-py2.6.egg
        setuptools 0.6c11 is already the active version in easy-install.pth
        Installing easy_install script to /usr/local/bin
        Installing easy_install-2.6 script to /usr/local/bin
        Using /Library/Python/2.6/site-packages/setuptools-0.6c11-py2.6.egg
        Processing dependencies for setuptools
        Finished processing dependencies for setuptools
        Searching for pycurl
        Best match: pycurl 7.16.2.1
        Processing pycurl-7.16.2.1-py2.6-macosx-10.6-universal.egg
        pycurl 7.16.2.1 is already the active version in easy-install.pth
        Using /Library/Python/2.6/site-packages/pycurl-7.16.2.1-py2.6-macosx-10.6-universal.egg
        Processing dependencies for pycurl
        Finished processing dependencies for pycurl

    So I thought that PyCurl was correctly installed and working. But when I start my app, Python returns an error:

        python /Users/lorenzodenobili/Desktop/Python/AGGIORNATORE_PY/Dropbox/wxPython/test.py
        Traceback (most recent call last):
          File "/Users/lorenzodenobili/Desktop/Python/AGGIORNATORE_PY/Dropbox/wxPython/test.py", line 20, in <module>
            import pycurl
          File "build/bdist.macosx-10.6-universal/egg/pycurl.py", line 7, in <module>
          File "build/bdist.macosx-10.6-universal/egg/pycurl.py", line 6, in __bootstrap__
        ImportError: dlopen(/Users/lorenzodenobili/.python-eggs/pycurl-7.16.2.1-py2.6-macosx-10.6-universal.egg-tmp/pycurl.so, 2): no suitable image found. Did find:
            /Users/lorenzodenobili/.python-eggs/pycurl-7.16.2.1-py2.6-macosx-10.6-universal.egg-tmp/pycurl.so: mach-o, but wrong architecture

    Yep, there is something quite big going wrong here. I really have no idea how to solve this error, so I really need your help! Thank you so much!!

    Read the article

  • WCF fault logging and SQL Exception 4060 error.

    - by Bill
    I have been attempting to compile/run a sample WCF application from Juval Lowy's website (author of Programming WCF Services and founder of IDesign) for several days. The example app uses Juval's ServiceModelEx library, which logs faults/errors to a "WCFLogbook" SQL database. Unfortunately, when the sample app faults, I get the following error:

        SQL Exception 4060: "Cannot open database \"WCFLogbook\" requested by the login. The login failed.\r\nLogin failed for user 'Bill-PC\Bill'."

    I confirmed that the SQL WCFLogbook database has been created and have granted all of the appropriate permissions for my login (Bill-PC\Bill) to access the database. Additionally, port 8006 and port 1433 have been opened in the firewall, TCP/IP has been enabled, and "Allow remote connections to this server" has been checked. I am using the following endpoint within the App.config file:

        <client>
          <endpoint name="LogbookTCP"
                    address="net.tcp://Bill-PC:8006/LogbookManager"
                    binding="netTcpBinding"
                    contract="ILogbookManager" />
        </client>

    Unfortunately SQL is a 'world' that I hadn't needed to venture into before now, and I am terribly frustrated with my lack of success. Would anyone have any other suggestions on how to get this working? Have I missed anything?

    Read the article

  • General Web Programming/Design Question

    - by Prasad
    Hi, I have been doing web programming for 2 years (self-taught; a biology researcher by profession). I designed a small wiki with the needed functionality and a scientific RTE - of course a lot more is expected. I used the MooTools framework and AJAX extensively. I was always curious whenever I saw the query strings passed in a URL: long, encrypted query strings passed directly to the server. Google's design in particular is like this. I think this is the start of providing a web service to a client, I guess. Now, my question is: is this a special, highly professional, efficient / advanced web design technique for communicating queries via the URL? I always felt that direct URL-based communication is faster. I tried my bit and could send a query through the URL directly. Here is the link: http://sgwiki.sdsc.edu/getSGMPage.php?8 By this, the client can link directly to the desired page instead of searching, and/or can automate. There are many possibilities. The next request: can I be pointed to this technique of web programming? Oops, I am sorry if I have not been able to convey my request clearly. Prasad.

    Read the article

  • REST tools support for development and testing

    - by nzpcmad
    There is a similar question here, but it only covers some of the issues below. We have a client who requires web services using REST. We have tons of experience using SOAP and over time have gathered together a really good set of tools for SOAP development and testing, e.g.:

    - soapUI
    - Eclipse plugins
    - wsdl2java
    - WSStudio

    By "tools" I mean a product "out of the box" that we can start using. I'm not talking about cutting code to "roll our own" using Ajax or whatever. The tool set for REST doesn't seem to be nearly as mature. What tools are out there (we use C# and Java mainly)? Do the tools handle GET, POST, PUT, and DELETE? Is there a decent Eclipse plugin? Is there a decent client testing application like WSStudio where you point the tool at the WSDL and it generates a proxy on the fly with the appropriate methods and inputs, and you simply type the data in? Are there any good packet monitoring tools that allow you to look at the data? (I'm not thinking about sniffers like Wireshark here, but rather things like soapUI that allow you to see the request/response.)

    Read the article
