Search Results

Search found 43654 results on 1747 pages for 'custom method'.


  • Using multiple sockets, is non-blocking or blocking with select better?

    - by JPhi1618
    Let's say I have a server program that can accept connections from 10 (or more) different clients. The clients send data at random, which is received by the server, but it is certain that at least one client will be sending data every update. The server cannot wait for information to arrive because it has other processing to do. Aside from using asynchronous sockets, I see two options:

    1. Make all sockets non-blocking. In a loop, call recv on each socket and allow it to fail with WSAEWOULDBLOCK if there is no data available; if I happen to get some data, keep it.
    2. Leave the sockets as blocking. Add all sockets to an fd_set and call select(). If the return value is non-zero (which it will be most of the time), loop through all the sockets to find the readable ones with FD_ISSET() and only call recv on those.

    The first option results in many more calls to recv. The second method is a bigger pain from a programming perspective because of all the FD_SET and FD_ISSET looping. Which method (or another method) is preferred? Is avoiding the overhead of letting recv fail on a non-blocking socket worth the hassle of calling select()? I think I understand both methods and I have tried both with success, but I don't know if one way is considered better or optimal. Only knowledgeable replies please!
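    A minimal sketch of option 2 above (Winsock-flavoured C++; the socket container, buffer size and zero timeout are illustrative assumptions, not part of the original question):

        #include <winsock2.h>
        #include <vector>

        // Poll a set of connected sockets once without blocking the rest of the server loop.
        // Assumes Winsock has already been initialised with WSAStartup and the sockets are connected.
        void pollSockets(const std::vector<SOCKET>& clients)
        {
            fd_set readSet;
            FD_ZERO(&readSet);
            for (SOCKET s : clients)
                FD_SET(s, &readSet);

            timeval timeout = { 0, 0 };                              // zero timeout: return immediately
            int ready = select(0, &readSet, NULL, NULL, &timeout);   // first argument is ignored by Winsock
            if (ready <= 0)
                return;                                              // nothing readable (or an error to handle elsewhere)

            char buffer[4096];
            for (SOCKET s : clients)
            {
                if (FD_ISSET(s, &readSet))
                {
                    int received = recv(s, buffer, sizeof(buffer), 0);
                    if (received > 0)
                    {
                        // hand the received bytes to the rest of the server here
                    }
                }
            }
        }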

  • Making one of a group of similar form fields required in CakePHP

    - by Pickledegg
    I have a bunch of name/email fields in my form like this:

        data[Friend][0][name]
        data[Friend][1][name]
        data[Friend][2][name]
        etc.

    and

        data[Friend][0][email]
        data[Friend][1][email]
        data[Friend][2][email]
        etc.

    I have a custom validation rule on each one that checks whether the corresponding field is filled in, i.e. if data[Friend][2][name] is set then data[Friend][2][email] MUST be filled in. For reference, here's what one of the two rules looks like (I have an email validation rule too, but that's irrelevant here).

    My form validation rule:

        'name' => array(
            'checkEmail' => array(
                'rule' => 'hasEmail',
                'message' => 'You must fill in the name field',
                'last' => true
            )
        )

    My custom rule code:

        function hasEmail($data) {
            $name = array_values($data);
            $name = $name[0];
            if (strlen($name) == 0) {
                return empty($this->data['Friend']['email']);
            }
            return true;
        }

    I need to make it so that at least one of the pairs is filled in. It can be any pair, as long as the indexes correspond. I can't figure out a way, because if I set the form rule to be required or allowEmpty false, it fails on ALL empty fields. How can I check for the existence of one pair and, if present, carry on? Also, I need to strip out all of the remaining empty [Friend] fields so my saveAll() doesn't save a load of empty rows, but I think I can handle that part using extract in my controller. The main problem is the validation. Thanks.

  • Compiler error when overriding virtual methods

    - by Stefan Hubert
    Using the VC71 compiler, I get compiler errors that I don't understand. Here is the example:

        class A {
        public:
            virtual int& myMethod() = 0;
            virtual const int& myMethod() const = 0;
        };

        class B : public A {
        public:
            // generates: error C3241: 'const int &B::myMethod(void)' : this method was not introduced by 'A'
            virtual const int& A::myMethod() const;
            virtual int& A::myMethod();
        };

    When I switch the order of the two method declarations in B, I see different compiler errors:

        class B : public A {
        public:
            virtual const int& A::myMethod() const;
            // error C2556: 'const int &B::myMethod(void)' : overloaded function differs only by return type from 'int &B::myMethod(void)'
            // error C2373: 'B::myMethod' : redefinition; different type modifiers
            virtual int& A::myMethod();
        };

    However, if I omit the A:: qualifier, I don't get any compiler error:

        class B : public A {
        public:
            virtual int& myMethod();
            virtual const int& myMethod() const;
        };

    So, what exactly does A:: in front of the method names do, and why do I see these diverse compiler errors? Any explanation welcome!
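    For contrast, a qualified name is ordinarily used only when defining a member outside its class; a minimal standard-C++ sketch (the int member and constructor are illustrative additions, not from the question):

        class A {
        public:
            virtual int& myMethod() = 0;
            virtual const int& myMethod() const = 0;
            virtual ~A() {}
        };

        class B : public A {
        public:
            B() : value(0) {}
            int& myMethod();                 // declared unqualified inside the class
            const int& myMethod() const;
        private:
            int value;
        };

        // The B:: qualifier belongs here, in the out-of-class definitions.
        int& B::myMethod() { return value; }
        const int& B::myMethod() const { return value; }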

  • How to avoid game rendering component circular references?

    - by CodexArcanum
    I'm working on a simple game design, and I wanted to break up my game objects into more reusable components. But I'm getting stuck on how exactly to implement the design I have in mind. Here's an example: I have a Logger object, whose job is simply to store a list of messages and render them to screen. You know, logging. Originally the Logger just held the list, and the game loop rendered its contents. Then I moved the rendering logic into the Logger.Draw() method, and now I want to move it further into a LoggerRenderer object. In effect, I want to have the game loop call RenderAll, which will then call Logger.Render, which will in turn call LoggerRenderer.Render and finally output the text. So the Logger needs to contain a Renderer object, but the Renderer needs access to the Logger's state (the message queue) in order to render. How do I resolve that? Should I be passing the message queue and other state information explicitly to the Render method? Or should the game loop call the Renderer directly, with the Renderer linking back to the Logger, so that the RenderAll method never actually sees the Logger object itself? This feels kind of like the Command pattern, but I'm botching it up terribly.
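    A hedged C++ sketch of the first option mentioned above (passing the message list explicitly to the render call); the class and method names are illustrative, not from the original design:

        #include <iostream>
        #include <string>
        #include <vector>

        // Owns the message list only; knows nothing about rendering.
        class Logger {
        public:
            void log(const std::string& msg) { messages_.push_back(msg); }
            const std::vector<std::string>& messages() const { return messages_; }
        private:
            std::vector<std::string> messages_;
        };

        // Knows how to draw messages, but holds no pointer back to the Logger.
        class LoggerRenderer {
        public:
            void render(const std::vector<std::string>& messages) const {
                for (const std::string& m : messages)
                    std::cout << m << '\n';            // stand-in for real on-screen text rendering
            }
        };

        // The game loop wires the two together each frame, so neither object references the other.
        void renderAll(const Logger& logger, const LoggerRenderer& renderer) {
            renderer.render(logger.messages());
        }

        int main() {
            Logger logger;
            LoggerRenderer renderer;
            logger.log("game started");
            renderAll(logger, renderer);
        }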

  • Why can't I initialize a class through a setter?

    - by Rob emenaker
    If I have a custom class called Tires:

        #import <Foundation/Foundation.h>

        @interface Tires : NSObject {
        @private
            NSString *brand;
            int size;
        }

        @property (nonatomic,copy) NSString *brand;
        @property int size;

        - (id)init;
        - (void)dealloc;

        @end

        =============================================

        #import "Tires.h"

        @implementation Tires

        @synthesize brand, size;

        - (id)init {
            if (self = [super init]) {
                [self setBrand:[[NSString alloc] initWithString:@""]];
                [self setSize:0];
            }
            return self;
        }

        - (void)dealloc {
            [super dealloc];
            [brand release];
        }

        @end

    And I synthesize a setter and getter in my view controller:

        #import <UIKit/UIKit.h>
        #import "Tires.h"

        @interface testViewController : UIViewController {
            Tires *frontLeft, *frontRight, *backleft, *backRight;
        }

        @property (nonatomic,copy) Tires *frontLeft, *frontRight, *backleft, *backRight;

        @end

        ====================================

        #import "testViewController.h"

        @implementation testViewController

        @synthesize frontLeft, frontRight, backleft, backRight;

        - (void)viewDidLoad {
            [super viewDidLoad];
            [self setFrontLeft:[[Tires alloc] init]];
        }

        - (void)dealloc {
            [super dealloc];
        }

        @end

    It dies after [self setFrontLeft:[[Tires alloc] init]] comes back. It compiles just fine, and when I run the debugger it actually gets all the way through the init method on Tires, but once it comes back it just dies and the view never appears. However, if I change the viewDidLoad method to:

        - (void)viewDidLoad {
            [super viewDidLoad];
            frontLeft = [[Tires alloc] init];
        }

    it works just fine. I could just ditch the setter and access the frontLeft variable directly, but I was under the impression that I should use setters and getters as much as possible, and logically it seems like the setFrontLeft method should work. This brings up an additional question that my coworkers keep asking in this regard (we are all new to Objective-C): why use a setter and getter at all if you are in the same class as those setters and getters?

  • Scala Interpreter scala.tools.nsc.interpreter.IMain Memory leak

    - by Peter
    I need to write a program that uses the Scala interpreter to run Scala code on the fly. The interpreter must be able to run an unlimited amount of code without being restarted. I know that each time the interpret() method of the class scala.tools.nsc.interpreter.IMain is called, the request is stored, so the memory usage will keep going up forever. Here is the idea of what I would like to do:

        var interpreter = new IMain
        while (true) {
            interpreter.interpret(some code to be run on the fly)
        }

    If the interpret() method stores the request each time, is there a way to clear the buffer of stored requests? What I am trying to do now is to count the number of times interpret() is called and then get a new instance of IMain when the count reaches 100, for instance. Here is my code:

        var interpreter = new IMain
        var counter = 0
        while (true) {
            interpreter.interpret(some code to be run on the fly)
            counter = counter + 1
            if (counter > 100) {
                interpreter = new IMain
                counter = 0
            }
        }

    However, I still see the memory usage going up forever. It seems that the IMain instances are not garbage-collected by the JVM. Could somebody help me solve this issue? I really need to be able to keep my program running for a long time without restarting, but I cannot afford such memory usage just for the Scala interpreter. Thanks in advance, Pet

  • running scala apps with java -jar

    - by paintcan
    Yo dawgs, I got some problems with the java. Check it out:

        sebastian@sebastian-desktop:~/scaaaaaaaaala$ java -cp /home/sebastian/.m2/repository/org/scala-lang/scala-library/2.8.0.RC3/scala-library-2.8.0.RC3.jar:target/scaaaaaaaaala-1.0.jar scaaalaaa.App
        Hello World!

    That's cool, right, but how about this:

        sebastian@sebastian-desktop:~/scaaaaaaaaala$ java -cp /home/sebastian/.m2/repository/org/scala-lang/scala-library/2.8.0.RC3/scala-library-2.8.0.RC3.jar -jar target/scaaaaaaaaala-1.0.jar
        Exception in thread "main" java.lang.NoClassDefFoundError: scala/Application
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
            at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
            at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
            at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
            at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            at scaaalaaa.App.main(App.scala)
        Caused by: java.lang.ClassNotFoundException: scala.Application
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            ... 13 more

    What the heck? Any idea why the first works and not the second? How do I -jar my Scala? Thanks in advance, bro.

  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    On certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl() and in turn throws CAtlException. The latter is not very good - CAtlException is not even derived from std::exception, and we also use our own exception hierarchy, so now we would have to catch CAtlException separately here and there, which is lots of extra, error-prone code. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler. Not so easy. Some of the ATL code does not come as source - it ships compiled as a library, either static or dynamic. We use the static one, atls.lib. And it is compiled in such a way that it has ATL::AtlThrowImpl() inside, along with code that calls it. I used a static analysis tool - it clearly shows that there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my own code. Now the linker says it sees two definitions of ATL::AtlThrowImpl(), which I suppose confirms that there is another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that the default handler is never called?
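    A minimal sketch of the replacement mechanism described above, placed in a header that is included before any ATL header (the handler name and the exception type thrown are illustrative assumptions; as noted, this does not by itself cover code already compiled into atls.lib):

        // stdafx.h - the custom handler must be in place before any ATL header is included
        #define _ATL_CUSTOM_THROW
        #define AtlThrow(hr) MyProjectThrow(hr)

        #include <windows.h>
        #include <stdexcept>

        // Illustrative handler: translate the failed HRESULT into the project's own exception hierarchy.
        // ATL requires that the handler does not return.
        __declspec(noreturn) inline void MyProjectThrow(HRESULT hr)
        {
            (void)hr; // a real handler would carry hr inside the project's exception type
            throw std::runtime_error("ATL call failed");
        }

        #include <atlbase.h>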

  • ObjectContext disposed puzzle

    - by jaklucky
    Hi, I have the following method:

        public List<MyEntity> GetMyEntities(MyObjectContext objCtx)
        {
            using (MyObjectContext ctx = objCtx ?? new MyObjectContext())
            {
                return ctx.MyEntities.ToList();
            }
        }

    The idea is that the caller of this method can pass in an object context if they have one; if not, a new object context is created. If I pass an object context to it, it gets disposed after the method is done. I was expecting only the "ctx" variable to be disposed. If I write a small app to explore the using/dispose mechanism, it acts differently:

        class TestClass : IDisposable
        {
            public int Number { get; set; }
            public string Str { get; set; }
            public ChildClass Child { get; set; }

            #region IDisposable Members
            public void Dispose()
            {
                Console.WriteLine("Disposed is called");
            }
            #endregion
        }

        class ChildClass : IDisposable
        {
            public string StrChild { get; set; }

            #region IDisposable Members
            public void Dispose()
            {
                Console.WriteLine("Child Disposed is called");
            }
            #endregion
        }

        class Program
        {
            static void Main(string[] args)
            {
                TestClass test = null;
                test = new TestClass();
                test.Child = new ChildClass();
                using (TestClass test1 = test ?? new TestClass())
                {
                    test1.Number = 1;
                    test1.Str = "hi";
                    test1.Child.StrChild = "Child one";
                    test1.Child.Dispose();
                }
                test.Str = "hi";
                test.Child.StrChild = "hi child";
                Console.ReadLine();
            }
        }

    In this example, "test1" gets disposed but not "test", whereas in the first case both ctx and objCtx get disposed. Any ideas what is happening here with the ObjectContext? Thank you, Suresh

  • How to animate the drawing of a CGPath?

    - by Jordan Kay
    I am wondering if there is a way to do this using Core Animation. Specifically, I am adding a sub-layer to a layer-backed custom NSView and setting its delegate to another custom NSView. That class's drawInRect method draws a single CGPath:

        - (void)drawInRect:(CGRect)rect inContext:(CGContextRef)context {
            CGContextSaveGState(context);
            CGContextSetLineWidth(context, 12);
            CGMutablePathRef path = CGPathCreateMutable();
            CGPathMoveToPoint(path, NULL, 0, 0);
            CGPathAddLineToPoint(path, NULL, rect.size.width, rect.size.height);
            CGContextBeginPath(context);
            CGContextAddPath(context, path);
            CGContextStrokePath(context);
            CGContextRestoreGState(context);
        }

    My desired effect would be to animate the drawing of this line. That is, I'd like for the line to actually "stretch" in an animated way. It seems like there would be a simple way to do this using Core Animation, but I haven't been able to come across any. Do you have any suggestions as to how I could accomplish this goal?

  • How can I change the text in a <span></span> element using jQuery?

    - by Eric Reynolds
    I have a span element as follows: <span id="download">Download</span>. This element is controlled by a few radio buttons. Basically, what I want is one button that downloads the item selected by the radio buttons, but I'm looking to make it a little more "flashy" by changing the text inside the <span> to say more specifically what they are downloading. The span is the download button, and I have it animated so that the span calls slideUp(), then should change the text, then return with slideDown(). Here is the code I am using that does not want to work:

        $("input[name=method]").change(function() {
            if ($("input[name=method]").val() == 'installer') {
                $('#download').slideUp(500);
                $('#download').removeClass("downloadRequest").removeClass("styling").css({"cursor":"default"});
                $('#download').text("Download");
                $('#download').addClass("downloadRequest").addClass("styling").css({"cursor":"pointer"});
                $('#download').slideDown(500);
            } else if ($("input[name=method]").val() == 'url') {
                $('#download').slideUp(500);
                $('#download').removeClass("downloadRequest").removeClass("styling").css({"cursor":"default"});
                $('#download').text("Download From Vendor Website");
                $('#download').addClass("styling").addClass("downloadRequest").css({"cursor":"pointer"});
                $('#download').slideDown(500);
            }
        });

    I changed the code a bit to be more readable, so I know it doesn't use the short forms that jQuery so eloquently allows. Everything in the code works, with the exception of the changing of the text inside the span. I'm sure it's a simple solution that I am just overlooking. Any help is appreciated, Eric R.

  • split string error in a compiled VB.NET class

    - by Andy Payne
    I'm having some trouble compiling some VB code I wrote to split a string based on a set of predefined delimiters (comma, semicolon, colon, etc.). I have successfully written some code that can be loaded inside a custom VB component (I place this code inside a VB.NET component in a plug-in called Grasshopper) and everything works fine. For instance, let's say my incoming string is "123,456". When I feed this string into the VB code I wrote, I get a new list where the first value is "123" and the second value is "456". However, I have been trying to compile this code into its own class so I can load it inside Grasshopper separately from the standard VB component. When I try to compile this code, it isn't separating the string into a new list with two values. Instead, I get a message that says "System.String []". Do you guys see anything wrong in my compiled code? You can find a screenshot image of my problem at the following link: click to see image

    This is the VB code for the compiled class:

        Public Class SplitString
            Inherits GH_Component

            Public Sub New()
                MyBase.New("Split String", "Split", "Splits a string based on delimeters", "FireFly", "Serial")
            End Sub

            Public Overrides ReadOnly Property ComponentGuid() As System.Guid
                Get
                    Return New Guid("3205caae-03a8-409d-8778-6b0f8971df52")
                End Get
            End Property

            Protected Overrides ReadOnly Property Internal_Icon_24x24() As System.Drawing.Bitmap
                Get
                    Return My.Resources.icon_splitstring
                End Get
            End Property

            Protected Overrides Sub RegisterInputParams(ByVal pManager As Grasshopper.Kernel.GH_Component.GH_InputParamManager)
                pManager.Register_StringParam("String", "S", "Incoming string separated by a delimeter like a comma, semi-colon, colon, or forward slash", False)
            End Sub

            Protected Overrides Sub RegisterOutputParams(ByVal pManager As Grasshopper.Kernel.GH_Component.GH_OutputParamManager)
                pManager.Register_StringParam("Tokenized Output", "O", "Tokenized Output")
            End Sub

            Protected Overrides Sub SolveInstance(ByVal DA As Grasshopper.Kernel.IGH_DataAccess)
                Dim myString As String
                DA.GetData(0, myString)

                myString = myString.Replace(",", "|")
                myString = myString.Replace(":", "|")
                myString = myString.Replace(";", "|")
                myString = myString.Replace("/", "|")
                myString = myString.Replace(")(", "|")
                myString = myString.Replace("(", String.Empty)
                myString = myString.Replace(")", String.Empty)

                Dim parts As String() = myString.Split("|"c)
                DA.SetData(0, parts)
            End Sub
        End Class

    This is the custom VB code I created inside Grasshopper:

        Private Sub RunScript(ByVal myString As String, ByRef A As Object)
            myString = myString.Replace(",", "|")
            myString = myString.Replace(":", "|")
            myString = myString.Replace(";", "|")
            myString = myString.Replace("/", "|")
            myString = myString.Replace(")(", "|")
            myString = myString.Replace("(", String.Empty)
            myString = myString.Replace(")", String.Empty)

            Dim parts As String() = myString.Split("|"c)
            A = parts
        End Sub
        '
        '
        End Class

  • Convert IDispatch* to a string?

    - by Rob
    I am converting an old VB COM object (which I didn't write) to C++ using ATL. One of the methods, according to the IDL, takes an IDispatch* as a parameter, and the documentation and samples for this method claim that you can pass either a string (which is the progid of an object that will be created and used by the control) or an IDispatch* to an object that has already been created. How on earth do I implement this in ATL? For example, the IDL:

        [id(1)] HRESULT Test(IDispatch* obj);

    The samples (which are all JScript):

        obj.Test("foo.bar");

    or

        var someObject = new ActiveXObject("foo.bar");
        obj.Test(someObject);

    To make matters even more bizarre, the actual VB code that implements this method declares the 'obj' parameter as a string! However, it all seems to work. Can you even pass a string to a COM method that takes an IDispatch*? If so, can I determine that the IDispatch* is actually a string in my C++ ATL code? Even better, if it's an IDispatch that implements a specific interface I will want to call methods on it, or instantiate an object if it's a string. Any ideas welcome!
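    One commonly used shape for this kind of dual-purpose parameter is to take a VARIANT and branch on its runtime type. The following is a hedged ATL-style sketch, not the original control's code; CMyControl and IMyInterface are assumed, illustrative names, and the IDL would need to change accordingly:

        // IDL sketch for the dual-purpose parameter: [id(1)] HRESULT Test(VARIANT obj);

        STDMETHODIMP CMyControl::Test(VARIANT obj)
        {
            CComPtr<IDispatch> spDisp;

            if (V_VT(&obj) == VT_BSTR)
            {
                // A progid string was passed in: create the object ourselves.
                CLSID clsid;
                HRESULT hr = CLSIDFromProgID(V_BSTR(&obj), &clsid);
                if (FAILED(hr)) return hr;
                hr = spDisp.CoCreateInstance(clsid);
                if (FAILED(hr)) return hr;
            }
            else if (V_VT(&obj) == VT_DISPATCH && V_DISPATCH(&obj) != NULL)
            {
                // An already-created object was passed in.
                spDisp = V_DISPATCH(&obj);
            }
            else
            {
                return E_INVALIDARG;
            }

            // If the object supports a known interface, call it through that.
            CComQIPtr<IMyInterface> spKnown(spDisp);   // IMyInterface: assumed project-specific interface
            if (spKnown)
            {
                // call methods on spKnown here
            }
            return S_OK;
        }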

  • Printing an invisible NSView

    - by Rodger Wilson
    Initially I created a simple program with a custom NSView. I drew a picture (a certificate) and printed it. Beautiful! Everything worked great! I then moved my custom NSView to an existing application. My hope was that when a user hit print, it would print this certificate. Simple enough. I figured I could have an NSView pointer in my controller code. Then at initialization I would populate the pointer, and when someone wanted to print the certificate it would print. The problem is that all of my drawing code is in the "drawRect" method. This method doesn't get called because this view is never displayed in a window. I have heard that others use non-visible NSView objects just for printing. What do I need to do? I really don't want to show this view on the screen. Rodger

  • JAXB does not call setter when unmarshalling objects

    - by Yaneeve
    Hi all, I am using JAXB 2.0 (JDK 6) in order to unmarshal an XML instance into POJOs. In order to add some custom validation I have inserted a validation call into the setter of a property, yet despite the field being private, it seems that the unmarshaller does not call the setter but directly modifies the private field. It is crucial to me that the custom validation occurs for this specific field on every unmarshal call. What should I do? Code:

        @XmlAccessorType(XmlAccessType.FIELD)
        @XmlType(name = "LegalParams", propOrder = { "value" })
        public class LegalParams {

            private static final Logger LOG = Logger.getLogger(LegalParams.class);

            @XmlTransient
            private LegalParamsValidator legalParamValidator;

            public LegalParams() {
                try {
                    WebApplicationContext webApplicationContext = ContextLoader.getCurrentWebApplicationContext();
                    LegalParamsFactory legalParamsFactory = (LegalParamsFactory) webApplicationContext.getBean("legalParamsFactory");
                    HttpSession httpSession = SessionHolder.getInstance().get();
                    legalParamValidator = legalParamsFactory.newLegalParamsValidator(httpSession);
                } catch (LegalParamsException lpe) {
                    LOG.warn("Validator related error occurred while attempting to construct a new instance of LegalParams");
                    throw new IllegalStateException("LegalParams creation failure", lpe);
                } catch (Exception e) {
                    LOG.warn("Spring related error occurred while attempting to construct a new instance of LegalParams");
                    throw new IllegalStateException("LegalParams creation failure", e);
                }
            }

            @XmlValue
            private String value;

            /**
             * Gets the value of the value property.
             *
             * @return
             *     possible object is
             *     {@link String }
             */
            public String getValue() {
                return value;
            }

            /**
             * Sets the value of the value property.
             *
             * @param value
             *     allowed object is
             *     {@link String }
             * @throws TestCaseValidationException
             */
            public void setValue(String value) throws TestCaseValidationException {
                legalParamValidator.assertValid(value);
                this.value = value;
            }
        }

  • VS2008 DataSet Wizard doesn't match tables for updating

    - by James H
    Hi all, first question ever on this site. I've been having a really stubborn problem using Visual Studio 2008 and I'm hoping someone has figured this out before. I have two libraries and one project that use strongly typed datasets (MSSQL backend) that I generated using the "Configure DataSet with Wizard" option in Data Sources. I've had them working just fine for a while, and I've written a lot of code in the non-designer file for the row classes. I've also specified a lot of custom queries using the dataset designer. This is all work I can't afford to lose. I've recently made some changes to reorganize my libraries, which included changing the names of the libraries themselves. I also changed the connection string to point to a different database which is a development copy (same exact schema). The problem is that now, when I open up "Configure DataSet with Wizard" to pick up a new column I've added to one of the tables, it no longer matches the tables correctly in the wizard. The wizard displays all of the tables in the database, and none of them have check boxes next to them (i.e. are not part of this dataset). Below those it shows all of the tables again, but with red Xs, and these are checked. Basically, Visual Studio sees all of the tables it currently has in the DataSet and sees all of the tables in the database, but believes they are no longer the same and thus do not match. I've had this same thing happen quite a while back, and I think I just rebuilt the xsd from scratch, manually copied the code over, and then had to redefine all of the custom queries I built in the dataset designer. That's not a good solution. I'm looking for two answers: 1. What causes this to happen and how to prevent it. 2. How do I fix this so that the wizard once again believes the tables in its xsd are the same tables that are in the database (yes, they still have the exact same names). Thanks.

  • How can I load a txt file from the internet into my JSF app?

    - by Elena
    Hi all! It's me again) I have another problem. I want to load a file (for example, a txt file) from the web. I tried the following code in my managed bean:

        public void run() {
            try {
                URL url = new URL(this.filename);
                URLConnection connection = url.openConnection();
                bufferedReader = new BufferedReader(new InputStreamReader(connection.getInputStream()));
                if (bufferedReader == null) {
                    return;
                }
                System.out.println("wwwwwwwwwwwwwwwwwwwww");
                String str = bufferedReader.readLine();
                System.out.println("qqqqqqqqqqqqqqqqqqqqqqq = " + str);
                while (bufferedReader.readLine() != null) {
                    System.out.println("---- " + bufferedReader.readLine());
                }
            } catch (MalformedURLException mue) {
                System.out.println("MalformedURLException in run() method");
                mue.printStackTrace();
            } catch (IOException ioe) {
                System.out.println("IOException in run() method");
                ioe.printStackTrace();
            } finally {
                try {
                    bufferedReader.close();
                } catch (IOException ioe) {
                    System.out.println("UOException wile closing BufferedReader");
                    ioe.printStackTrace();
                }
            }
        }

        public String doFileUpdate() {
            String str = FacesContext.getCurrentInstance().getExternalContext().getRequestServletPath();
            System.out.println("111111111111111111111 str = " + str);
            str = "http://narod.ru/disk/20957166000/test.txt.html"; //"http://localhost:8080/sfront/files/test.html";
            System.out.println("222222222222222222222 str = " + str);
            FileUpdater fileUpdater = new FileUpdater(str);
            fileUpdater.run();
            return null;
        }

    But the BufferedReader returns the HTML code of the current page, where I am trying to call the managed bean's method. It's very strange - I have googled and nobody else seems to have had this problem. Maybe I am doing something wrong, or maybe there is a simpler way to load a file into a web (JSF) app without using the net API. Any ideas? Thanks very much for the help! With best wishes)

  • Login Website, curious Cookie Problem

    - by Collin Peters
    Hello. Language: C#. Development environment: Visual Studio 2008. Sorry if the English is not perfect. I want to log in to a website and get some data from there. My problem is that the cookies do not work. Every time, the website says that I should activate cookies, even though I activated cookies through a CookieContainer. I sniffed the traffic several times for the login process and I see no problem there. I tried different methods to log in, and I have searched for whether someone else has had this problem, but with no results... The login page is "www.uploaded.to". Here is my code to log in, in short form:

        private void login()
        {
            // Global CookieContainer for all the Cookies
            CookieContainer _cookieContainer = new CookieContainer();

            // First Login to the Website
            HttpWebRequest _request1 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login");
            _request1.Method = "POST";
            _request1.CookieContainer = _cookieContainer;

            string _postData = "email=XXXXX&password=XXXXX";
            byte[] _byteArray = Encoding.UTF8.GetBytes(_postData);

            Stream _reqStream = _request1.GetRequestStream();
            _reqStream.Write(_byteArray, 0, _byteArray.Length);
            _reqStream.Close();

            HttpWebResponse _response1 = (HttpWebResponse)_request1.GetResponse();
            _response1.Close();

            //########################
            // Follow the Link from Request1
            HttpWebRequest _request2 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/login?coo=1");
            _request2.Method = "GET";
            _request2.CookieContainer = _cookieContainer;

            HttpWebResponse _response2 = (HttpWebResponse)_request2.GetResponse();
            _response2.Close();

            //#######################
            // Get the Data from the Page after Login
            HttpWebRequest _request3 = (HttpWebRequest)WebRequest.Create("http://uploaded.to/home");
            _request3.Method = "GET";
            _request3.CookieContainer = _cookieContainer;

            HttpWebResponse _response3 = (HttpWebResponse)_request3.GetResponse();
            _response3.Close();
        }

    I've been stuck on this problem for many weeks and have found no solution that works. Please help...

  • Extension methods conflict

    - by Yochai Timmer
    Let's say I have two extension methods on string, in two different namespaces:

        namespace test1 {
            public static class MyExtensions {
                public static int TestMethod(this String str) {
                    return 1;
                }
            }
        }

        namespace test2 {
            public static class MyExtensions2 {
                public static int TestMethod(this String str) {
                    return 2;
                }
            }
        }

    These methods are just for example; they don't really do anything. Now let's consider this piece of code:

        using System;
        using test1;
        using test2;

        namespace blah {
            public static class Blah {
                public Blah() {
                    string a = "test";
                    int i = a.TestMethod(); // Which one is chosen?
                }
            }
        }

    I know that only one of the extension methods will be chosen. Which one will it be, and why? How can I choose a certain method from a certain namespace? Edit: Usually I'd use Namespace.ClassName.Method()... but that just defeats the whole idea of extension methods. And I don't think you can use Variable.Namespace.Method().

  • Unit Testing Error - The unit test adapter failed to connect to the data source or to read the data

    - by michael.lukatchik
    I'm using VSTS 2K8 and I've set up a Unit Test Project. In it, I have a test class with a method that does a simple assertion. I'm using an Excel 2007 spreadsheet as my data source. My test method looks like this:

        [DataSource("System.Data.Odbc", "Dsn=Excel Files;dbq=|DataDirectory|\\MyTestData.xlsx;defaultdir=C:\\TestData;driverid=1046;maxbuffersize=2048;pagetimeout=5", "Sheet1", DataAccessMethod.Sequential)]
        [DeploymentItem("MyTestData.xlsx")]
        [TestMethod()]
        public void State_Value_Is_Set()
        {
            string expected = "MD";
            string actual = TestContext.DataRow["State"] as string;
            Assert.AreEqual(expected, actual);
        }

    As indicated in the method decoration attributes, my Excel spreadsheet is on my local C:\ drive. In it, the sheet where all of my data is located is named "Sheet1". I've copied the Excel spreadsheet into my project and I've set its Build Action = "Content" and its Copy to Output Directory = "Copy if Newer". When trying to run this simple unit test, I receive the following error:

        The unit test adapter failed to connect to the data source or to read the data. For more information on troubleshooting this error, see "Troubleshooting Data-Driven Unit Tests" (http://go.microsoft.com/fwlink/?LinkId=62412) in the MSDN Library.
        Error details: ERROR [42S02] [Microsoft][ODBC Excel Driver] The Microsoft Office Access database engine could not find the object 'Sheet1'. Make sure the object exists and that you spell its name and the path name correctly.

    I've verified that the sheet name is spelled correctly (i.e. Sheet1) and I've verified that my data sources are set correctly. Web searches haven't turned up much at all. And I'm totally stumped. All help or input is appreciated!!!!

  • Can't bind string containing @ char with mysqli_stmt_bind_param

    - by Tirithen
    I have a problem with my database class. I have a method that takes one prepared statement and any number of parameters, binds them to the statement, executes the statement and formats the result into a multidimensional array. Everything works fine until I try to include an email address in one of the parameters. The email contains an @ character, and that one seems to break everything. When I supply the parameters $types = "ss" and $parameters = array("[email protected]", "testtest"), I get the error:

        Warning: Parameter 3 to mysqli_stmt_bind_param() expected to be a reference, value given in ...db/Database.class.php on line 63

    Here is the method:

        private function bindAndExecutePreparedStatement(&$statement, $parameters, $types)
        {
            if (!empty($parameters)) {
                call_user_func_array('mysqli_stmt_bind_param', array_merge(array($statement, $types), &$parameters));
                /*foreach($parameters as $key => $value) {
                    mysqli_stmt_bind_param($statement, 's', $value);
                }*/
            }

            $result = array();
            $statement->execute() or debugLog("Database error: ".$statement->error);
            $rows = array();
            if ($this->stmt_bind_assoc($statement, $row)) {
                while ($statement->fetch()) {
                    $copied_row = array();
                    foreach ($row as $key => $value) {
                        if ($value !== null && mb_substr($value, 0, 1, "UTF-8") == NESTED) {
                            // If value has a nested result inside
                            $value = mb_substr($value, 1, mb_strlen($value, "UTF-8") - 1, "UTF-8");
                            $value = $this->parse_nested_result_value($value);
                        }
                        $copied_row[$key] = $value;
                    }
                    $rows[] = $copied_row;
                }
            }

            // Generate result
            $result['rows'] = $rows;
            $result['insert_id'] = $statement->insert_id;
            $result['affected_rows'] = $statement->affected_rows;
            $result['error'] = $statement->error;
            return $result;
        }

    I have gotten one suggestion: the array_merge is casting the parameter to string in the merge, so change it to &$parameters so it remains a reference. I tried that (third line of the method), but it did not make any difference. What should I do? Is there a better way to do this without call_user_func_array?

  • CGContext problems

    - by Peyman
    Hi, I have a CALayer tree hierarchy A > B > C > D, where A is the view's root layer (and I create and add the B, C and D layers). I am using the same delegate method

        - (void)drawLayer:(CALayer *)theLayer inContext:(CGContextRef)context

    to provide content to each of these layers (implemented through a switch statement in the above method). At initialization, A's content (A.contents) is drawn sequentially through these methods:

        - (void)drawLayer:(CALayer *)theLayer inContext:(CGContextRef)context
        - (void)drawCircle:(CALayer *)theLayer inContext:(CGContextRef)context

    where drawCircle does:

        CGContextSaveGState(context);
        /* draws circle and other paths here */
        CGContextRestoreGState(context);
        CGImageRef contentImage = CGBitmapContextCreateImage(context);
        theLayer.contents = (id)contentImage;
        CGImageRelease(contentImage);

    (i.e. I save the context, do the drawing, restore the context, make a bitmap of the context, update the layer's contents - in this case A's - then release the content image). When the user then clicks somewhere in the circle, in the touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event method I try to paint the content of B by (still in touchesEnded:):

        CGContextRef context = UIGraphicsGetCurrentContext();
        [self drawLayer:self.B inContext:context];
        [self.B setNeedsDisplay];

    The setNeedsDisplay call invokes the delegate method -drawLayer:inContext: again, but this time the switch statement (using a layer flag) calls

        [self drawCircle:theLayer inContext:context];

    with the color red. The problem I am facing is that when -drawCircle:inContext: is called I get a long list of CGContext errors, starting with

        <Error>: CGContextSaveGState: invalid context

    and ending with

        CGBitmapContextCreateImage: invalid context

    I played around with making the context the view's ivar and it worked, so I am sure the context is the problem, but I don't know what is wrong. I've tried CGContextFlush but it didn't help. Any help would be appreciated, thank you.

  • handling java exception

    - by Noona
    This question is related to Java exceptions: why are there cases where, when an exception is thrown, the program exits even though the exception was caught and there was no exit() statement? My code looks something like this:

        void bindProxySocket(DefaultHttpClientConnection proxyConnection, String hostName, HttpParams params) {
            if (!proxyConnection.isOpen()) {
                Socket socket = null;
                try {
                    socket = new Socket(hostName, 80);
                } catch (UnknownHostException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                try {
                    proxyConnection.bind(socket, params);
                } catch (IOException e) {
                    System.err.println("couldn't bind socket");
                    e.printStackTrace();
                }
            }
        }

    and then I call this method like this:

        bindProxySocket(proxyConn, hostName, params1);

    But the program exits, although I want to handle the exception by doing something else. Can it be because I didn't enclose the method call in a try/catch clause? What happens if I catch the exception again even though it's already caught in the method? And what should I do if I want to clean up resources only when an exception occurs, and otherwise continue with the program? I am guessing in this case I have to enclose the whole piece of code, up to where I can clean up the resources, within a try statement - or can I do it in the exception handler? Some of these questions are about this specific case, but I would like to get a thorough answer to all of them for future reference. Thanks.

  • Generate number sequences with LINQ

    - by tanascius
    I am trying to write a LINQ statement which returns all possible combinations of numbers (I need this for a test, and I was inspired by this article of Eric Lippert's). The prototype of the method I call looks like:

        IEnumerable<Collection<int>> AllSequences(int start, int end, int size);

    The rules are:

    1. All returned collections have a length of size.
    2. Number values within a collection have to increase.
    3. Every number between start and end should be used.

    So calling AllSequences(1, 5, 3) should result in 10 collections, each of size 3:

        1 2 3
        1 2 4
        1 2 5
        1 3 4
        1 3 5
        1 4 5
        2 3 4
        2 3 5
        2 4 5
        3 4 5

    Now, I'd really like to see a pure LINQ solution. I am able to write a non-LINQ solution on my own, so please put no effort into a solution without LINQ. My tries so far ended at a point where I have to join a number with the result of a recursive call of my method - something like:

        return from i in Enumerable.Range(start, end - size + 1)
               select BuildCollection(i, AllSequences(i, end, size - 1));

    But I can't manage to implement BuildCollection() on a LINQ basis - or even skip this method call. Can you help me here?

  • Using Build Manager Class to Load ASPX Files and Populate its Controls

    - by Sandhurst
    I am using the BuildManager class to load a dynamically generated ASPX file; please note that it does not have a corresponding .cs file. Using the following code I am able to load the ASPX file, and I am even able to loop through the control collection of the dynamically created page, but when I assign values to the controls they do not show them. For example, if I bind the value "Dummy" to a TextBox control of the ASPX page, the TextBox remains empty. Here's the code that I am using:

        protected void Page_Load(object sender, EventArgs e)
        {
            LoadPage("~/Demo.aspx");
        }

        public static void LoadPage(string pagePath)
        {
            // get the compiled type of referenced path
            Type type = BuildManager.GetCompiledType(pagePath);

            // if type is null, could not determine page type
            if (type == null)
                throw new ApplicationException("Page " + pagePath + " not found");

            // cast page object (could also cast an interface instance as well)
            // in this example, ASP220Page is a custom base page
            System.Web.UI.Page pageView = (System.Web.UI.Page)Activator.CreateInstance(type);

            // call page title
            pageView.Title = "Dynamically loaded page...";

            // call custom property of ASP220Page
            //pageView.InternalControls.Add(
            //    new LiteralControl("Served dynamically..."));

            // process the request with updated object
            ((IHttpHandler)pageView).ProcessRequest(HttpContext.Current);
            LoadDataInDynamicPage(pageView);
        }

        private static void LoadDataInDynamicPage(Page prvPage)
        {
            foreach (Control ctrl in prvPage.Controls)
            {
                // Find Form Control
                if (ctrl.ID != null)
                {
                    if (ctrl.ID.Equals("form1"))
                    {
                        AllFormsClass cls = new AllFormsClass();
                        DataSet ds = cls.GetConditionalData("1");

                        foreach (Control ctr in ctrl.Controls)
                        {
                            if (ctr is TextBox)
                            {
                                if (ctr.ID.Contains("_M"))
                                {
                                    TextBox drpControl = (TextBox)ctr;
                                    drpControl.Text = ds.Tables[0].Rows[0][ctr.ID].ToString();
                                }
                                else if (ctr.ID.Contains("_O"))
                                {
                                    TextBox drpControl = (TextBox)ctr;
                                    drpControl.Text = ds.Tables[1].Rows[0][ctr.ID].ToString();
                                }
                            }
                        }
                    }
                }
            }
        }
