Search Results

Search found 5658 results on 227 pages for 'eric fail'.


  • Python: Best way to check for Python version in program that uses new language features?

    - by Mark Harrison
    If I have a python script that requires at least a particular version of python, what is the correct way to fail gracefully when an earlier version of python is used to launch the script? How do I get control early enough to issue an error message and exit? For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks (new in 2.6). I wrote a simple little interpreter-version checker routine which is the first thing the script would call ... except it doesn't get that far. Instead, the script fails during python compilation, before my routines are even called. Thus the user of the script sees some very obscure syntax error tracebacks - which pretty much require an expert to deduce that it is simply the case of running the wrong version of python. Update: I know how to check the version of python. The issue is that some syntax is illegal in older versions of python. Consider this program: import sys if sys.version_info < (2, 4): raise "must use python 2.5 or greater" else: # syntax error in 2.4, ok in 2.5 x = 1 if True else 2 print x When run under 2.4, I want this result $ ~/bin/python2.4 tern.py must use python2.5 or greater and not this result: $ ~/bin/python2.4 tern.py File "tern.py", line 5 x = 1 if True else 2 ^ SyntaxError: invalid syntax (channeling for a coworker)
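
    A common way around this compile-time failure is to keep the entry-point file free of any new syntax and import the module that uses it only after the version check, since imports happen at run time. A minimal sketch (the realmain module name is just a placeholder for the actual program):

      # launcher.py - contains only syntax every 2.x interpreter can compile
      import sys

      if sys.version_info < (2, 6):
          sys.stderr.write("This program requires Python 2.6 or later\n")
          sys.exit(1)

      # Imported only after the check succeeds, so an old interpreter never
      # tries to compile the 2.6-only syntax in realmain.py.
      import realmain
      realmain.main()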

    Read the article

  • Routing is using ID as action and action as ID after submitting a form

    - by Victor Martins
    I have a model User that has_one user_profile and a User_Profile belongs_to user. In the User controller I have: def personal @user = User.find_by_id(params[:id]) @user_profile = @user.user_profile @user_profile ||= @user.build_user_profile end def update_personal @user = User.find_by_id(params[:id]) if @user.user_profile.update_attributes(params[:user_profile]) flash[:notice] = "OK" redirect_to @user else flash[:notice] = "Fail" render :action => 'update_personal' end end In my personal.html.erb view I have: <% semantic_form_for @user_profile, :url => { :action => "update_personal"} do |form| %> <%= form.inputs %> <%= form.buttons %> <%end%> And in the routes I have: map.resources :users, :member => { :personal => :get, :update_personal => :put } Now the strange thing is that I can do: users/1/personal to see the form but when I submit I get this error: Unknown action No action responded to 1. It's trying to find an action with the name 1. Can anyone point me in the right direction?

    Read the article

  • Setting attributes of a class during construction from **kwargs

    - by Carson Myers
    Python noob here, Currently I'm working with SQLAlchemy, and I have this: from __init__ import Base from sqlalchemy.schema import Column, ForeignKey from sqlalchemy.types import Integer, String from sqlalchemy.orm import relationship class User(Base): __tablename__ = "users" id = Column(Integer, primary_key=True) username = Column(String, unique=True) email = Column(String) password = Column(String) salt = Column(String) openids = relationship("OpenID", backref="users") User.__table__.create(checkfirst=True) #snip definition of OpenID class def create(**kwargs): user = User() if "username" in kwargs.keys(): user.username = kwargs['username'] if "email" in kwargs.keys(): user.username = kwargs['email'] if "password" in kwargs.keys(): user.password = kwargs['password'] return user This is in /db/users.py, so it would be used like: from db import users new_user = users.create(username="Carson", password="1234") new_user.email = "[email protected]" users.add(new_user) #this function obviously not defined yet but the code in create() is a little stupid, and I'm wondering if there's a better way to do it that doesn't require an if ladder, and that will fail if any keys are added that aren't in the User object already. Like: for attribute in kwargs.keys(): if attribute in User: user.__attribute__[attribute] = kwargs[attribute] else: raise Exception("blah") that way I could put this in its own function (unless one hopefully already exists?) So I wouldn't have to do the if ladder again and again, and so I could change the table structure without modifying this code. Any suggestions?
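
    One way to replace the if ladder is the built-in setattr/hasattr pair, which also rejects unknown keys. A sketch against the same model (note that SQLAlchemy's declarative base already generates a constructor that accepts mapped column names as keyword arguments, so User(username=..., password=...) may already cover this):

      def create(**kwargs):
          user = User()
          for name, value in kwargs.items():
              # Refuse anything that is not already an attribute of the model.
              if not hasattr(user, name):
                  raise TypeError("unexpected attribute: %s" % name)
              setattr(user, name, value)
          return user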

    Read the article

  • How to TDD Asynchronous Events?

    - by Padu Merloti
    The fundamental question is how do I create a unit test that needs to call a method, wait for an event to happen on the tested class and then call another method (the one that we actually want to test)? Here's the scenario if you have time to read further: I'm developing an application that has to control a piece of hardware. In order to avoid dependency from hardware availability, when I create my object I specify that we are running in test mode. When that happens, the class that is being tested creates the appropriate driver hierarchy (in this case a thin mock layer of hardware drivers). Imagine that the class in question is an Elevator and I want to test the method that gives me the floor number that the elevator is. Here is how my fictitious test looks like right now: [TestMethod] public void TestGetCurrentFloor() { var elevator = new Elevator(Elevator.Environment.Offline); elevator.ElevatorArrivedOnFloor += TestElevatorArrived; elevator.GoToFloor(5); //Here's where I'm getting lost... I could block //until TestElevatorArrived gives me a signal, but //I'm not sure it's the best way int floor = elevator.GetCurrentFloor(); Assert.AreEqual(floor, 5); } Edit: Thanks for all the answers. This is how I ended up implementing it: [TestMethod] public void TestGetCurrentFloor() { var elevator = new Elevator(Elevator.Environment.Offline); elevator.ElevatorArrivedOnFloor += (s, e) => { Monitor.Pulse(this); }; lock (this) { elevator.GoToFloor(5); if (!Monitor.Wait(this, Timeout)) Assert.Fail("Elevator did not reach destination in time"); int floor = elevator.GetCurrentFloor(); Assert.AreEqual(floor, 5); } }
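
    The same wait-with-timeout pattern translates to other test frameworks; as an illustration only, here is a self-contained Python sketch using threading.Event and a fake elevator (all names here are hypothetical, mirroring the C# code above):

      import threading
      import unittest

      class FakeElevator(object):
          """Minimal stand-in that 'arrives' on a background thread."""
          def __init__(self):
              self._floor = 0
              self._handlers = []
          def on_arrived(self, handler):
              self._handlers.append(handler)
          def go_to_floor(self, floor):
              def run():
                  self._floor = floor
                  for handler in self._handlers:
                      handler(floor)
              threading.Thread(target=run).start()
          def get_current_floor(self):
              return self._floor

      class ElevatorTest(unittest.TestCase):
          TIMEOUT = 5.0  # seconds

          def test_get_current_floor(self):
              arrived = threading.Event()
              elevator = FakeElevator()
              elevator.on_arrived(lambda floor: arrived.set())
              elevator.go_to_floor(5)
              # Block until the event fires or the timeout expires.
              self.assertTrue(arrived.wait(self.TIMEOUT),
                              "elevator did not reach destination in time")
              self.assertEqual(elevator.get_current_floor(), 5)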

    Read the article

  • Using pam_python in a script running with mod_python

    - by markys
    Hi ! I would like to develop a web interface to allow users of a Linux system to do certain tasks related to their account. I decided to write the backend of the site using Python and mod_python on Apache. To authenticate the users, I thought I could use python_pam to query the PAM service. I adapted the example bundled with the module and got this: # out is the output stream used to print debug def auth(username, password, out): def pam_conv(aut, query_list, user_data): out.write("Query list: " + str(query_list) + "\n") # List to store the responses to the different queries resp = [] for item in query_list: query, qtype = item # If PAM asks for an input, give the password if qtype == PAM.PAM_PROMPT_ECHO_ON or qtype == PAM.PAM_PROMPT_ECHO_OFF: resp.append((str(password), 0)) elif qtype == PAM.PAM_PROMPT_ERROR_MSG or qtype == PAM.PAM_PROMPT_TEXT_INFO: resp.append(('', 0)) out.write("Our response: " + str(resp) + "\n") return resp # If username of password is undefined, fail if username is None or password is None: return False service = 'login' pam_ = PAM.pam() pam_.start(service) # Set the username pam_.set_item(PAM.PAM_USER, str(username)) # Set the conversation callback pam_.set_item(PAM.PAM_CONV, pam_conv) try: pam_.authenticate() pam_.acct_mgmt() except PAM.error, resp: out.write("Error: " + str(resp) + "\n") return False except: return False # If we get here, the authentication worked return True My problem is that this function does not behave the same wether I use it in a simple script or through mod_python. To illustrate this, I wrote these simple cases: my_username = "markys" my_good_password = "lalala" my_bad_password = "lololo" def handler(req): req.content_type = "text/plain" req.write("1- " + str(auth(my_username,my_good_password,req) + "\n")) req.write("2- " + str(auth(my_username,my_bad_password,req) + "\n")) return apache.OK if __name__ == "__main__": print "1- " + str(auth(my_username,my_good_password,sys.__stdout__)) print "2- " + str(auth(my_username,my_bad_password,sys.__stdout__)) The result from the script is : Query list: [('Password: ', 1)] Our response: [('lalala', 0)] 1- True Query list: [('Password: ', 1)] Our response: [('lololo', 0)] Error: ('Authentication failure', 7) 2- False but the result from mod_python is : Query list: [('Password: ', 1)] Our response: [('lalala', 0)] Error: ('Authentication failure', 7) 1- False Query list: [('Password: ', 1)] Our response: [('lololo', 0)] Error: ('Authentication failure', 7) 2- False I don't understand why the auth function does not return the same value given the same inputs. Any idea where I got this wrong ? Here is the original script, if that could help you. Thanks a lot !

    Read the article

  • MSTest unit test passes by itself, fails when other tests are run

    - by Sarah Vessels
    I'm having trouble with some MSTest unit tests that pass when I run them individually but fail when I run the entire unit test class. The tests test some code that SLaks helped me with earlier, and he warned me what I was doing wasn't thread-safe. However, now my code is more complicated and I don't know how to go about making it thread-safe. Here's what I have: public static class DLLConfig { private static string _domain; public static string Domain { get { return _domain = AlwaysReadFromFile ? readCredentialFromFile(DOMAIN_TAG) : _domain ?? readCredentialFromFile(DOMAIN_TAG); } } } And my test is simple: string expected = "the value I know exists in the file"; string actual = DLLConfig.Domain; Assert.AreEqual(expected, actual); When I run this test by itself, it passes. When I run it alongside all the other tests in the test class (which perform similar checks on different properties), actual is null and the test fails. I note this is not a problem with a property whose type is a custom Enum type; maybe I'm having this problem with the Domain property because it is a string? Or maybe it's a multi-threaded issue with how MSTest works?

    Read the article

  • How do I correctly detect presence of FLTK using SCONS?

    - by James Morris
    I'm trying to build an application I've downloaded which uses the SCONS "make replacement" and the Fast Light Tool Kit Gui. The SConstruct code to detect the presence of fltk is: guienv = Environment(CPPFLAGS = '') guiconf = Configure(guienv) if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h','c'): print 'Did not find liblo for OSC, exiting!' Exit(1) if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H','c++'): print 'Did not find FLTK for the gui, exiting!' Exit(1) Unfortunately, on my (Gentoo Linux) system, and many others (Linux distributions) this can be quite troublesome if the package manager allows the simultaneous install of FLTK-1 and FLTK-2. Being Idealistic, I attempted to modify the SConstruct file to use fltk-config --cflags and fltk-config --ldflags (or fltk-config --libs might be better than ldflags) by adding them like so: guienv.Append(CPPPATH = os.popen('fltk-config --cflags').read()) guienv.Append(LIBPATH = os.popen('fltk-config --ldflags').read()) But this causes the test for liblo to fail! Looking in config.log shows how it failed: scons: Configure: Checking for C library lo... gcc -o .sconf_temp/conftest_4.o -c "-I/usr/include/fltk-1.1 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_THREAD_SAFE -D_REENTRANT" gcc: no input files scons: Configure: no How should this really be done?
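
    A sketch of the usual SCons idiom (untested here): env.ParseConfig() runs fltk-config itself and splits its output into the proper construction variables, instead of appending the raw string (trailing newline included) as one quoted token, which is what broke the liblo check above:

      guienv = Environment(CPPFLAGS = '')

      # Distribute -I/-L/-l/-D flags from fltk-config into CPPPATH, LIBPATH,
      # LIBS and CPPDEFINES rather than pasting the whole string into CPPPATH.
      guienv.ParseConfig('fltk-config --cxxflags --ldflags')

      guiconf = Configure(guienv)
      if not guiconf.CheckLibWithHeader('lo', 'lo/lo.h', 'c'):
          print 'Did not find liblo for OSC, exiting!'
          Exit(1)
      if not guiconf.CheckLibWithHeader('fltk', 'FL/Fl.H', 'c++'):
          print 'Did not find FLTK for the gui, exiting!'
          Exit(1)
      guienv = guiconf.Finish()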

    Read the article

  • Using vCam in a custom class instead of the root class

    - by Hwang
    Maybe some of you know about vCam from http://bryanheisey.com/blog/?page_id=22 I'm trying to have the script running in a custom class instead of a MovieClip in the library. After some attempts I failed, so I went back to having the MC in the library and loading the MC from the project's root ActionScript files. It works fine if I run the MC from the root files, but to keep my ActionScript files better organized I was thinking of calling it from a custom class (where I can control the vCam), then calling that custom class from the root ActionScript files. But it seems it won't work anywhere other than in the root ActionScript files. I'm not sure whether I'm missing any code between the two custom classes, or whether it simply isn't written to run that way. If it's not, that's fine too; I just want things better organized. Or if you have any idea how to bypass this, please do tell me. In case you need my code for the two classes, here it is: package { import flash.display.MovieClip; import classes.vCamera; public class main extends MovieClip { private var vC2:vCamera = new vCamera(); public function main():void { addChild(vC2) } } } package classes{ import flash.display.MovieClip; import flash.display.Stage; import flash.events.Event; public class vCamera extends MovieClip{ private var vC:vCam = new vCam(); public function vCamera():void{ addEventListener(Event.ADDED_TO_STAGE, add2Stage) } private function add2Stage(event:Event):void{ vC.x=stage.stageWidth/2; vC.y=stage.stageHeight/2; vC.rotation=15; addChild(vC); } } }

    Read the article

  • Complete failure to compile when including CSS Friendly Adapters

    - by david
    Background - I am trying to use the friendly adapters to override the default styling for the standard asp.net menu control that is used by an existing project. The existing project functions normally and compiles when requested without incident. After adding in the code for the CSS Friendly adapter, not only does it not compile, it never even really starts. The Problem in Detail - I am using the sample code from Scott on this page: http://weblogs.asp.net/scottgu/archive/2006/09/08/CSS-Control-Adapter-Toolkit-Update.aspx. The sample project compiles fine; it only fails within the existing project, and it fails without a line number or any other traceable info. It definitely appears to be related to the CSSMenuAdapter.browser file, which has been referenced by others online as the cause of similar errors. I have tried adding and re-adding it, using it as a dll, using it as a code file in App_Code, etc. I am working with aspdotnetstorefront in this case, although it is not unique to them, as I have found other references in software packages online. The only thing is, no one ever says what solved the issue. I am using Windows 7, VS2008 Express and SQL Express 2008 R2. The full error msg is: Error 10 Exception of type 'System.OutOfMemoryException' was thrown. Notice that there is no file, line, or column info. Really need some help here. I have been working on this a long time. This really should have the tag cssfriendlyadapter but I could not create it.

    Read the article

  • AJAX Submit to PHP still loading page after preventDefault()

    - by dannyburrows
    I have a webpage that I am using .ajax to send variables to a php script. The php page is loading into the browser as if I was navigating to it. The script is loading the data correctly, so my issue is stopping the navigation and keeping the user on the original page. The code for my form is here: echo "<form method='post' action='addTask.php' id='myform'>\n"; echo "<input name='addtask' id='addtask' maxlength='64'/><br/>\n"; echo "<input type='submit' name='submit' id='submit' value='Add Task'/>\n"; echo "</form>\n"; The code for my jquery is here: $(function(){ $('#myform').submit(function(e){ e.stopPropagation(); $.ajax({ url: 'addTask.php', type: 'POST', data: {}, success: alert("Success") }); }); }); I have tried: e.preventDefault(), e.stopPropagation() and return false. Any help is appreciated. $("#submit").click(function(e){ e.preventDefault(); $.ajax({ type: "POST", url: "addtask.php", data: { } }) .done(function() { alert( "success" ); }) .fail(function() { alert( "error" ); }); and $(function(){ $('#myform').submit(function(e){ e.preventDefault(); $.ajax({ url: 'addTask.php', type: 'POST', data: {}, success: function(){alert("Success")} }); }); });

    Read the article

  • Creating a property setter delegate

    - by Jim C
    I have created methods for converting a property lambda to a delegate: public static Delegate MakeGetter<T>(Expression<Func<T>> propertyLambda) { var result = Expression.Lambda(propertyLambda.Body).Compile(); return result; } public static Delegate MakeSetter<T>(Expression<Action<T>> propertyLambda) { var result = Expression.Lambda(propertyLambda.Body).Compile(); return result; } These work: Delegate getter = MakeGetter(() => SomeClass.SomeProperty); object o = getter.DynamicInvoke(); Delegate getter = MakeGetter(() => someObject.SomeProperty); object o = getter.DynamicInvoke(); but these won't compile: Delegate setter = MakeSetter(() => SomeClass.SomeProperty); setter.DynamicInvoke(new object[]{propValue}); Delegate setter = MakeSetter(() => someObject.SomeProperty); setter.DynamicInvoke(new object[]{propValue}); The MakeSetter lines fail with "The type arguments cannot be inferred from the usage. Try specifying the type arguments explicitly." Is what I'm trying to do possible? Thanks in advance.

    Read the article

  • How to enforce foreign keys using Xerial SQLite JDBC?

    - by Space_C0wb0y
    According to their release notes, the Xerial SQLite JDBC driver supports foreign keys since version 3.6.20.1. I have tried some time now to get a foreign key constraint to be enforced, but to no avail. Here is what I came up with: public static void main(String[] args) throws ClassNotFoundException, SQLException { Class.forName("org.sqlite.JDBC"); SQLiteConfig config = new SQLiteConfig(); config.enforceForeignKeys(true); Connection connection = DriverManager.getConnection("jdbc:sqlite::memory:", config.toProperties()); connection.createStatement().executeUpdate( "CREATE TABLE artist(" + "artistid INTEGER PRIMARY KEY, " + "artistname TEXT);"); connection.createStatement().executeUpdate( "CREATE TABLE track("+ "trackid INTEGER," + "trackname TEXT," + "trackartist INTEGER," + "FOREIGN KEY(trackartist) REFERENCES artist(artistid)" + ");"); connection.createStatement().executeUpdate( "INSERT INTO track VALUES(14, 'Mr. Bojangles', 3)"); } The table definitions are taken directly from the sample in the SQLite documentation. This is supposed to fail, but it doesn't. I also checked, and it really inserts the tuple (no ignore or something like that). Does anyone have any experience with that, or knows how to make it work?
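
    The SQLite pragma itself is per-connection and off by default, which is what SQLiteConfig.enforceForeignKeys(true) is meant to arrange through the driver. For comparison, the same requirement is easy to demonstrate from Python's built-in sqlite3 module (a sketch, not the Xerial driver):

      import sqlite3

      # isolation_level=None keeps the connection in autocommit mode, so the
      # PRAGMA takes effect immediately; it must be issued on every connection.
      conn = sqlite3.connect(":memory:", isolation_level=None)
      conn.execute("PRAGMA foreign_keys = ON")

      conn.execute("CREATE TABLE artist(artistid INTEGER PRIMARY KEY, artistname TEXT)")
      conn.execute("""CREATE TABLE track(
                          trackid INTEGER,
                          trackname TEXT,
                          trackartist INTEGER,
                          FOREIGN KEY(trackartist) REFERENCES artist(artistid))""")

      try:
          conn.execute("INSERT INTO track VALUES(14, 'Mr. Bojangles', 3)")
      except sqlite3.IntegrityError as exc:
          print("rejected as expected:", exc)

    If the config properties are not reaching the JDBC connection, issuing that PRAGMA through a plain Statement right after connecting is a workaround worth trying.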

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a linux system (kubuntu 7.10) that runs a number of CORBA Server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G Bytes. It can be up to about 1.9G Bytes. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size or if the request allocates a smaller size than the previous. The memory appears to be free'd ok - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous. In this case, operator 'new' throws an exception. It's as if the memory that has been free'd from the first allocation cannot be re-allocated even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds. i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.

    Read the article

  • Deleting objects with FK constraints in Spring/Hibernate

    - by maxdj
    This seems like such a simple scenario to me, yet I cannot for the life of me find a solution online or in print. I have several objects like so (trimmed down): @Entity public class Group extends BaseObject implements Identifiable<Long> { private Long id; private String name; private Set<HiringManager> managers = new HashSet<HiringManager>(); private List<JobOpening> jobs; @ManyToMany(fetch=FetchType.EAGER) @JoinTable( name="group_hiringManager", joinColumns=@JoinColumn(name="group_id"), inverseJoinColumns=@JoinColumn(name="hiringManager_id") ) public Set<HiringManager> getManagers() { return managers; } @OneToMany(mappedBy="group", fetch=FetchType.EAGER) public List<JobOpening> getJobs() { return jobs; } } @Entity public class JobOpening extends BaseObject implements Identifiable<Long> { private Long id; private String name; private Group group; @ManyToOne @JoinColumn(name="group_id", updatable=false, nullable=true) public Group getGroup() { return group; } } @Entity public class HiringManager extends User { @ManyToMany(mappedBy="managers", fetch=FetchType.EAGER) public Set<Group> getGroups() { return groups; } } Say I want to delete a Group object. Now there are dependencies on it in the JobOpening table and in the group_hiringManager table, which cause the delete function to fail. I don't want to cascade the delete, because the managers have other groups, and the jobopenings can be groupless. I have tried overriding the remove() function of my GroupManager to remove the dependencies, but it seems like no matter what I do they persist, and the delete fails! What is the right way to remove this object?

    Read the article

  • Sensible unit test possible?

    - by nkr1pt
    Could a sensible unit test be written for this code which extracts a rar archive by delegating it to a capable tool on the host system if one exists? I can write a test case based on the fact that my machine runs linux and the unrar tool is installed, but if another developer who runs windows would check out the code the test would fail, although there would be nothing wrong with the extractor code. I need to find a way to write a meaningful test which is not binded to the system and unrar tool installed. How would you tackle this? public class Extractor { private EventBus eventBus; private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()}; private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{}; private ExtractCommand[] macExtractCommands = new ExtractCommand[]{}; @Inject public Extractor(EventBus eventBus) { this.eventBus = eventBus; } public boolean extract(DownloadCandidate downloadCandidate) { for (ExtractCommand command : getSystemSpecificExtractCommands()) { if (command.extract(downloadCandidate)) { eventBus.fireEvent(this, new ExtractCompletedEvent()); return true; } } eventBus.fireEvent(this, new ExtractFailedEvent()); return false; } private ExtractCommand[] getSystemSpecificExtractCommands() { String os = System.getProperty("os.name"); if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) { return linuxExtractCommands; } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) { return windowsExtractCommands; } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) { return macExtractCommands; } return null; } }
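
    The usual answer is to make the OS lookup injectable (a constructor argument or an overridable method) so a test can substitute it and exercise every branch on any host. By way of analogy only, a self-contained Python sketch of that seam, with hypothetical command tables:

      import platform
      import unittest
      from unittest import mock

      def system_specific_commands(os_name=None):
          """Pick the extract commands for the current (or injected) OS name."""
          os_name = os_name or platform.system()
          table = {
              "Linux": ["unrar"],      # hypothetical command lists
              "Windows": [],
              "Darwin": [],
          }
          return table.get(os_name)

      class ExtractorCommandTest(unittest.TestCase):
          def test_linux_commands_selected_on_linux(self):
              # Patch the OS lookup so the test passes on any developer machine.
              with mock.patch("platform.system", return_value="Linux"):
                  self.assertEqual(system_specific_commands(), ["unrar"])

          def test_unknown_os_yields_none(self):
              with mock.patch("platform.system", return_value="Plan9"):
                  self.assertIsNone(system_specific_commands())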

    Read the article

  • How to Avoid PHP Object Nesting/Creation Limit?

    - by Will Shaver
    I've got a handmade ORM in PHP that seems to be bumping up against an object limit and causing php to crash. Here's a simple script that will cause crashes: <? class Bob { protected $parent; public function Bob($parent) { $this->parent = $parent; } public function __toString() { if($this->parent) return (string) "x " . $this->parent; return "top"; } } $bobs = array(); for($i = 1; $i < 40000; $i++) { $bobs[] = new Bob($bobs[$i - 1]); } ?> Even running this from the command line will cause issues. Some boxes take more than 40,000 objects. I've tried it on Linux/Apache (fail) but my app runs on IIS/FastCGI. On FastCGI this causes the famous "The FastCGI process exited unexpectedly" error. Obviously 20k objects is a bit high, but it crashes with far fewer objects if they have data and nested complexity. Fast CGI isn't the issue - I've tried running it from the command line. I've tried setting the memory to something really high - 6,000MB and to something really low - 24MB. If I set it low enough I'll get the "allocated memory size xxx bytes exhausted" error. I'm thinking that it has to do with the number of functions that are called - some kind of nesting prevention. I didn't think that my ORM's nesting was that complicated but perhaps it is. I've got some pretty clear cases where if I load just ONE more object it dies, but loads in under 3 seconds if it works.

    Read the article

  • javascript XSL in google chrome

    - by Guy
    Hi, I'm using the following javascript code to display xml/xsl: function loadXMLDoc(fname) { var xmlDoc; // code for IE if (window.ActiveXObject) { xmlDoc=new ActiveXObject("Microsoft.XMLDOM"); } // code for Mozilla, Firefox, Opera, etc. else if (document.implementation && document.implementation.createDocument) { xmlDoc=document.implementation.createDocument("","",null); } else { alert('Your browser cannot handle this script'); } try { xmlDoc.async=false; xmlDoc.load(fname); return(xmlDoc); } catch(e) { try //Google Chrome { var xmlhttp = new window.XMLHttpRequest(); xmlhttp.open("GET",file,false); xmlhttp.send(null); xmlDoc = xmlhttp.responseXML.documentElement; return(xmlDoc); } catch(e) { error=e.message; } } } function displayResult() { xml=loadXMLDoc("report.xml"); xsl=loadXMLDoc("report.xsl"); // code for IE if (window.ActiveXObject) { ex=xml.transformNode(xsl); document.getElementById("example").innerHTML=ex; } // code for Mozilla, Firefox, Opera, etc. else if (document.implementation && document.implementation.createDocument) { xsltProcessor=new XSLTProcessor(); xsltProcessor.importStylesheet(xsl); resultDocument = xsltProcessor.transformToFragment(xml,document); document.getElementById("example").appendChild(resultDocument); } } It works fine for IE and Firefox but Chrome fails at the line: document.getElementById("example").appendChild(resultDocument); Thank you for your help

    Read the article

  • Accesing WinCE ComboBox DroppedDown property (.NET CF 2.0)

    - by PabloG
    I'm implementing custom behavior sub-classing the form controls, but I cannot manage to access the DroppedDown property of the ComboBox. Looking in the help, it's supposed to be supported in CF.NET 2.0: using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Data; using System.Text; using System.Windows.Forms; namespace xCustomControls { public partial class xComboBox : System.Windows.Forms.ComboBox { private ComboBox comboBox1; public xComboBox() { InitializeComponent(); this.KeyDown += new KeyEventHandler(this.KeyDownHandler); } private void KeyDownHandler(object sender, KeyEventArgs e) { // DroppedDown doesn't appear in the IntelliSense of ComboBox. // or this.comboBox1. if (((ComboBox)sender).DroppedDown) // fail! return; switch (e.KeyData) { case Keys.Up: case Keys.Enter: case Keys.Down: e.Handled = true; this.Parent.SelectNextControl((Control)sender, e.KeyData != Keys.Up, true, true, true); ... fails with 'System.Windows.Forms.ComboBox' does not contain a definition for 'DroppedDown' and no extension method 'DroppedDown' accepting a first argument of type 'System.Windows.Forms.ComboBox' could be found How can I access the property? TIA, Pablo

    Read the article

  • Using map/reduce for mapping the properties in a collection

    - by And
    Update: follow-up to MongoDB Get names of all keys in collection. As pointed out by Kristina, one can use Mongodb 's map/reduce to list the keys in a collection: db.things.insert( { type : ['dog', 'cat'] } ); db.things.insert( { egg : ['cat'] } ); db.things.insert( { type : [] }); db.things.insert( { hello : [] } ); mr = db.runCommand({"mapreduce" : "things", "map" : function() { for (var key in this) { emit(key, null); } }, "reduce" : function(key, stuff) { return null; }}) db[mr.result].distinct("_id") //output: [ "_id", "egg", "hello", "type" ] As long as we want to get only the keys located at the first level of depth, this works fine. However, it will fail retrieving those keys that are located at deeper levels. If we add a new record: db.things.insert({foo: {bar: {baaar: true}}}) And we run again the map-reduce +distinct snippet above, we will get: [ "_id", "egg", "foo", "hello", "type" ] But we will not get the bar and the baaar keys, which are nested down in the data structure. The question is: how do I retrieve all keys, no matter their level of depth? Ideally, I would actually like the script to walk down to all level of depth, producing an output such as: ["_id","egg","foo","foo.bar","foo.bar.baaar","hello","type"] Thank you in advance!
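
    The recursion can live inside the map function itself: emit the dotted path for every key and recurse whenever a value is a sub-document. As a quick client-side cross-check (not map/reduce, and the database/collection names here are assumed), a pymongo sketch that produces the same dotted list:

      from pymongo import MongoClient

      def key_paths(doc, prefix=""):
          """Recursively yield dotted key paths for every nested document key."""
          for key, value in doc.items():
              path = key if not prefix else prefix + "." + key
              yield path
              if isinstance(value, dict):
                  for sub_path in key_paths(value, path):
                      yield sub_path

      client = MongoClient()
      keys = set()
      for doc in client.test.things.find():
          keys.update(key_paths(doc))
      print(sorted(keys))
      # e.g. ['_id', 'egg', 'foo', 'foo.bar', 'foo.bar.baaar', 'hello', 'type']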

    Read the article

  • PayPal integration woes: PDT hangs on return to site

    - by Tom
    Hi, I'm implementing PayPal IPN & PDT. After some headache & time at the sandbox, IPN is working well and PDT returns the correct $_GET data. The implementation is as follows: Pass user ID in form to PayPal User buys product and triggers IPN which updates database for given user ID PDT returns transaction ID when user returns to site The return page says "please wait" and repeat-Ajax-checks for the transaction status User is redirected to success/failure page Everything works well, EXCEPT that when using the PayPal ready PHP code for PDT to do a return POST, the page hangs. PayPal waits for a response and the user never gets back to my site. I'm not getting a fail status, just nothing. The funny thing is that once the unknown error occurs, my test domain becomes unresponsive for a short period. The code (PHP): https://www.paypal.com/us/cgi-bin/webscr?cmd=p/xcl/rec/pdt-code-outside If I comment out the POST back, it all works fine. I'm able to pin down the problem to once the code enters the while{} loop. Unfortunately, I'm not experienced enough to write a replacement from scratch for the PayPal code, so would really appreciate any ideas on what might be wrong. The POST back goes to ssl://www.sandbox.paypal.com, and I'm using button code and an authorisation token that have all been created via a sandbox test account. Thanks in advance.
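
    A frequently reported cause of this hang is that PayPal's sample PDT code reads the socket until end-of-file while the server keeps the connection alive, so the read never returns; sending a Connection: close header, or using an HTTP client that respects the response length, usually unblocks it. As an illustration only (the sandbox URL and token below are placeholders), a rough Python sketch of the same PDT post-back:

      import requests
      from urllib.parse import unquote_plus

      PAYPAL_URL = "https://www.sandbox.paypal.com/cgi-bin/webscr"  # live: www.paypal.com
      PDT_TOKEN = "your-identity-token"                             # placeholder

      def verify_pdt(tx):
          """POST the transaction id back to PayPal and parse the PDT reply."""
          resp = requests.post(
              PAYPAL_URL,
              data={"cmd": "_notify-synch", "tx": tx, "at": PDT_TOKEN},
              timeout=30,  # never leave the 'please wait' page hanging forever
          )
          lines = resp.text.strip().splitlines()
          if not lines or lines[0].strip() != "SUCCESS":
              return None  # FAIL or anything unexpected: treat as unverified
          details = {}
          for line in lines[1:]:
              key, _, value = line.partition("=")
              details[key] = unquote_plus(value)
          return details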

    Read the article

  • I need some help cropping an image in PHP (GD)

    - by evan
    http://i.imgur.com/foT9u.jpg Using that image as an example, here's what I need to do: Crop the blue square to have the same proportional ratio as that of the black square. From doing that, I should then be able to resize the blue square to fit into the black square without stretching it - it'll retain its proportions. Note: The blue square must be cropped 'from the center'. The original center should remain the center after the crop (it can't be cropped from the top left, for example). Here's what I'm thinking needs to be done (using the landscape blue square as the example): Figure out the difference between the black square's width and height. Figure out the difference between the blue square's width and height. This should tell me how much to crop the blue square by and with how much of a 'top offset'. Once it's cropped to fit the black square's proportions, it can then be resized. I've been messing around with code similar to: if (BLACK_WIDTH > BLACK_HEIGHT) { $diffHeight = BLACK_WIDTH - BLACK_HEIGHT; $newHeight = $blue_Height - $blue_Height; echo $newHeight; } And using Photoshop to try and get a feel for how this should be done, but it continues to fail. How should I go about doing this? How can I figure out how much to crop by (depending on whether the blue square is landscape or portrait)? How do I then get the offset to retain the blue square's center?
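
    The crop rectangle follows directly from comparing the two aspect ratios; here is a language-agnostic sketch of the arithmetic in Python (in PHP GD the resulting source rectangle would typically be fed to imagecopyresampled to crop and resize in one step):

      def centered_crop(src_w, src_h, target_w, target_h):
          """Largest centered (x, y, w, h) in the source with the target's ratio."""
          target_ratio = float(target_w) / target_h
          if float(src_w) / src_h > target_ratio:
              # Source is too wide: keep the full height, trim the sides.
              crop_h = src_h
              crop_w = int(round(src_h * target_ratio))
          else:
              # Source is too tall: keep the full width, trim top and bottom.
              crop_w = src_w
              crop_h = int(round(src_w / target_ratio))
          x = (src_w - crop_w) // 2
          y = (src_h - crop_h) // 2
          return x, y, crop_w, crop_h

      # e.g. fitting a 1200x800 source into a 400x400 box:
      # centered_crop(1200, 800, 400, 400) -> (200, 0, 800, 800)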

    Read the article

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL2008 merge replication scenario. A large server database, including views and stored procs, being replicated to client machines. What I'm doing: * adding a new table to the database * marking the new table for replication (using SP_AddMergeArticle) * altering a view (which is already part of the replicated content) to include fields from this new table (which is joined to the tables in the existing view). A stored procedure is similarly updated. The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated. Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client. The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail. Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

    Read the article

  • WebRequest using SSL

    - by pm_2
    I have the following code to retrieve a file using FTP (which works fine). FtpWebRequest request = (FtpWebRequest)WebRequest.Create(svrPath); request.KeepAlive = true; request.UsePassive = true; request.UseBinary = true; request.Method = WebRequestMethods.Ftp.DownloadFile; request.Credentials = new NetworkCredential(uname, passw); using (FtpWebResponse response = (FtpWebResponse)request.GetResponse()) using (Stream responseStream = response.GetResponseStream()) using (StreamReader reader = new StreamReader(responseStream)) using (StreamWriter destination = new StreamWriter(destinationFile)) { destination.Write(reader.ReadToEnd()); destination.Flush(); } However, when I try to do this using SSL, I am unable to access the file, as follows: FtpWebRequest request = (FtpWebRequest)WebRequest.Create(svrPath); request.KeepAlive = true; request.UsePassive = true; request.UseBinary = true; // The following line causes the download to fail request.EnableSsl = true; request.Method = WebRequestMethods.Ftp.DownloadFile; request.Credentials = new NetworkCredential(uname, passw); using (FtpWebResponse response = (FtpWebResponse)request.GetResponse()) using (Stream responseStream = response.GetResponseStream()) using (StreamReader reader = new StreamReader(responseStream)) using (StreamWriter destination = new StreamWriter(destinationFile)) { destination.Write(reader.ReadToEnd()); destination.Flush(); } Can anyone tell me why the latter would not work?

    Read the article

  • Why would a variable in Scala code mysteriously become null?

    - by Alex R
    I've isolated the problem down to this: Predef.println("the value of argv1 here is " + argv(1)); var n: $ = undef; n = argv(1); Predef.println("the value of argv1 here is " + argv(1)); Predef.println("the value of n here is " + n); Predef.println("the class of n here is " + n.getClass); Here's the definition of $: class $ { println("constructed a new $ of type: " + this.getClass); def value: $ = this; def toValue: Value = { new ConstStringValue(this.toString()) }; def -(sym: Symbol): $ = { println("looked up: " + sym); this } def -(sym: $): $ = { println("looked up: " + sym); this } def update(sym: Symbol, any: Any) { println("update called: " + sym + "=" + any); } def apply(sym: Symbol) = { this } def apply(obj: $) = { this } def apply() = { this } def +(o:$) = this.toValue.div(o.toValue) def *(o:$) = this.toValue.mul(o.toValue) def >(o:$) = this.toValue.gt(o.toValue) def <(o:$) = this.toValue.lt(o.toValue) def ++() = { this } def -=(o:$) = { this } } When run, the code prints: the value of argv1 here is 10 the value of argv1 here is 10 the value of n here is null java.lang.NullPointerException at test_1_php$.include(_tmp.scala:149) at php.script.main(php.scala:57) at test_1_php.main(_tmp.scala) [...] Why would n mysteriously lose its value (or fail to take one on)?

    Read the article

  • Auto-resolving a hostname in WCF Metadata Publishing

    - by Mike C
    I am running a self-hosted WCF service. In the service configuration, I am using localhost in my BaseAddresses that I hook my endpoints to. When trying to connect to an endpoint using the WCF test client, I have no problem connecting to the endpoint and getting the metadata using the machine's name. The problem that I run into is that the client that is generated from metadata uses localhost in the endpoint URLs it wants to connect to. I'm assuming that this is because localhost is the endpoint URL published by metadata. As a result, any calls to the methods on the service will fail since localhost on the calling machine isn't running the service. What I would like to figure out is if it is possible for the service metadata to publish the proper URL to a client depending on the client who is calling it. For example, if I was requesting the service metadata from a machine on the same network as the server the endpoint should be net.tcp://MYSERVER:1234/MyEndpoint. If I was requesting it from a machine outside the network, the URL should be net.tcp://MYSERVER.mydomain.com:1234/MyEndpoint. And obviously if the client was on the same machine, THEN the URL could be net.tcp://localhost:1234/MyEndpoint. Is this just a flaw in the default IMetadataExchange contract? Is there some reason the metadata needs to publish the information in a non-contextual way? Is there another way I should be configuring my BaseAddresses in order to get the functionality I want? Thanks, Mike

    Read the article
